I’ve been inspired by a post from Walter Haydock to share my “learning by doing” journey. I usually keep this kind of thing in the background, but here goes. I’ll keep it brief.
I’ve always been a hands-on learner. I was promoted relatively early in my career to leadership positions, ultimately becoming CEO of a small ($10M) market research firm. Along the promotion path, I actively “did” most of the roles in my company. In my leadership role, I had a solid understanding (though not full competence) of most roles in the company, which was really valuable.
Fast forward: that learning-by-doing instinct is still in me. In 2H 2023, I started expanding beyond my cyber security role into privacy, and then AI. I started with a focus on AI risk and governance, leveraging my cyber and compliance background.
But I’m not satisfied just knowing AI at the surface level. While I largely moved beyond my computer engineering background 30 years ago, I still need to “do”, not just “read”. So I quickly pushed deeper than AI governance alone.
🔳How does one build these models?
🔳How do they work?
🔳What the heck are data wrangling and hyperparameters?
I found myself in IBM’s Watson Studio, actually building AI models. I built a couple of prediction models and a categorization model. Along the way, I learned about IBM’s tools for GDPR and CCPA compliance, and about applying Privacy Enhancing Technologies such as Differential Privacy and data anonymization. IBM has amazing training materials available for free, plus access to the tools for hands-on “labs”.
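To make “Differential Privacy” a little less abstract: the core idea is adding calibrated noise to a released statistic so that no single person’s record can be inferred from it. Here’s a toy sketch of the Laplace mechanism on a simple count query, in plain Python/NumPy with made-up data, not the IBM tooling I actually used:

```python
# Toy Laplace mechanism: release a count with differential privacy.
# Plain NumPy illustration with stand-in data -- not IBM's tooling.
import numpy as np

def dp_count(values, epsilon=1.0):
    """Return a noisy count; the sensitivity of a count query is 1."""
    true_count = len(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many (hypothetical) survey respondents opted in?
opted_in = [r for r in range(1000) if r % 3 == 0]   # stand-in records
print(f"True count: {len(opted_in)}, DP count: {dp_count(opted_in):.1f}")
```

Smaller epsilon means more noise and stronger privacy; the trade-off between privacy and accuracy is exactly the kind of thing you only appreciate once you run it on data yourself.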
I learned how feature engineering and data minimization go hand in hand: what’s the minimum data set I can use to get my target outcome? Trimming to that minimum not only helps privacy, it also improves model efficiency.
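For anyone curious what that looks like in practice, here’s a minimal sketch of the idea in Python with scikit-learn rather than Watson Studio; the dataset, column names, and 2-point tolerance are hypothetical, just to show the “smallest feature set that still hits the target” loop:

```python
# Data minimization via feature selection: find the smallest feature set
# that stays close to the full-data baseline. Hypothetical dataset/columns.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score

df = pd.read_csv("customer_survey.csv")      # hypothetical data file
X = df.drop(columns=["churned"])             # candidate features
y = df["churned"]                            # target outcome

baseline = cross_val_score(
    RandomForestClassifier(random_state=0), X, y, cv=5
).mean()

# Try progressively smaller feature sets; keep the smallest one that
# stays within 2 percentage points of the full-data baseline.
for k in range(1, X.shape[1] + 1):
    selector = SelectKBest(mutual_info_classif, k=k).fit(X, y)
    X_min = X[X.columns[selector.get_support()]]
    score = cross_val_score(
        RandomForestClassifier(random_state=0), X_min, y, cv=5
    ).mean()
    if score >= baseline - 0.02:
        print(f"Minimum viable feature set ({k} columns): {list(X_min.columns)}")
        break
```

Every column you can drop is data you never have to collect, secure, or justify under GDPR/CCPA, and a smaller model to train and explain.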
Mind you, I’m not just reading about these techniques; I’m actually doing them, using the tools on the data.
I used another tool called Stack-AI to build a handful of LLM-based models. Privacy-related models, mind you; I needed a relevant use-case. I also dove into Microsoft Copilot to understand the baseline “readiness” a company needs before even considering deployment.
Of course, various LLMs and image-generation AI apps are along for the ride. They’ve become daily companions.
Among the many lessons of this journey, one really drives home the recommendation we consultants so often make about privacy-by-design: so many of the privacy controls related to AI cannot be applied after the fact. They have to be built in up front.
So, no, I’m not an AI designer. That’s not the point. The point is to have hands-on knowledge and use that as my “school”. When I talk with clients about their target use-cases for AI, I know a lot about the data requirements, the model options, the process, and of course the AI governance, cyber security, and privacy considerations. Let the journey continue…