Artificial Intelligence is a huge competitive advantage if done right.
It is also a double-edged sword: there are real risks to manage.
What if “rogue” AI efforts inside your company cause major problems, internally, externally, or even with regulators? What if someone pastes company data into ChatGPT or another LLM without realising they are making proprietary data available to others? Worse, what if they don’t realise they may be violating privacy laws by doing so? (See here for an excellent article on GenAI.)
Artificial Intelligence efforts inside any company require a Responsible AI policy. It should be built with the help of key stakeholders and written so that every employee understands it clearly and has a safe way to use AI, whether to drive business impact or simply to make their work life more efficient.
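To make “safe use” concrete, a policy can be backed by simple technical guardrails. Below is a minimal sketch, in Python, of one such guardrail: redacting obviously sensitive patterns before a prompt ever leaves the company. The patterns and the send_to_llm() placeholder are illustrative assumptions, not any vendor’s real API; a production setup would rely on a vetted DLP service and an approved LLM endpoint.

```python
import re

# Illustrative patterns only; a real policy would cover far more (names,
# customer IDs, source code, contract text, etc.) via a proper DLP service.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known sensitive pattern with a tag."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

def send_to_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a company-approved endpoint.
    print(f"Sending sanitised prompt: {prompt}")
    return "ok"

if __name__ == "__main__":
    raw = "Summarise this: contact jane.doe@acme.com, key sk-AbC123xYz4567890."
    send_to_llm(redact(raw))
```

The design point is that the guardrail sits between the employee and the model, so safe use does not depend on every individual remembering the policy.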
A framework needs to be developed, often by the Chief Information Officer or Chief Data Officer. Executives then need to evangelise it so that all employees know how to access it and what it means for them.
AI can have a huge impact, positive or negative. To enable the positive and mitigate the risk of the negative, the company’s executive team needs to declare that a Responsible AI framework will be created, clearly identify who will own it, and then properly fund and execute the plan.
Feel free to ping me for examples of how this can work.