How should leaders make their organisations AI-ready? This is a question I have been asked often, whether in my current role as a director of an AI consultancy firm, MantisNLP, or in my previous role as CEO of a global data and AI non-profit, data.org, and even before that in the pre-Cambrian days before ChatGPT, when I was the Head of Wellcome Data Labs, an AI team embedded in one of the world’s biggest funding organisations.
For me, there are five rules you need to follow to successfully introduce AI into your organisation in a way that actually drives value, rather than hype, and none of them is about the shiny AI products themselves:
- Focus on Digital and Data before you focus on AI
- Be flexible by delegating authority, but set clear guardrails
- Consider social elements of AI as much as you do the technical
- Don’t make ethics an afterthought
- Make sure people get the training they need
Let’s consider them in turn.
Rule 1 – Get your foundations in order
The first rule of making your business AI-ready is – maybe counterintuitively! – not to focus too much, too early, on the AI products themselves. AI is changing so fast that over-focusing on the technology inevitably means you will always be a step or two behind. Instead, focus first on your digital and data basics.
Have you set up your business processes so that high-quality data is collected, flows freely, and supports decision-making? Have you set up your digital infrastructure for effectiveness and efficiency, and removed analogue blockages, such as key systems lacking an integrated digital view of all your customer touch points, or indeed of your own staff?
AI is only as good as the data it is trained on and the systems that make that data available to it in a timely manner. Therefore, digital and data transformation, if done correctly, will help you make the most of AI, whatever AI product you choose.
Rule 2 – Flexibility is key
The second rule of making your business AI-ready is flexibility. The frontline teams introducing AI products – whether in the back office to make your HR, finance or other processes more efficient, or right at the heart of your delivery, front of shop, dealing with your customers – need to be flexible in how they implement AI and take account of its fast-changing reality.
To be flexible, those teams dealing with AI need to feel supported and empowered by their managers, the executives, and ultimately the Board. In practice, that means the Board needs to set a clear direction and guardrails for the implementation of AI: what can and cannot be done. Those guardrails need to reflect the organisation’s strategy and culture. And within the space where AI can be introduced, the Board and the executive team need to delegate authority. The closer that authority sits to the people making day-to-day decisions about the technology, the more flexible and nimble those teams can be, and the more likely your AI projects are to succeed.
This is easier said than done. It usually takes strong internal AI champions at middle to upper management level who can speak both up to the Board and down to the front-line staff, building the trust to try things and generate momentum while sticking within the pre-agreed guardrails.
Rule 3 – AI is a sociotechnology
The third rule in making your organisation AI-ready is to remember that AI is not just a technology. It’s a sociotechnology. The social element is encoded in the data AI is trained on. The social element is also encoded in the outputs of AI, how it changes business processes and how people interact with it. And that element is as important as the technical aspects of any AI product.
That means it is important to understand how humans react to AI products. For example, if you introduce an AI product into a highly expert organization – say, to take an example from Mantis’ own case studies, a grant-making organization like Wellcome, FIND or the Heritage Lottery Trust, which uses experts to assess grant and funding applications – you need to understand that the experts within your organization can be your greatest allies in the introduction of AI, or its greatest opponents.
If you do not include those experts, both in designing the AI implementation and then in working alongside the AI as humans in the loop – checking that its outputs are correct, reducing hallucinations, improving quality all the time – the project is unlikely to be successful. Excluded internal staff will feel threatened and disrespected rather than empowered by the technology. You will simply not have the buy-in from those teams to implement successfully: every small issue will be highlighted, and every mistake by the AI will become a reason to stop the project rather than a training opportunity that improves the final product.
So, treat AI as a sociotechnology and account for legitimate human interests, worries and concerns from the beginning, for example by doing extensive user research and setting up interdisciplinary AI governance groups with members drawn from different parts of the organization. This will set you up for success and turn potential opponents and critics into internal champions for your project.
Rule 4 – Responsible Deployment
The fourth rule for making your organization and business AI-ready is that you must implement AI ethically and responsibly. This is not just a question of morality. It is business critical in an environment where any failure – a security breach, mishandled data, or bad advice given to a customer by a rogue chatbot – could attract regulatory censure, or be picked up by the media and damage your organization’s reputation. And that is aside from the worse outcome: poorly executed AI could cause harm to your customers or staff, the very people it is your organisation’s mission to serve.
For this reason, thinking through what could go wrong before implementing any AI product is hugely important. At Mantis, we have developed a methodology for AI ethics reviews, typically conducted by a mixed-skills team of business stakeholders, including the technologists. Its goal is to work through the intended use cases for the AI product under consideration, and the areas where it could go wrong.
Typically, we ask: what are the edge cases where the AI product is under stress and does not perform as well as it should? What are its potential misuses? And what are its potential abuses, where customers intentionally use the product in ways it was not designed for? From the start of Mantis, we recognised that encoding ethics and responsible use from the get-go in every AI implementation is a must-have for any organization. You can read more about our methodology here.
Rule 5 – Upskilling at all levels
And the fifth and final rule for making your organization AI-ready is that a successful AI implementation takes cooperation and know-how from all levels of the organization. Starting from the very top, it is really important that the Board and the executive team are trained and upskilled in asking the right questions and setting the correct strategy, including the guardrails we mentioned earlier.
In middle management, where budgets are likely to be spent on the purchase of AI products, managers need the skill set to understand the difference between buying commoditized AI and building bespoke, and the pluses and minuses of each, so they can deal effectively with the consultants and other contractors they will be hiring.
And finally, at the implementation level, the technologists in your team need the skills to assess whether the AI products being introduced are doing what they are supposed to do. The ability to audit the output of AI products, and to build in checks for data security, hallucinations, and bias, are all key new skill sets.
Therefore, at every layer of the organization you need a training and skills development plan: for the technologists, for the managers, and for the executives and the Board. Only when each is upskilled appropriately will AI be used effectively in that organization.
These are my five key rules for leaders who want to develop an AI-ready business or organization. I would love to hear what you would add, remove or change, so please get in touch!