AI-READINESS – Three pillars for de-risking AI success
By Steve Ackland, CEO of Aim Ltd
This article is extracted from an Aim presentation entitled “How do we get ready for AI-readiness?”, delivered at a RegTech convention in 2025 for international business leaders.
AI is currently smack bang in the middle of the hype cycle that nearly every new IT development goes through.
AI is the latest shiny new toy with enormous potential, seized upon by business users as a welcome solution to their problems and by organisation leaders with the mindset of ‘we need to do something or we will get left behind’. Everybody wants to use it in some capacity: to reduce the overhead of wading through ever-growing mountains of digital documents and records, to free up people, and to create both value and insight.
The problem that has been creeping in over recent months and years, though, is confidence and trust in AI agent results. Are they accurate? Are they repeatable (do we get the same answer from the same prompt)? And how do we stop them hallucinating (making up answers when an agent cannot find suitable source material)? There is also a risk that many of the self-build tools and point solutions offering AI capability, which have proliferated and been released publicly and quickly on the wave of this hype, are unlikely to pass an organisation’s enterprise-grade criteria.
AI Accuracy
So, what is the story on current AI accuracy?
The technology will undoubtedly improve in the coming months and years, but anyone who currently uses AI agents will have experienced accuracy and hallucination problems. There have been recent high-profile examples in the news of leading global consultancies being caught out badly because key reports commissioned by customers were produced with AI and contained all sorts of errors and made-up source references. Each time this happens it creates a degree of disillusionment and reduces trust in the technology. In the consultancies’ case, we understand the failure was down to inadequate-quality data, misplaced confidence and over-inflated expectations of their AI agents, and there was clearly no post-validation governance of outputs. When such reports are billed at six- or even seven-figure fees, customers are understandably unhappy.
As someone with a maths, science and risk management background who has brought that knowledge into tech leadership, I have been working with and training neural networks (the foundations of AI agents) for almost 20 years. Although the sophistication has improved immeasurably over that time, the fundamentals of how artificial intelligence works and how machines learn remain largely the same. Those embarking on the journey should therefore be aware of how neural networks and their algorithms operate, so they can understand the strengths and weaknesses of the technology, set realistic expectations and put in place suitably adapted governance guardrails.
AI Agents: Confidence and Trust
It is important to understand that neural networks operate on both stochastic and deterministic principles. A stochastic approach enables ‘hill-climbing’-style exploration of the data, creatively searching for statistically optimal solutions rather than settling for merely ‘local’ ones, whereas a deterministic approach aims to ensure that the same inputs always produce the same outputs. AI agents try to balance the two, currently with varying degrees of success (this will improve over time): users want creativity, but they also want predictability. Organisations therefore need to be aware of these constraints and have the necessary governance controls in place, both within and outside the workings of the AI agent.
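To make that trade-off concrete, here is a minimal, self-contained Python sketch (illustrative only, not from the presentation): a deterministic hill climb always returns the same peak for the same starting point, while a stochastic variant with random restarts usually finds a better-than-local peak, but is only repeatable if its random seed is pinned.

```python
import math
import random

def objective(x: float) -> float:
    # A bumpy one-dimensional landscape with several local peaks.
    return math.sin(5 * x) + 0.5 * math.sin(17 * x) - 0.1 * (x - 2) ** 2

def hill_climb(start: float, step: float = 0.01, iters: int = 2000) -> float:
    """Deterministic hill climbing: the same start always yields the same peak."""
    x = start
    for _ in range(iters):
        best = max((x - step, x, x + step), key=objective)
        if best == x:
            break  # no neighbour is better: stuck on a (possibly local) peak
        x = best
    return x

def stochastic_search(seed: int, restarts: int = 50) -> float:
    """Stochastic variant: random restarts explore the landscape widely and
    usually find a better-than-local peak, but results vary run to run
    unless the random seed is pinned (the repeatability trade-off)."""
    rng = random.Random(seed)  # fixing the seed restores repeatability
    starts = (rng.uniform(0.0, 4.0) for _ in range(restarts))
    return max((hill_climb(s) for s in starts), key=objective)

print("deterministic:", round(objective(hill_climb(0.5)), 3))        # always identical
print("stochastic:   ", round(objective(stochastic_search(42)), 3))  # typically higher
```

This is the same tension agents face: exploration buys quality, determinism buys repeatability, and governance has to decide how much of each a given task can tolerate.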
As mentioned, confidence, trust and the avoidance of inflated expectations are key, and indeed should be leading metrics on the dashboard when assessing the success of AI agents. But as I and other commentators on this subject regularly point out, it doesn’t start and end with the AI agents; anyone who thinks it does will be eternally disappointed. To avoid ‘garbage in, garbage out’ results, it is critical that foundational work is completed first, to help set realistic goals and remove over-confidence and exaggerated expectations. We call this foundational work AI-readiness and, as suggested above, it was a key factor missing in the consultancies example.
So, how do we address AI-readiness? We have identified three fundamental pillars that help de-risk AI adoption and deliver far greater confidence in the outcomes from the technology, now and in the future.
First Pillar: Data Governance
The first pillar, and the leading piece of critical foundational work, is data governance. Gartner suggested that in 2025 almost two thirds of organisations recognised their data was not AI-ready. Putting a data governance framework in place to manage this most important resource is therefore vital, not only for supporting the use of AI but for good data-led business management generally.
If confidence and trust are lost because data is not AI-ready, it becomes difficult or impossible for an organisation to deploy AI agents operationally, and it runs the risk of bad outcomes, which in turn reduces the return on investment from AI projects or in some cases stops them altogether. Industry analysts estimate that around 52% of AI projects each year are cancelled or rated outright failures, and a high proportion of those failures are due to data issues.
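By way of illustration, a data governance framework can automate simple ‘AI-readiness’ gates before any records are handed to an agent. The field names and thresholds below are hypothetical assumptions, not taken from any specific framework:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical thresholds; a real framework would set these per dataset.
REQUIRED_FIELDS = {"id", "title", "body", "updated_at"}
MAX_AGE = timedelta(days=365)   # records older than this count as stale
MAX_FLAGGED_RATIO = 0.05        # tolerate at most 5% problematic records

def record_issues(record: dict) -> list[str]:
    """Return the data-quality issues found in one document record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    updated = record.get("updated_at")  # expected to be a timezone-aware datetime
    if isinstance(updated, datetime) and datetime.now(timezone.utc) - updated > MAX_AGE:
        issues.append("stale: not updated within the last year")
    return issues

def dataset_is_ai_ready(records: list[dict]) -> bool:
    """Pass only if the share of flagged records stays under the threshold."""
    flagged = sum(1 for r in records if record_issues(r))
    return flagged / max(len(records), 1) <= MAX_FLAGGED_RATIO
```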
Second Pillar: System Governance
The second pillar is system governance. No surprises here: an organisation must not overlook the importance of business and systems analysis when establishing the process, the user audience and the expected outcomes of AI. Yet with new and exciting AI, organisations often do, with business users throwing caution to the wind in the rush to use it, assuming “it will just work”, without first seeking advice from their CIO, CTO or Head of Data.
These business and systems analysis methodologies and tools have stood the test of time for ensuring IT solutions are selected or designed, built and implemented into an organisation with minimal risk.
As developers of the DataEstate platform, we have invested heavily in functionality and features, but just as importantly in non-functional requirements, to ensure the platform satisfies an organisation’s enterprise-grade criteria: aspects such as cyber security and privacy, scalability, reliability, sustainability, hosting infrastructure, capacity and speed, and anti-bias and anti-hallucination controls. This includes determining the accuracy of results against thresholds using various algorithmic methods, the machine equivalent of a second opinion, before results are passed for human validation.
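One way to sketch that ‘machine second opinion’ is a self-consistency check: ask for the same answer several times, measure agreement, and release only results that clear a threshold, routing the rest to a human. The function, parameters and threshold below are illustrative assumptions, not DataEstate’s actual implementation:

```python
from collections import Counter
from typing import Callable

def consensus_answer(
    ask: Callable[[str], str],  # hypothetical agent call; any LLM client would do
    prompt: str,
    samples: int = 5,
    threshold: float = 0.8,
) -> tuple[str, bool]:
    """Query the agent several times and measure agreement.

    Returns (majority answer, passed). `passed` is True only when the
    majority answer reaches the agreement threshold; anything below that
    is flagged for human validation rather than released automatically.
    """
    answers = [ask(prompt) for _ in range(samples)]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / samples >= threshold
```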
Leading on from the potential risks of self-build and point-solution AI agents, and the importance of non-functional design, ‘buy not build’ is generally the sounder approach. A reputable vendor offering an enterprise-grade AI agent platform will already have assured security and the non-functional requirements by design, because its software will be regularly audited for vulnerabilities by technical authorities such as the National Cyber Security Centre (NCSC). By extension, consideration should be given, where possible, to enterprise framework platforms that offer a range of AI capabilities and/or can orchestrate native and non-native tools to provide a single go-to AI stack, democratising usage across the organisation. This is an approach for CTOs, CIOs or enterprise architects to advise upon.
Third Pillar: Managing AI Agents like Employees
The third pillar is how we manage AI agents. This pillar will certainly be up for debate and give rise to differing opinions, but our view is that if we expect machines increasingly to think like people, then perhaps we should start managing them, to some extent, in a similar way: monitoring confidence in their abilities, accuracy and reliability, just as we would for a human employee. AI agents could be onboarded like their human counterparts: assigned a line manager, ‘joining’ on a three-month probation period, and given objectives that the line manager reviews regularly. They could also be given a training and development plan, and even be ‘promoted’ (for example, from operational to strategic status) if they prove themselves. I suppose the Human Resources department would then become the Human Resources (HR) and AI Resources (AR) department!
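Taken literally, managing an agent like an employee implies keeping a performance record per agent. A hypothetical scorecard, with invented field names and thresholds purely for illustration, might look like this:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AgentScorecard:
    """Hypothetical per-agent record, mirroring an employee file."""
    name: str
    line_manager: str
    start_date: date
    task_results: list[bool] = field(default_factory=list)  # True = passed human validation

    def record_task(self, passed_validation: bool) -> None:
        self.task_results.append(passed_validation)

    @property
    def accuracy(self) -> float:
        return sum(self.task_results) / max(len(self.task_results), 1)

    def passes_probation(self, today: date, min_accuracy: float = 0.95) -> bool:
        """Clear probation only after three months and a strong track record."""
        served = today - self.start_date >= timedelta(days=90)
        return served and self.accuracy >= min_accuracy
```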
Interestingly, recent reports suggest that some of the larger consultancies now categorise AI agents as ‘employees’ and include them in their headcount. So perhaps this practical response to how AI agents are managed suggests that, for some organisations at least, the debate has already been decided!
If you would like to read our whitepaper on creating a data governance framework for data-led organisations without any major reorganisation, contact us here for a free copy.



