"AI won't fix bad decisions. It will just make them worse, faster."


Amit Kohli has spent 20 years working at the sharp end of data and technology, across the UN, charities, international NGOs, governments and the private sector. He is a prolific speaker in the data-for-good space and brings a rare combination of strategic thinking and real-world experience of messy data problems to his work with the charity and non-profit sector.
He has described 2026 as his "AI-first year." He also thinks most of the sector is not ready for AI. Both things are true at the same time, and that tension is exactly why we wanted to talk to him ahead of his session at the Hart Square TechSmart AI Masterclass this April.
So how ready is the sector, honestly?
This is where Amit starts, because he thinks it is the question most organisations are not asking clearly enough.
Research from CAST suggests around 70% of charities are already using AI in some form, while fewer than half have a policy in place. MemberWise's digital benchmarking data from the membership sector tells a similar story, with the policy gap even more stark. That means a significant proportion of organisations are using AI without any agreed framework for how, why, or by whom.
For Amit, those numbers are not just a governance problem. They are a signal that most organisations have skipped a more fundamental question: do we actually know how we make decisions?
"AI is not going to fix bad decision-making. It is going to accelerate it." When a human makes a decision that is well-intentioned but wrong, giving them a tool that makes them more efficient does not improve the decision. It compounds it. And without a corrective cycle to check whether the thinking is sound in the first place, the mistakes multiply.
His first questions when he talks to any organisation about AI are always the same: do you have a policy? Does your board understand and endorse it? Have you asked your staff what they are already using? And have you actually asked them how they feel about it?
Most organisations, he finds, have not done any of those things yet.
But budgets are tight, and the pressure to act is real. What should NFP leaders be weighing up?
This is where Amit is deliberately different from a lot of voices in this space. He is not telling organisations to move faster. He wants them to think more carefully before they commit.
There are three risks he believes the sector is underestimating. The first is financial: the large AI foundation-model companies are losing significant money, and nobody has a clear answer yet for how they will make it back. For any organisation making substantial investment decisions based on current pricing and capability, that is a real and largely unexamined exposure.
The second is regulatory. The direction of travel is still genuinely unclear. If you have built AI-dependent processes and regulation shifts significantly, you could find yourself seriously exposed.
The third is the point where politics and technology collide, and this is where the risk profile becomes harder to map. It is not just reputational, though that matters enormously for charities and membership bodies whose stakeholders hold them to a higher standard. It also touches data security, competitive advantage and organisational integrity. Understanding not just what AI tools do, but who owns them, who funds them and what interests sit behind them, is becoming a necessary part of any serious AI decision.
None of this means standing still. It means being careful, having a contingency plan, and in the meantime identifying the win-wins that carry lower risk.
So what does a sensible approach look like in practice?
Start with your people, not your tools. Survey your staff to understand what they are already using. Build policy collaboratively rather than imposing it from above. And if someone is not following the policy, the answer is not punishment. It is curiosity. Why not? What is not working? What would need to change? A no-blame culture can do a lot of work here.
People are using AI right now because they think it helps them do their jobs. That instinct is not the problem. The organisations that will get this right are the ones that channel it rather than suppress it.
Amit is also clear that AI working well looks very different from AI used transactionally. Most people, he observes, treat AI as a series of one-off interactions: help me with this document, now help me with this email, done. There is no continuity, no context, no learning. That is leaving most of the value on the table.
The organisations and individuals getting the most from AI are treating it exactly as you would a staff member, because that is what it is: an entity that learns and gets better if you give it the chance to. That means giving it real context about who you are, what you are trying to achieve, what your values are. And it means closing the loop after every interaction: what did the AI get wrong? What did the human get wrong? What can we do to get it right next time? What did we think was going to happen, and what actually did? It means reviewing outputs, iterating, and thinking of it less as a search engine and more as something you actively develop over time.
What is the one thing you want people to walk away thinking about differently after your session?
He does not hesitate: "How do I make decisions?"
Not which AI tools to use. Not what the strategy should be. The more fundamental question: when my organisation makes a decision, how does that actually happen? Is it documented? Is it examined? Is it honest about what it does and does not take into account?
Get that right, and AI has genuine potential to help you do more of what matters. Get it wrong, and no amount of technology will save you.