AI Webinar Series 1: Key Takeaways
Many non-profits and membership organisations think AI adoption is a future decision. The truth? AI is already here, quietly influencing workflows, shaping decisions and introducing risks, often without leadership even realising it.
Teams are quietly experimenting with tools like chatbots to draft emails, summarise reports and generate ideas. This "shadow AI" can boost productivity, but without oversight it also introduces real risks to member data, compliance and trust.
In Part 1 of our AI series, "Responsible AI from Day One: Governance Lessons for the Sector", one theme became clear: the biggest challenge organisations face today isn't just figuring out where and how to use AI effectively; it's controlling how it's already being used.
AI Adoption Is Moving Faster Than Governance
Across the sector, AI adoption is growing fast. According to the Charity Digital Skills Report, 76% of charities are already using AI, yet only 25% have a governance policy in place. For membership bodies, the gap is even starker: 26% are using AI, yet just 6% have introduced governance.
Without basic guardrails, like clear rules on tools, data use and review processes, AI can quietly embed itself into day-to-day operations, damaging trust, compromising privacy, introducing unintended bias and creating costly errors.
Responsible AI Unlocks Innovation
Some leaders fear governance will slow teams down and create unnecessary red tape. In reality, practical guardrails give teams the confidence to experiment, because there are clear rules on what's permitted, what needs sign-off and what's off-limits.
Clear guardrails also avoid the two-steps-forward, one-step-back pattern of having to redo or rethink work later, which slows things down far more and erodes confidence.
The organisations that see the most success start small: they pick low-risk use cases like summarising reports or drafting routine communications, prove value quickly and scale what works. Good governance doesn't block innovation, it makes it sustainable.
AI Doesn’t Fail Loudly, It Fails Convincingly
AI systems generate outputs based on probability, not verified facts. That means they can produce responses that sound authoritative but are wrong, or that lack the context a human would naturally understand. For organisations communicating with members or providing advice to service users, even small errors can erode trust. Human oversight and clear review processes aren't optional, they're essential. And to keep humans meaningfully in the loop, you need clear guidelines and guardrails that set out who is responsible for what, why and when.
Unchecked AI may produce the right answer 90% of the time, but it's the 10% of mistakes that can erode trust, damage relationships or create costly consequences. And as AI is used for more and more tasks, that exposure compounds, so good governance put in place now will pay dividends in the future.
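To see why the exposure compounds, consider a rough back-of-the-envelope illustration (assuming, for simplicity, that each output is independently correct 90% of the time): the chance that at least one of N outputs contains an error is 1 − 0.9^N. That's roughly 41% after five outputs, 65% after ten and 88% after twenty. A reassuring-sounding accuracy figure quickly becomes a near-certainty of at least one mistake at scale, which is exactly where review processes earn their keep.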
To get a picture of how shadow AI is currently being used, and to help identify candidate use cases, an "AI amnesty" can be helpful: ask staff, with no blame attached, where AI is already in use and where they want to experiment. This reveals opportunities and risks that leadership may not have been aware of.
Governance Should Be Practical, Not Theoretical
Effective AI governance isn't about waiting for a perfect policy; in our experience, that wait is what holds organisations back from taking the first steps. Even small steps make a big difference:
- Appoint an AI lead to coordinate use and answer questions.
- Create a short AI charter defining permitted and restricted uses.
- Introduce board-level oversight, even if it starts with a small executive group.
- Provide mandatory staff training and regular refreshers.
- Review AI use and policy at least quarterly.
The old adage "How do you eat an elephant? One bite at a time" speaks volumes here. Putting an AI charter in place, along the lines of the sketch below, is a good place to begin. From there, each next step becomes easier, and your plan will develop around it.
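As an illustration only (not a template from the webinar), a one-page AI charter might cover:
- Permitted uses, e.g. drafting internal emails or summarising publicly available documents.
- Uses needing sign-off, e.g. member-facing communications or anything touching personal data.
- Off-limits uses, e.g. entering member data into unapproved tools or making automated decisions about individuals.
- Ownership: who the AI lead is and how staff can raise questions.
- Review: how often the charter is revisited, such as the quarterly cycle above.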
The Bottom Line
AI adoption isn't a future challenge, it's already here, quietly shaping workflows and decisions. Putting AI governance in place helps you take control of it. The real opportunity is pairing experimentation with practical governance.
In the age of AI, success won’t go to the fastest adopter, it will go to the organisations that use it responsibly, safely, and strategically.
Join us for Part 2, "AI in Practice: Building the Foundations to Get Started Safely" (26 March), where we explore how NFPs and membership organisations can deliver early value from AI using low-risk, practical use cases, building confidence while keeping oversight intact.