Governance and Ethical Considerations for AI in Nonprofits
As nonprofits begin to adopt AI, upholding ethical standards in transparency, accountability, and fairness is crucial.
AI can unlock new possibilities for nonprofits, improving operational efficiency and scaling impact, but with this power comes the responsibility to ensure ethical oversight and sound governance.
In our recent webinar, Gous Uddin, Head of Consulting for the NFP Sector at Kerv Digital, joined us to share valuable insights from his experience working with a leading international humanitarian charity.
Throughout the session, delegates had the chance to raise questions with Gous and Hart Square's Alan Perestrello. As these questions reflect common challenges across the sector, we've compiled them below with the responses from Gous and Alan, providing a resource for other organisations exploring this topic.
Webinar Q&A
1. What about the higher risk of cyber-attacks with AI adoption?

Alan: Yes, this is an issue and does need to be taken into account. I would put it in the same space as data and security: audits and controls need to be in place for assessing and mitigating this risk, and they should be aligned with your existing IT security policy.
Gous: I would also seek expert advice and explore penetration testing where data is transferred between source and destination systems, i.e. integrations. A very basic illustration of the kind of transport check involved is sketched below.
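Purely by way of illustration, here is a minimal sketch (in Python, using the requests library against a hypothetical endpoint URL) of the sort of basic transport-security check that sits at the very shallow end of what a proper penetration test would cover:

```python
import requests

# Hypothetical integration endpoint, for illustration only.
ENDPOINT = "https://api.example.org/v1/contacts"

def check_transport_security(url: str) -> None:
    """Basic sanity checks on an integration endpoint: valid TLS
    certificate, and no silent fallback to plain HTTP."""
    # requests verifies TLS certificates by default; an invalid
    # certificate raises requests.exceptions.SSLError here.
    response = requests.get(url, timeout=10)
    print(f"TLS handshake OK, status {response.status_code}")

    # Check that the plain-HTTP variant redirects to HTTPS
    # rather than serving data in the clear.
    plain = requests.get(url.replace("https://", "http://", 1),
                         timeout=10, allow_redirects=False)
    location = plain.headers.get("Location", "")
    if plain.status_code in (301, 302, 307, 308) and location.startswith("https://"):
        print("Plain HTTP correctly redirects to HTTPS")
    else:
        print("Warning: endpoint may accept unencrypted traffic")

if __name__ == "__main__":
    check_transport_security(ENDPOINT)
```

A real penetration test goes far beyond checks like this, which is why expert advice is worth seeking.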
2. Our board don't think AI needs particular attention. How do I get buy-in for a policy or agreement on how we might use it?

Alan: You can start by sending them a link to the recording and focusing their attention on the Risks section, making the point that staff will start using AI in some form or fashion, so to a degree the policy work should already be in place to guide and manage this. Doing nothing is not really an option. If the organisation has decided not to do anything formally or officially with AI, a policy needs to be in place stating that, and setting out the consequences of using these tools without approval.

Gous: Conduct a survey across a diverse range of stakeholders (internal and external) to gather views and opinions. This may encourage senior stakeholders to listen and pay attention.

3. I would add another option there of licence cost as a concern, e.g. Copilot.

Alan: Agreed. Licensed products are costly, but in some respects they are at least within your control, i.e. you can manage unintended use by simply not giving out the licences. I am more concerned by unlicensed services, which are harder to manage.
Gous: Costs are always relative to ROI. If the ROI can be measured, then the cost is easier to accept. Remember, ROI doesn't always have to be a monetary value; it can be qualitative in nature, e.g. a better supporter/member experience. A simple worked illustration follows.
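As a minimal sketch (in Python, with entirely invented figures), time saved can be converted into a monetary ROI figure, with qualitative gains noted alongside it:

```python
# Hypothetical figures, for illustration only.
annual_licence_cost = 9_000.0    # e.g. 25 assistant licences
hours_saved_per_week = 30        # across the whole team
staff_hourly_cost = 20.0

annual_saving = hours_saved_per_week * 52 * staff_hourly_cost
roi = (annual_saving - annual_licence_cost) / annual_licence_cost

print(f"Annual saving: £{annual_saving:,.0f}")   # £31,200
print(f"ROI: {roi:.0%}")                         # 247%
# Qualitative returns (e.g. better member experience) are
# tracked separately, alongside the monetary figure.
```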
4. Are there extra controls needed for sensitive data, e.g. when using AI in the healthcare sector?

Alan: Yes, very much so. We did not stress special category data specifically, but this would fall into that category and is a great example of having to take extra care in managing how data is used with AI. There are ways to mitigate and manage this, but they start with an audit of data, the systems allowed and the systems prohibited, and then what those allowed systems can have access to. Defining this, and making it clear in the policy with clear consequences for non-adherence, is the best approach.

Gous: I would add that unstructured data such as documents should be handled with additional care, as it lacks the field-level security controls you would have with structured data held in tables. One simple precaution is illustrated in the sketch below.
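Purely as an illustration (in Python, with simplistic regular expressions that a production system should not rely on), obvious personal identifiers can be stripped from document text before it is shared with an external AI tool:

```python
import re

# Illustrative patterns only; real-world redaction needs far
# more robust tooling than a couple of regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b0\d{2,4}[ -]?\d{3,4}[ -]?\d{3,4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tags
    before the text leaves the organisation."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    doc = "Contact Jo on jo.bloggs@example.org or 020 7946 0321."
    print(redact(doc))
    # Contact Jo on [EMAIL REDACTED] or [PHONE REDACTED].
```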
5. Could you share some examples of really useful ways in which membership bodies have used AI for membership engagement or increased efficiencies? If not in this webinar then another one. Thank you.

Alan: This is a topic we will cover in the next session on AI, on 23rd January. Register here.

6. How can the benefits of AI be presented to organisations with a very 'human'-centred mission, for example mental health charities, and can AI support these organisations effectively?

Alan: I think this depends on the AI-enabled service you are looking to implement, but the overarching advice would be to be transparent about its use, encourage feedback, and continue to communicate your initiatives to stakeholders and service users. Do think about giving them the option not to use AI-enabled services, where that is possible (it may not be).
Gous: I would add: use democratic processes and decision by panel to test and pilot different applications of AI. That way you have consulted with those affected and made decisions that are shared and widely accepted, and that, crucially, don't conflict with your organisation's mission and values.

7. One of our ethical challenges is considering the energy and water consumption of AI and its impact on the environment (this conflicts with our organisational values). Are there ways to adopt AI while keeping it 'green'? Or do we have to wait for the tech to evolve to be more efficient?

Alan: A very valid point. I think this should be part of the broader AI policy, in the sense that there should be alignment across your policies: if your environmental policy commits you to lowering your carbon footprint, then extended use of AI could affect that. An AI platform or provider's environmental credentials then become part of the inclusion or exclusion criteria for approved providers, so there is a link to the AI policy in that sense.
Note that one aspect of AI's increased carbon footprint is the sheer scale of adoption and the processing it requires: as these solutions become more mainstream, their broader carbon footprint increases. It would be very hard for an organisation to assess its individual net carbon impact from the use of AI, but I do think it's a viable variable to assess and manage.
Gous: To add, this is why encouraging and persuading users to adopt known tools (rather than ones they've found themselves) is preferable. It is possible to engage with these providers to obtain information about their environmental impact, as well as their roadmap and direction of travel. Trying to stop users from experimenting with AI tools could lead to an underground usage culture, which will be far less transparent.

8. How did you manage stopping the use of tools that were already in place without disrupting the processes, work and projects that were using them?

Alan: I think this requires a policy that acknowledges that AI may already be in use, and that this may change through regular review points. Like any policy, you should adapt it and allow exceptions where warranted. If the benefits of persisting with your current initiatives are validated, then there is no issue with either building your policy around what you have, or stating that what you are doing could be reviewed and changed.
Having a policy is a good baseline regardless; just accept that it will have to adapt and evolve over time.

9. What approach should we take when management have so many other priorities (e.g. governance and compliance) that they are not investigating how AI could enhance the business?

Alan: I think each organisation is different, but time and resource challenges are always an issue, especially in the NFP space, where the game is 'do more with less' and that does not seem to be getting any easier.
However, I think putting a bit of time aside to address this would pay dividends later in the process. The logic is that AI is becoming an external factor, i.e. one that you are not in control of but that does have an impact on your organisation. I say that because I think the perception is: 'I don't have time to look at this, so I'm not going to look at it, and if I'm not looking at it, it will not happen.'
As these tools are so easy to use, heavily promoted and seen as the future, staff are either consciously or unconsciously using them, so the lack of an agreed policy opens your organisation up to the associated challenges.
One approach could be to have a simple policy stating that the use of AI tools is prohibited, that the policy is under review, and that until it is updated and distributed the formal position is not to use them. The consequence could be that staff react negatively and force a review of the policy. But in some respects, knowing that and being forced to deal with it will help illustrate the scale of what you need to do.
Keep it simple, but do something to start addressing it in some way, would be my suggestion.
Gous: I'd add that many of these AI tools, such as Microsoft's Copilot, deliberately emphasise that the tool is not the 'pilot'. Any policy should therefore make clear that ultimate accountability lies with the user, not with the tool they have used to help author a document or email, for example.

10. If there is one thing you would recommend doing first and urgently in this area, for a medium organisation with no internal IT, what would it be?

Alan: I would look to put a simple policy in place to lock things down a bit, which, per the question and answer above, allows you to start looking at this. I think this is as much a strategic and operational topic as it is an IT one. In fact, I would say to organisations of all shapes and sizes: don't see this as an IT thing. It's a strategic thing; you need technology and IT involved, but they respond and input to the topic rather than drive it.
It's worth saying that Hart Square are here to help with situations like this, so please do reach out if you need some help and are not sure where to start.
Gous: I would add that many non-profits depend heavily on volunteers and philanthropy. Use this to your advantage: form a self-organised AI group of people who aren't necessarily paid to do it but have a passion for it. Think of the various societies that exist in universities, where people volunteer to assist. Also, look into pro-bono services: there are many IT organisations, such as Hart Square and Kerv Digital, who would happily provide some light-touch advisory services. Feel free to get in touch with me!

11. How do you stop the business from wanting to roll out new AI tools without a policy in place? They want to implement tools without putting the governance in place.

Alan: I think this comes down to having a policy in place that states there are consequences for non-compliance. I think the risks (not ignoring the potential rewards) associated with unmanaged AI initiatives justify the metaphorical stick.
Setting the baseline and using it to drive a more managed process around AI adoption is far better than allowing unmanaged activity to happen and then potentially having to deal with the resulting issues and risks. By way of analogy: before the horse can bolt, investing a few hours up front to make sure the lock on the barn door works, that you know who has the keys, and what the process is for putting the horses to stable, is preferable to assuming the horses will do this themselves in a responsible way.
Apologies to anyone with even a vague knowledge of horses for what is invariably a poor reflection of what it takes to actually look after them. I am clearly a city boy.
Gous: I'd add that most tools being rolled out require some form of training to be delivered, and usually a licence or access key to be granted. These are the opportunities to bake a policy (even a light one) into the process of granting access or delivering training, as the sketch below illustrates.
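A minimal sketch (in Python, with invented names and an in-memory store standing in for whatever identity or HR system an organisation actually uses) of gating licence issue on policy acknowledgement:

```python
from datetime import datetime, timezone

# Hypothetical in-memory record of acknowledgements; in practice
# this would live in your identity or HR system.
policy_acknowledgements: dict[str, datetime] = {}

def acknowledge_policy(user_email: str) -> None:
    """Record that a user has read and accepted the AI usage policy."""
    policy_acknowledgements[user_email] = datetime.now(timezone.utc)

def grant_ai_licence(user_email: str) -> bool:
    """Only issue an AI tool licence once the policy is acknowledged."""
    if user_email not in policy_acknowledgements:
        print(f"{user_email}: licence withheld until the AI policy is acknowledged")
        return False
    print(f"{user_email}: licence granted")
    return True

if __name__ == "__main__":
    grant_ai_licence("sam@example.org")    # withheld
    acknowledge_policy("sam@example.org")
    grant_ai_licence("sam@example.org")    # granted
```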
12. In this hyper-fast-moving field, many organisations will be waiting for things to settle down (to plateau). What factors should we consider when deciding the right time to engage? How do we know we're at the plateau?

Alan: I think this is a very valid point, and I can reference the first of our webinars on this topic, on AI strategy, and specifically the AI hype cycle. In my opinion, we have now crested the Peak of Inflated Expectations (hype) and are heading into the Trough of Disillusionment, which will be followed by the Slope of Enlightenment.

It's easy to say, 'well, things are quite fluid, so we are going to hang back and watch how this plays out', and that is a valid option, but do consider that with any new initiative you have to learn what works and what does not before you gain an actual positive benefit. It's human nature that we learn from our mistakes. Making mistakes is actually a good thing, as you gain insights and evolve faster, provided you have the ability to assess your mistakes and learn from them.
I'm not suggesting organisations go out and invest vast amounts of resource in AI initiatives, but I do advocate a balanced-risk approach of trying things out so that you gain the insights that come from failure. By analogy with health and safety: there are risks out there. Avoiding risk is prudent, but assuming you can ignore it when you don't know what the risks are is ignorant.
I think now is a good time to lay the foundations, put in protections and structures to manage the risk, and take managed steps towards developing a response to AI. In doing so, you stay ahead of the curve and learn the lessons along the way.
Gous: I'd add that the fact that major brands like Google and Microsoft have created deployable products (in contrast to the largely theoretical use of AI by computer scientists) is a telling sign of AI maturing into products that can be bought and used. Waiting for AI to plateau could be a long wait, and meanwhile the benefits will be missed by those who have not yet adopted it in some form.