Developing Your AI Strategy for Non-Profits
Past event
Artificial Intelligence promises to revolutionise how we interact with technology. But with the revolution comes a whole host of questions for non-profits.
Carolyn Brown, CIO of the British Medical Association (BMA), joined us to share the strategic approach the BMA took to introducing AI into their organisation.
Throughout the webinar, delegates had the opportunity to put their questions to Carolyn, and knowing that many of these answers will be valuable to others in the NFP sector, we have compiled and detailed the responses below.
If you are looking for more information on how to develop an AI strategy for your non-profit, you can access the full webinar recording by completing the form on this page, or by contacting us at hello@hartsquare.co.uk.
Carolyn Brown: The environmental impact of GenAI processing is ever increasing. The only way to mitigate it is to ensure that alternative sources of energy are progressively adopted, and that AI itself is applied to mitigate environmental impacts in new ways. From a societal perspective, we have a responsibility to question, and validate to the extent possible, any “truths” generated by GenAI. Social media misinformation is a prime example of this.
Carolyn Brown: Each organisation has to OWN its responsible use of AI, but for NFP organisations the focus has to be on efficiencies that allow a bigger return of value to the “customer” (this is also true in for-profit organisations, but there it is more about generating higher profits). Sharing best practice and learnings with other NFPs is something we already do well, and this applies equally to learnings around the responsible use of AI. Share on LinkedIn and other engagement channels.
Carolyn Brown: There are many adoption models, and these have to evolve as people learn, but standard models are offered by the GenAI providers (see the Google Adoption Framework or Microsoft's equivalent, both easily available). Adoption models are also available for targeted contexts (e.g. Azure).
Carolyn Brown: There are many vendors out there offering tools for staff to use (from Google, Microsoft and others), as well as tools embedded within apps (chatbots etc.). Then there are models with intelligent algorithms that come prebuilt, and models that can be built from scratch, as the sketch below illustrates.
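To make the prebuilt-versus-from-scratch distinction concrete, here is a minimal, hypothetical Python sketch. The libraries shown (Hugging Face transformers and scikit-learn) are illustrative assumptions, not tools Carolyn named, and the example data is invented.

```python
# Prebuilt: a ready-made sentiment model used as-is via Hugging Face transformers.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default pretrained model
print(sentiment("Our members loved the new online portal."))

# From scratch: a small classifier trained on your own labelled data with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["loved the event", "the portal keeps crashing",
         "great support from the team", "renewal process was confusing"]
labels = ["positive", "negative", "positive", "negative"]  # hypothetical labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["the webinar was great"]))
```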
Carolyn Brown: AI tools can tell the story in more versatile ways than the limited relational data reports we are used to. Much of this is exploratory and needs some targeted data analytics. Often you don’t know the question you want to ask until you see various “cuts” of the data and the story starts to unfold. Tools will reveal their sources, and sources must be validated wherever potential bias can creep in; where sources cannot be validated, validity is not guaranteed. Within organisations, building out stories can start from small nuclei. Know your sources, or add caveats. The sketch below illustrates this kind of exploratory cutting.
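As an illustration of the exploratory “cuts” Carolyn describes, here is a minimal sketch in Python using pandas. The dataset, column names and groupings are entirely hypothetical.

```python
import pandas as pd

# Hypothetical membership engagement data; columns are illustrative only.
df = pd.DataFrame({
    "region": ["North", "South", "North", "East", "South", "East"],
    "channel": ["email", "event", "event", "email", "email", "event"],
    "engagement_score": [42, 77, 65, 30, 55, 81],
})

# Cut 1: engagement by region -- does geography tell a story?
print(df.groupby("region")["engagement_score"].mean())

# Cut 2: engagement by channel -- or is the channel the driver?
print(df.groupby("channel")["engagement_score"].mean())

# Cut 3: both at once -- often the real question only emerges here.
print(df.pivot_table(index="region", columns="channel",
                     values="engagement_score", aggfunc="mean"))
```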
Carolyn Brown: This generally boils down to the ethical use of AI, balanced with the need for efficiencies that make the most of income for the benefit of the organisation’s “customers”. A few recommended steps: create a Charter for AI use that expresses the NFP’s intentions and responsibilities; provide awareness and links to learning tools; and have staff sign up to ethical and responsible use before they are able to use the tools.
Carolyn Brown: The best way to approach this is to show people what the tools can do and ask them to offer their own use cases (involve representatives from each line of business). There are many possibilities, but being specific to the end-user context makes the tool more relevant.