Generative AI and Ripple Effect
Generative AI is a type of artificial intelligence that can create content such as text, images, audio and video based on human input. Generative AI can be a powerful tool to enhance productivity, creativity and innovation, but it also poses ethical and security risks that need to be managed. We have developed the principles and practices below to help us use generative AI in a responsible and secure manner.
What are we doing?
We are exploring the use of generative AI in all teams across the organisation. We are looking at where it can be used in back-office tasks and specific processes, and where it can offer us efficiency gains. We are doing this as part of a carefully monitored three-month trial, after which we will evaluate the results and decide on our next steps. We are excited to see how generative AI can help us improve our productivity and quality of work, and we are committed to following the principles and practices below.
Principles
There are risks in the use of any new technology, and we are aware of many that come with the use of AI, including risks that are not yet fully understood. To help us mitigate these we have put together the following principles and practices. These will guide our use of AI and ensure we keep to our organisational values at all times.
We will adhere to the following principles when using AI in our work:
- Transparent: We are open and honest about our use of AI and the data we collect, process and share. We communicate clearly and accessibly about the purpose, function and limitations of the AI systems we use.
- Fair: We strive to ensure that we use AI systems fairly and without bias. We continuously monitor our use of AI systems to ensure they do not perpetuate or exacerbate unfairness.
- Beneficial: We use AI to advance our charitable objectives and to create positive social and environmental impact. We assess the potential benefits and risks of using AI in each context, and we avoid using AI for purposes that are harmful, discriminatory or contrary to our values.
- People-centred: We use AI to augment the capabilities of people and to enhance dignity, not to replace or harm anyone. We ensure that people have meaningful control over the AI systems we use and that they can provide feedback and seek redress if needed.
Practices
We implement the following practices to operationalise our principles:
- Human oversight: We ensure that human oversight is in place to monitor and evaluate the performance and outputs of the AI systems we use.
- Data governance: We establish clear policies and procedures for the collection, storage, processing and sharing of data that we use for AI purposes.
- AI use: We train our staff and volunteers on how to use AI systems effectively and ethically. We provide them with clear guidelines and tools to help them make informed and responsible decisions when using AI systems.
- AI evaluation: We conduct regular audits and reviews of the AI systems we use to ensure their quality, reliability, fairness and safety.
Review
This policy will be reviewed periodically to ensure that it reflects our current use of generative AI and best practice in the field. We welcome suggestions and comments on how we can improve this and all our policies and practices.