Towards Responsible Humanitarian AI: Guidelines from Research and Practice

AI technologies bring significant opportunities and risks for principled humanitarian action. The breakneck speed of AI innovation has societies struggling to find ways to manage it responsibly, let alone humanitarian organizations trying to determine whether AI can be used safely, responsibly and in accordance with humanitarian mandates.

Based on Signpost AI’s experience developing its own AI tool (see here), interviews conducted with expert stakeholders, and the literature surveyed (see here), we present key considerations for the responsible development and uptake of AI tools in the humanitarian sector.

This is a continual learning journey for Signpost. None of the guidelines below are cast in stone; they are the result of research, consultations and conversations with experts, of lessons, experiences and mistakes from the Signpost AI Information Assistant pilot, and of deep reflection. They are also influenced by the IRC Digital Standards, which provide guidance on best practices for creating, implementing, and managing digital solutions. We present these guidelines to help others in their thinking around AI, so that they can learn from our case and make informed, responsible decisions about how it can be developed and deployed. This document is also an invitation to collaborate and contribute to the ongoing conversation about the ethical and practical implications of AI.

Start from Humanitarian Principles

It is imperative that key decisions about AI follow a set of core guidelines that incorporate humanitarian principles. Generative AI offers great promise for humanitarian efforts, but it is important to avoid a techno-positivist approach in its implementation. Instead, the fundamentals must be revisited. As with any prior technology, we should treat the humanitarian principles of humanity, neutrality, impartiality and independence as the compass that guides our action. These are the key tenets onto which AI-specific principles and commitments can be grafted.

This is what Signpost did. For the Signpost AI initiative, we started with our own adopted humanitarian principles and associated standards and commitments such as “do no harm”, client-centeredness, and accountability. We reviewed existing AI ethics guidelines and principles from humanitarian organizations and the private sector and took a values-driven approach, enhancing our humanitarian mandate with AI-specific considerations such as transparency.

We also thought about how this ethical and responsible AI framework would be operationalized in practice across development activities such as mapping opportunities and risks, creating technical standards and development tools, staffing the relevant skills and expertise, testing and evaluating for quality, governance, and building AI literacy.

It is also important to approach AI projects with humility and a pragmatic, learning mindset. Using fast-moving technologies does not mean one must “move fast and break things”; it requires what Sarah Spencer calls “move slowly and test things.” Signpost’s approach is to move at a responsible pace, aligned with our goal of meeting the highest burden of proof for the safety, efficacy and privacy of the AI tools we create.

Humanitarian fundamentals matter for a reason, irrespective of the technology. They work. And they keep our communities informed and safe. 

Our recommendations for grounding humanitarian principles in your AI efforts are as follows:

  1. As a humanitarian organization, the principle of humanity should be your starting point. Revisit your humanitarian mandate. This will be the foundation of your AI effort.

  2. Study AI specific ethical frameworks. Find what aspects can be added to your own ethical frameworks and principles. Consult with program officers, technical specialists and frontline responders to develop a combined Principles Document. This document should serve as your guiding light for key AI-related decisions, starting with the fundamental question of whether to employ it at all. 

  3. Translate AI concepts into the language of humanitarian action. Humanitarians should be able to discuss and reflect on AI in a grammar that makes sense to them. Make humanitarian values and concerns visible within your technical solutions.

Identify and Understand the Problem

AI discourse is replete with sensational reporting on AI-driven industrial revolutions and with doomerism that predicts the end of humanity. Speculative extrapolation of trend lines and overblown hype litter the landscape of AI discussions.

Signpost’s position was to cut through these existential conversations, drill down to a specific implementation use-case, and explore in practical detail the question of AI in a humanitarian setting. This practical AI thinking approach does not discount possible AI futures entirely, but it brings rigorous empirical evidence and learning to the conversation.

The first step towards this practical thinking is to identify the problem the organization is trying to solve and to understand its contours. Note that this comes before any thinking about AI at all. The starting point is not AI or any other technology, regardless of its promises; the starting point is the humanitarian problem that your organization is trying to solve. This matters because many solutions fail precisely because they solve the wrong problem, ignore context, or ignore the needs of the people they serve. Not understanding the problem risks creating solutions that duplicate existing efforts, remain unused, or even cause harm.

Ask yourself:  

  • What is the humanitarian problem you are facing? 

  • What is the shape of this problem? 

  • What is the need?

  • What is the ecosystem in which you are operating?

  • Who will use this tool? How will they use it?

Once these facets of the problem are identified, ask whether AI can address it ethically and safely, without doing harm. If it can, then under what conditions (when, where, how, and for whom) should it be deployed? If AI is identified as a solution, it is important to ensure that there is a shared understanding across organizational levels of what the technology can and cannot do.

Answers to these questions may not be obvious at the outset. They require an additional understanding of the technological solution itself. How does AI work technically? What are its social, political, environmental and economic realities? Knowing your problem requires due diligence on potential solutions as well.

This problem-identification exercise is not just responsible practice; it also has the benefit of justifying your AI use-case in a high-stakes humanitarian context.

This preliminary exercise is key to a pragmatic, values-driven approach to AI. Data Science and AI projects of the past made the mistake of attempting to get everything right from the beginning. The groundwork questions outlined here, along with this approach, enable us to assess the safety and efficacy of these tools in real time, avoiding unnecessary delays caused by prolonged debates.

Institute Effective, AI-Specific Safeguards against Harmful Output

AI (Large Language Model, or LLM) output is intrinsically connected to the quality of the underlying training data and algorithms. Internet content makes up the bulk of the data that Large Language Models have been trained on. Such data is predominantly in English, and is culturally and epistemologically Western: English accounts for the largest share (roughly 50% or more) of the crawled-web text databases used for LLM training. As a result, LLMs by default assume anglophone and Western contexts.

Because their training data is drawn from the internet, LLMs are inherently trained on biased, stereotypical, misogynistic, discriminatory and harmful content. This also flattens descriptions of people from other parts of the world and fails to represent their diversity, complexity or heterogeneity.

Given that LLMs are optimized for efficiency, performance and cost savings, these models risk exacerbating the above-mentioned tendencies and have no internal mechanism to foreground marginalized populations.

In other words, AI technology in general and Generative AI technology specifically have the potential to do harm with their outputs. LLMs themselves have safeguards against bias and harm, but these cannot be relied upon in sensitive humanitarian contexts. 

This is where understanding AI and how it works can help create a body of knowledge and practical safety mechanisms which manage and mitigate harmful outputs. 

Signpost used several mitigations in developing the AI Information Assistant, including embedding a principles document into the Assistant's system prompts and correcting problematic LLM output through quality testing, red-team testing, and prompt design and engineering. Specific safeguards included:

  1. Signpost AI adopts Ethical and Responsible AI principles grounded in the humanitarian commitment to “do no harm” to ensure quality and safety. These values and principles framed all of the decisions around the Signpost Information Assistant’s development, testing, evaluation and deployment.

  2. Signpost teams had specific frameworks for Quality (see here) and Red-Team (see here) testing and evaluation, all grounded in the idea that the Generative AI agent technology was being vetted for a humanitarian use-case. These teams used the frameworks to carry out rigorous, rapid evaluations confirming that final outputs were above acceptable thresholds. The goal was to reach human-moderator quality in the Assistant’s responses.

  3. Creation and curation of a set of system and local prompts which “train” the Information Assistant to be kind, responsive and helpful, and which prohibit discriminatory and harmful responses.

  4. Each output was checked against Constitutional AI rules, which acted as double-checks to ensure nothing biased or harmful was let through. These rules, which directed agent behavior on client safety and protection, were based on (a) humanitarian values, (b) ethical principles and (c) the Signpost human moderator guidelines handbook.

  5. Creation of localized, vetted Signpost knowledge bases that ground all AI generations, so that outputs are contextually specific to the questions asked.

A responsible humanitarian AI approach requires AI-specific technical guardrails. In the case of Generative AI, this means having specific frameworks and workflows that monitor, test and evaluate its probabilistic outputs. A successful Responsible AI approach asks that you identify issues and problems related to the technology solution, do due diligence on mitigation techniques and implement sustainable humanitarian-specific mechanisms built on such techniques.
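To make this flow concrete, below is a minimal, hypothetical Python sketch of the kind of safeguard pipeline described above: retrieval from a vetted local knowledge base, generation under an embedded principles prompt, and a constitutional-style double-check before anything is released. The rules, prompt text, function names and objects (knowledge_base, llm, rule_checker) are illustrative assumptions, not Signpost's actual implementation.

  CONSTITUTIONAL_RULES = [
      "Do not give medical, legal or asylum advice beyond the vetted articles.",
      "Never request or reveal personally identifying information.",
      "Escalate to a human moderator if the user indicates risk of harm.",
  ]

  SYSTEM_PROMPT = (
      "You are a kind, responsive and helpful humanitarian information assistant. "
      "Answer only from the provided, locally vetted articles. If they do not cover "
      "the question, say so and refer the user to a human moderator."
  )

  def answer(question, knowledge_base, llm, rule_checker):
      # 1. Ground the generation on localized, vetted knowledge-base content.
      articles = knowledge_base.search(question, top_k=3)
      context = "\n\n".join(article.text for article in articles)

      # 2. Generate with the principles document embedded as the system prompt.
      draft = llm.generate(
          system=SYSTEM_PROMPT,
          prompt=f"Articles:\n{context}\n\nQuestion: {question}",
      )

      # 3. Double-check the draft against constitutional-style rules before release.
      for rule in CONSTITUTIONAL_RULES:
          if not rule_checker.passes(rule=rule, text=draft):
              return "I am connecting you with a human moderator for this question."

      return draft

In practice, each of these stages would itself be subject to the quality and red-team evaluations described above; the sketch only shows where the checks sit relative to generation.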


Build Foundations: AI Literacy 

In a world where technological advancements are outpacing regulatory and ethical safeguards, the humanitarian sector cannot afford to be a passive observer. Understanding AI is a prerequisite for shaping its trajectory in ways that serve vulnerable and marginalized communities rather than subjecting them to technologies that could deepen their marginalization. It is, therefore, both a humanitarian responsibility and a moral obligation for organizations to engage with this technology, interrogate its implications, and ensure it is harnessed as a force for good.

A responsible approach to humanitarian AI requires strong foundations made up of the very people who work in the sector. Signpost views AI literacy and its associated upskilling benefits as foundational to such an approach.

Humanitarian organizations must engage with these tools to advocate for their ethical use and ensure they align with humanitarian principles such as humanity, impartiality, and neutrality.

AI is being pushed as a general solution to everything, a substitutionary force that takes over jobs and replaces humans. This makes it vitally important to arm humanitarians with AI and digital-technology knowledge and skills so they can (a) understand AI as a technology, including its capabilities and limitations, (b) interrogate for themselves the suitability of AI for humanitarian use-cases, and (c) see how AI can be used as a complementary tool that augments their own capacities and those of their organizations.

A widespread lack of AI knowledge within an organization can create technical silos and slow down rigorous, systematic evaluation of the technology. Insufficient knowledge may also open up unpredictable vulnerabilities in different parts of the data management and technology stacks.

Consulting experts, reviewing the literature, and reflecting on our own experiences make it clear that we urgently need AI literacy and the infrastructure to spread it. This literacy aims to arm users, humanitarian staff, organizational leaders and stakeholders with the knowledge required to understand how Generative AI works, how it may fit into their daily tasks, its larger context, and its associated risks, benefits and trade-offs.

This requires investment in basic understanding and explainability of the technology, tailored to the various roles and parts of the humanitarian system. User communities, humanitarian staff, technical specialists, C-suite officers and others all have bespoke knowledge needs in understanding (a) AI in general, (b) Generative AI specifically, and (c) implementations of Generative AI and how they might affect their own work. Beyond the technical information, AI literacy entails a deep understanding of the socio-politico-economic contexts in which AI originates and operates; this latter part is especially pertinent to decision-makers.

Organizational AI Literacy requires:

  1. Learning pathways specific to staff’s proximity to AI use. For example, all staff should be educated on fundamental concepts, terminology, capabilities and limitations, and the ethical implications of AI, while a smaller group of staff working with data might require more detailed training on the implications of AI for privacy and protection.

  2. Learning resources. Develop training materials that contextualize AI within humanitarian work. For the more technically minded, maintain accessible repositories of AI resources, examples, and tools.

  3. AI-related information for users. An easy-to-understand repository of text, visual and audio information that explains AI, how the organization is using it, and details on privacy, data protection and consent.

  4. Feedback mechanisms. Organize regular knowledge-sharing sessions, foster communities of practice, and solicit continuous feedback and assessments on how AI is or is not working.

By establishing effective literacy and communication systems, organizations can help employees and leaders to adapt and expand their skills regarding AI-driven processes and tools. 

Assess Risks and Responsibility Rifts

Humanitarians deliberating on and using Generative AI to reach under- and unserved communities should acknowledge the global context in which the technology sits. This is particularly pertinent given the rise in humanitarian need and the shortfall in funding required to meet it.

Generative AI carries risks that require mitigation. It offers cost-effective solutions and scaling opportunities to meet increasing humanitarian information needs in areas of crisis, but with these benefits come crucial trade-offs that must be negotiated and whose effects must be alleviated.

All of these require mapping and assessment. A risk assessment in a humanitarian context should map clients’ needs and risks, highlight the general risks of AI and potential mitigations, and identify the trade-offs associated with AI use. Signpost invested in such an undertaking early, producing a document that explored these questions around Generative AI.

Research on the topic and an internal risk assessment are imperative for any humanitarian organization thinking responsibly about AI. But that is not the only assessment that must be made.

An AI risk lens draws attention to “objective” AI risks, i.e. those over which stakeholders find consensus: alignment on which risks ought to be minimized and how to minimize them. According to Professor Angela Aristidou and researcher Shivaang Sharma, an AI risk lens revolves around mitigating technical or operational threats that can be preemptively identified and addressed, such as training data biases or unintended AI outputs. They point out that such exercises, while invaluable, are inherently technocentric.

A more holistic approach requires an AI Responsibility Rifts (AIRR) Lens. It “uncovers rifts - persistent disagreements among stakeholders about the ethicality and societal impact of an AI tool’s design, implementation, and effects. These dissonances occur because stakeholders—such as developers, users, regulators, and affected communities—experience AI differently, shaped by their unique interactions and social contexts.”

These are subjective disagreements amongst an AI tool’s stakeholders, arising from their differing expectations, values and perceptions of the tool’s impact: a lack of alignment due to minor or major disagreements over aspects of safety, accountability, reliability and equity. A summary of the objective risks associated with risk assessments and the subjective risks associated with Aristidou and Sharma’s AIRR lens is presented below:

While Signpost’s own risk assessment incorporates aspects of AIRR, it aims in the future to conduct a full AIRR mapping based on the SHARE (Safety, Humanity, Accountability, Reliability, Equity) framework. This is to better understand not just stakeholder agreements and disagreements, but also the moral quandaries that underpin AI use.

We encourage organizations to attempt both exercises to obtain visibility over current and future issues that might emerge not just from technical sources but from moral pluralism and sociotechnical divides. 

Develop Guardrails: Data Privacy and Protection

Data has been key to the rapid advancement of artificial intelligence (AI); AI systems are built on the currency of data. Where predictive AI systems have always been data-dependent for their training and development, foundational Generative AI systems have massively increased the volume of data required for model training, much of which cannot be opted out of.

This creates significant concerns, most of which are explored in the Literature Review section.

The need for a more proactive and robust approach to data protection is evident. Effective safeguards must assess what AI solutions are used for, how they impact affected people, what data will be collected and what type of data they rely on, in particular where personal, confidential or sensitive data is involved. To get humanitarian AI right, we need to get data protection right.

Accordingly, it is essential that humanitarian organizations create clear guidelines for implementing AI in the humanitarian context, specifically in the realm of data privacy and protection. It is also critical for humanitarians to develop and operationalize strategies built on data protection and privacy principles. Protecting fundamental humanitarian principles demands a combination of data protection safeguards, existing legal and regulatory frameworks, and proactive action to address AI risks.

Based on Signpost’s experiences with developing strategies and guardrails which uphold the highest standards of data privacy, security and protection, we offer the following recommendations:

  1. Do not reinvent the wheel. There is a view that because AI is new (it is not; it has a long history), there is a scarcity of policy tools or frameworks to deal with it. As Devidal points out, this view is misguided: there are plenty of legal norms, principles and practices that apply to AI, including international human rights and humanitarian law and legislation on data protection, privacy and ethics. These existing tools can be applied before an AI tool is launched, rather than being retrofitted to the workings of AI afterwards.

  2. If your AI projects need data in order to personalize responses to users and enhance service provision, think deeply about your approach to Data Minimization, Purpose Limitation, Consent and Transparency (a minimal data-minimization sketch follows this list).

  3. Outline clearly and share openly what data you are collecting and how you are securing it; specify the rationale and purpose for data collection (in Signpost’s case, “to develop accurate and more efficient AI information assistant which can minimize user request overload especially during crisis and disasters”), your retention policies, and how requests for data deletion and the right to be forgotten are processed.

  4. Enact data retention policies on your infrastructural storage and customer service platforms. Additionally, control access to private data on all platforms based on user roles, ensuring that access is permission-based and limited strictly to those who are authorized.

  5. Prioritize transparency by having clear privacy policies and cookie notices on all websites and technology touch points. Ensure that users understand the terms and conditions of interacting with your services.

  6. Conduct data protection impact assessments and risk assessments to ensure holistic consideration of responsible data concerns.

  7. Secure data during the course of interactions with users. If the service operates on a third-party platform, ensure strict data-sharing agreements are in place to safeguard user privacy.

  8. Utilize team expertise. For Signpost, this meant the Red and Quality Teams worked together to safeguard user privacy. The Red Team, for example, identified and mitigated security vulnerabilities and the potential for discrimination during AI interactions; the Quality Team prioritized user well-being by ensuring that AI interactions were trauma-informed, client-centered, safe and expectation-managed.

  9. Build privacy considerations into the development or uptake of AI tools. Taking a multifaceted approach to AI privacy issues, Signpost implemented privacy-by-design principles:

  • Proactive not Reactive; Preventative not Remedial

  • Privacy as the Default Setting

  • Privacy Embedded into Design

  • Full Functionality – Positive-Sum, not Zero-Sum

  • End-to-End Security – Full Lifecycle Protection

  • Visibility and Transparency – Keep it Open

  • Respect for User Privacy – Keep it User-Centric
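As an illustration of point 2 above, here is a minimal, hypothetical data-minimization sketch in Python: direct identifiers are redacted before a message is stored or passed to a model, nothing is kept without consent, and retention is time-limited. The patterns, function names and retention period are assumptions made for illustration, not Signpost's actual policy or code.

  import re

  # Hypothetical patterns for direct identifiers; a real deployment would need
  # a far more careful, context-specific approach.
  PII_PATTERNS = {
      "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
      "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
  }

  def minimize(message: str) -> str:
      """Strip direct identifiers before a message is stored or sent to an LLM."""
      for label, pattern in PII_PATTERNS.items():
          message = pattern.sub(f"[{label} removed]", message)
      return message

  def store(message: str, consent_given: bool, retention_days: int = 30):
      # Purpose limitation and consent: store nothing without explicit consent.
      if not consent_given:
          return None
      # Privacy as the default: only the minimized text is retained, with an expiry.
      return {"text": minimize(message), "retain_for_days": retention_days}

The design intent mirrors the principles listed above: privacy is the default outcome of the code path, not an optional setting a user has to find.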

Design for Inclusion and Accessibility

In developing AI tools for the humanitarian sector, inclusion and accessibility are fundamental to ensuring that the tool benefits all. Failing to account for diverse user needs, such as individuals with disabilities, speakers of minority languages, or those with limited digital literacy, risks violating privacy, perpetuating exclusion, and widening existing inequalities. 

Signpost prioritized inclusive design principles such as humanitarian engineering and privacy by design, and explored participatory approaches such as Ethics by Design, to address such issues.

Accessibility ensures that AI tools are usable by people with varying abilities, fostering empowerment and independence.  Signpost has tried to make its AI tools accessible by design. We see this as a moral imperative, aligning our technology with the principles of equity and human rights.

Inclusion

Expert stakeholders and the literature highlight the need to work with local communities and research the diverse needs of the people organizations serve prior to the decision to design/deploy AI solutions. A participatory user-centered design is central to making AI tools and their deployments as useful as possible to the communities being served. 

This is a difficult task, and Signpost itself has not yet been able to execute this approach fully. While Signpost did assess the need for an Information Assistant, it had limited resources to facilitate co-design with its communities. Resourcing is one chief limitation of such approaches; others are time and the varying levels of technological understanding within communities, which affect the quality of participation.

That said, true local ownership of AI tools can only come from close collaboration on their design and deployment. In the long term, this is the only sustainable, equitable way forward. We recommend, both for ourselves and for the sector, establishing long-term participatory processes with our communities to ensure their values and needs are encoded into AI systems. This can include co-developing datasets or co-crafting system and local prompts for LLM-based AI systems. Such inclusive equity initiatives require balancing local specificity with scalable solutions, and avoiding trade-offs that sacrifice inclusivity for efficiency.

In the absence of community co-design, there are still steps that can be taken to improve inclusion, as Signpost has done. These include leveraging the expertise of on-the-ground staff, as well as ensuring that language and cultural contextualisation are central to the operation of the AI tool.

Accessibility

Inequality and differences in access to the internet and digital technologies fundamentally inform how AI tools are deployed as well as how they are designed. One issue is that vulnerable people may find it difficult to use an AI tool to seek services because they rely on mobile phones or SIM cards shared across large communities.

Furthermore, studies have found that previous AI chatbots were not useful in areas with low internet access, low smartphone penetration, lower literacy, and older populations. Intentional design is key to solving these accessibility issues. Signpost has designed for them and is testing low-bandwidth solutions, while considering whether multimodal (voice or audio) approaches might be better suited to low-literacy environments.

We advocate for this intentionality in AI tool design. We strongly recommend studying accessibility requirements for different populations and examining digital literacy variations across affected populations to ensure inclusive and accessible technology design and deployment.

Be Transparent; Explain the Technology

Transparency and “explainability” are essential elements of a safe, ethical and responsible approach to AI. While related, these concepts should be looked at separately. 

According to the OECD, AI transparency entails responsible disclosure about AI tools and systems, as well as providing meaningful information, in order to:

  1. inform stakeholders of their interactions with AI systems within programs and in the workplace

  2. provide information that enables those adversely affected by an AI system to challenge its output

Transparency enhances trust and accountability with clients, stakeholders, donors and society at large, fosters collaboration with humanitarian and tech partners, and serves as an accountability yardstick for evaluative purposes.

Explainability, on the other hand, focuses on the ability to provide clear and meaningful explanations for the outputs, predictions, or behaviors of a system. Explainable systems aim to offer insights into how and why they arrived at particular results; this empowers individuals to comprehend the rationale behind the system's actions and decisions. Explainability communicates the inner workings, reasoning and justifications of complex AI systems, making them more open and offering insight into algorithmic processing, outputs, and which data is being used and how.

Explainability has become important for Generative AI in particular, given that most Generative AI systems operate as “black boxes”: significant parts of how answers are generated are hidden from everyone, including experts. Explainability in the age of Generative AI means:

  1. Providing simple, easy-to-understand information on sources of data/inputs and processes (where feasible), and

  2. Fostering a general understanding of AI systems, including their capabilities and limitations.

Benefits of AI explainability in the humanitarian context include improved regulatory and ethical compliance, and the ability for organizations to catch red flags and correct biased, discriminatory or sexist outputs.
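Black-box internals cannot be fully opened up, but the disclosure side of explainability described in point 1 can be made concrete. The following is a minimal, hypothetical Python sketch, not Signpost's implementation, of returning the vetted sources and an explicit AI disclosure alongside each answer so that affected users can see where the answer came from and challenge it.

  from dataclasses import dataclass

  @dataclass
  class ExplainedAnswer:
      text: str                     # the generated answer shown to the user
      sources: list                 # titles or URLs of the vetted articles used
      generated_by_ai: bool = True  # explicit disclosure that AI produced the text

  def with_sources(answer_text, articles):
      # Attach the vetted sources so users (and moderators) can see where the
      # answer came from and can contest it if it looks wrong.
      return ExplainedAnswer(text=answer_text,
                             sources=[article.title for article in articles])
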

You can read about transparency and explainability, their benefits, and the issues that might arise in their absence in our in-depth report here.

Signpost’s AI transparency and explainability efforts were built on existing standards and modes of reporting. These efforts include:

  1. Publishing AI Ethical Principles as well as explaining the decision-making behind them

  2. Openly publishing development updates, research as it took place, best practices as Signpost learned them, and the limitations and setbacks it faced during the development process

  3. Explaining Evaluation/Safeguarding Processes, and Workflows related to AI Safety and Quality Testing

  4. Explaining Data Privacy & Protection Efforts

  5. Open-sourcing AI Code and releasing technical documentation

  6. Being Transparent about Technology and Research Partnerships and Open Collaborations

Based on our experiences, expert feedback and literature surveys on the subject, we highly recommend the following:

  1. Share information about your reflections and experiences with AI: what worked, what did not, and why?

  2. Use clear language, granular choices, and easy opt-out processes

  3. Be open about how you are using AI tools and algorithmic systems. Explain clearly:

    • How AI is being used in each context, the potential risks, and the impact of its tools on people

    • What do you deem safe and secure? Why?

  4. Make sure clear, straightforward information about how AI systems are used is provided to everyone who may be potentially affected. This includes why and when AI is being used, how it operates and how outputs are being generated

  5. Based on contextual AI use, implement clear tracking systems, share information and notify relevant parties about AI use in organizational activities and decisions. Make AI systems publicly available as open-source where suitable

  6. Provide a mapping of how AI systems are evaluated, audited, and assessed

  7. Demonstrate the compliance of your AI systems with regulations such as the EU’s AI Act

Collaborate: AI as a Public Good

Signpost works in sensitive contexts where decision-making needs to be based on reliable, accurate and up-to-date information. Failure to meet this standard can have catastrophic impacts on the communities it serves.

This is why Signpost’s principles and values position its AI offerings as Digital Public Goods, whose workings and outputs are transparent, widely available and subject to external tests of technical reliability.

Signpost extends an open invitation to sector and non-sector partners to review its model, frameworks, data policies, legal compliance and impact assessments, and to trace the information workflow of our AI products for quality and ethical assurance.

If the humanitarian sector is to leverage the power of AI, it must treat AI as a digital public good: the sector’s contributions to it must not only be shared and open, but also transparent and inviting of deeper cooperation and collaboration.

For these reasons, we recommend:

  1. Make public the design, performance and outputs of humanitarian AI models

  2. Offer your tools to everyone. Make them open source and provide documentation so that others can use them effectively for their own use-cases

  3. Reach out to others; combine diverse sets of expertise to manage AI. Foster multilateral partnerships between AI developers, UN agencies, and nonprofits

  4. Use partnerships to define problem statements and find solutions that can be translated across different organizations 

Conclusion

Signpost has the responsibility of providing crucial, potentially life-saving information services to the world’s most vulnerable populations. Its interest in exploring GenAI implementations in the humanitarian sector is not predicated on FOMO or on adopting the newest technological fad, but on opening the conversation about not only whether, but also how, we can ethically and responsibly meet the needs of our communities at scale through this technology.

If your organization’s identified problem includes Generative AI tools as a solution, the question of how arises. This how of AI is a hard question because it requires a meticulous, rigorous approach to AI governance, development, evaluation and deployment built on humanitarian principles and values. 

It is only if we have evidence-based clarity on the “how” that we can approach the “if” question in good faith and ultimately seek resolution to the humanitarian sector’s moral responsibility towards a technology that can potentially scale to help many more people. 

How can an aid agency uphold humanitarian values throughout its development and deployment of AI tools? The guidelines offered here come from our learnings in trying to answer this “how” of AI through a practical AI thinking approach in developing the Signpost AI Information Assistant.

Signpost’s position is that AI, once stripped of the deafening hype surrounding it, whether boosterism or doomerism, can make a tangible, positive difference in the lives of those it serves.

The approach to AI matters. 

First, it requires a careful and specific articulation of your problem and of whether AI can provide a solution. Second, it requires assessing AI use-cases, instituting AI and data governance safeguards, mapping potential impacts on localization, and prioritizing accountability to your communities. Third, it is important to give all stakeholders the time and knowledge to understand what they are interacting with.

Generative AI, unlike prior technologies, widens the scope of potential risks given its long informational supply chain. An ethical and responsible approach to AI is therefore a balance between the benefits the technology offers and the trade-offs it necessitates. None of these trade-offs should come at the expense of humanitarian principles or the communities that the aid sector serves.

Looking forward, there needs to be a sustained effort to develop mechanisms for eliciting productive and actionable community feedback on the technology that communities will come face to face with and which will have a direct impact on them. The development and deployment of AI tools requires a team, and that team includes the community as an equal partner.

Humanitarian AI responsibility must not be reduced to mere check marks or lists of ethical to-dos devoid of material action; it must be operationalized throughout the development process, from the bottom up.

Given continued financial pressures on the aid sector and the increasing capabilities of applied AI, Signpost is exploring ways to reduce the risk and increase the efficiency of its AI information tools, and to develop capabilities for other organizations to leverage Signpost’s publicly available learnings and tools for their program goals.

Building upon the experiences and findings of the pilot, Signpost is looking to develop capacity to support AI orchestration, an AI modality that improves quality while reducing risks in complex task workflows involving educational information delivery. To this end, Signpost is working on improvements to its own knowledge base curation and, alongside this, on creating deterministic LLM-based AI tools that produce predictable, high-quality, and controllable outputs with stronger risk reduction.

As the efforts of Signpost and those of other actors highlight, leveraging AI responsibly, safely and effectively in the humanitarian sector is an ongoing pursuit. We are not there yet but there is no reason why we cannot eventually get there. 
