Illustration by Neema Iyer

For most of my career, I’ve worked in a fundraising and business development capacity, in addition to my primary role: from the early days at Text to Change, then as Regional Director at VOTO Mobile, and more recently as Executive Director at Pollicy. I’ve spent countless hours poring over words, creating budgets that are a balancing act between ambition and reality, and going to pitch meetings and competitions. Sometimes, bleary-eyed at the laptop, I’d realise it was actually a character count and not a word count, and I’d have to delete hundreds of words I’d just written and shrink them into a tiny paragraph that couldn’t possibly get my point across. There is a constant dance of aligning our vision with the diverse requirements of different grants.

But it’s 2023, and things have dramatically changed. With AI writing assistants, tasks that would have taken me hours now take me minutes - tedious, boring things like expanding, shrinking or tailoring text to different requirements. It feels like if I had had these tools when I first started, I would have gone much farther, much faster - dedicating more brainspace to activities that are fun and productive, like research and program implementation. As Pollicy grew, the time I had for the things I felt most passionate about shrank further and further. So, question #1: is it ethical to use AI to support content writing when it comes to fundraising?

AI in Job, Fellowship and Grant Applications

I now have the opportunity to be on the other side of grantmaking - reading job, fellowship and grant applications and proposals, both for Pollicy and for partner organisations, whether through programs I’ve participated in or through my role as an advisor.

However, as someone who uses ChatGPT myself, it’s very easy to spot when it has been used lazily in applications: the bullet-point structuring, the robotic language, the general vagueness of ideas - and most importantly, it’s all so terribly boring to read. The worst are those who can’t even be bothered to proofread and straight copy and paste, like the example below, which we received from an applicant to our fellowship program in early 2023.

A response we received from an applicant in our 2023 Call for Fellows

Undeniably, AI is transforming the entire fundraising world and demanding that we re-evaluate our practices. Broader AI tools like ChatGPT, or more niche ones like Grantable and Grant Assistant, are making it easier than ever to pump out high-quality proposals. ChatGPT reportedly has close to 200 million users globally!

Is AI the great equaliser for fundraising?

AI in proposal writing is levelling the playing field. No longer do nonprofits have to rely heavily on professional grant writers or on networking opportunities at foreign conferences, often out of reach for many. It saves countless hours that staff would spend writing and re-writing the problem and solution sections. No more arbitrary rejection of sound ideas over grammatical errors or typos, especially from non-native speakers, or other factors that disadvantage historically marginalised groups. Most importantly, AI tools allow the redirection of our collective energies towards more impactful and meaningful work - the type of work that makes us come alive and, in the long run, reduces burnout from bureaucratic violence.

Unlikely.

This all brings us to the crux of this piece: how will funders respond to AI-generated grant proposals? Without a doubt, funders will see a surge in applications now that the barrier to entry has dropped considerably, and this requires a rethinking of evaluation strategies to manage the increased volume.

  1. Detection (and Rejection) of AI use: There are already a number of tools on the market that can detect what they call “AI plagiarism”. A quick Google search turns up Copyleaks, ZeroGPT and Originality.ai, which make claims such as “can tell the difference between human-written and AI-written text, and it does so with 99% accuracy for ChatGPT, GPT-4, and Bard”. Funders could go the route of saying “NO AI!” and use these tools to flag and throw out applications. Some may feel this isn’t particularly fair, and it raises the question: where do we draw the line? Is some use okay, for example in brainstorming or changing the length of text, versus copying entire sections? Is it simply a matter of developing better “prompt engineering” skills?
     
  2. Voluntary Disclosure (and Acceptance) of AI use: I recently had a conversation with a friend who works in an international grantmaking organisation, and their team was having an internal discussion on how to address AI use. One suggestion was a simple checkbox at the end of the application to disclose the use of AI in preparing the grant, with no penalty attached to the answer. The reasoning: AI is simply a tool, and penalising its use would be like penalising someone for using a laptop or a pen to put together their proposal.
    Should there be a requirement or an expectation for nonprofits to disclose the use of AI in their proposals? If so, how should this information influence the evaluation process?
     
  3. Contextual or Personalised Questions: Grant applications could (and some already do) ask more reflective questions that require a deep understanding of personal experiences or context that AI tools wouldn’t have access to, such as “Describe your leadership journey over the past 3 years” or “What was the most impactful moment for your organisation in the past year?”. Proposals would then be judged on the team’s ability to work within a general space rather than on specific project ideas, perhaps granting these organisations more freedom through unrestricted funding. 🕯️🕯️🕯️
     
  4. Using AI for AI: There is also the scenario where AI tools are used to review applications from grantees, or even to conduct peer review, which could ultimately lead to humans being omitted from the grantmaking process entirely!

    There has been resistance from some funding institutions, which have expressed reservations about using AI in the peer review process out of concern for its implications for the quality and integrity of review. There are valid apprehensions about the confidentiality of proposals submitted to AI systems for review: such use raises questions about the security of intellectual property and the potential for penalising original ideas, as these systems could inadvertently leak or misuse sensitive information, or feed new ideas into databases without the authors’ consent. One of the more significant concerns is the potential for AI systems to reinforce existing biases. If an AI-generated proposal is reviewed by an AI-assessment platform, there’s a risk that the system might favour content that aligns with its own programming, potentially disadvantaging marginalised groups or underrepresented perspectives.
     

  5. The Lottery System: If a majority of applications at some point become AI-generated, would it then make sense to simply impose a lottery once applications pass a basic eligibility assessment? If we are to focus squarely on “fairness” within grantmaking, randomised selection guarantees it in a landscape where AI levels the writing field (see the sketch after this list). Many funders fund what or whom they already know, and this could open the door to new partners.
     
  6. The Universal Grant Proposal: Inspired by Nonprofit AF, funders could consider advocating for or accepting a universal grant proposal format, i.e. each nonprofit writes one basic (or grand) proposal that can be shopped around to any funder. No more tailoring the same grant proposal to similar-but-different questions from each funder. This would streamline the process for both applicants and evaluators.
     
  7. Video Proposals and Interviews: As an alternative to written applications, funders might accept (and many already do) video submissions, or conduct in-person or video-call interviews after the review stage. This approach adds a human element to the evaluation process, though I’m sure we’re just a few years (months?) away from our AI avatars speaking on our behalf.
  8. Ditch Grant Applications!: We could move away from traditional grant writing as the sole metric of merit altogether; funders could engage more directly with communities and undertake their own thorough research, at their own cost, to understand the impact and effectiveness of applicants in their communities. This is the approach of philanthropists like MacKenzie Scott. In late 2022, Scott announced that her donations since 2019 had totaled more than $14 billion and helped fund around 1,600 nonprofits. Her giving spans a wide spectrum of causes, involves no grant application process, and provides unrestricted funding. You can read more about the approach here.

    Fundi Bots, based out of Uganda, was recently awarded a grant through MacKenzie Scott and, honestly, that’s an amazing choice. The impact of Fundi Bots is clearly evident, and they are truly deserving of funding to continue their great work.

    One issue with this model, where funders seek out whom to fund, is that newer organisations and quieter local initiatives without a social media presence may remain invisible. At Pollicy, we saw this first-hand with organisations working on issues around feminism and religion, where visibility itself was a very real threat.

  9. Participatory Grantmaking: Not a new concept, though it can be costly and time-consuming for grantmakers. It brings the communities meant to benefit from the work of nonprofits into the decision-making process, adding layers of transparency and accountability and ensuring that funding addresses issues that are genuinely important to those communities. It also means organisations can’t just helicopter in, extract data and information, and never return to support those communities.
     
  10. Creative Leadership and Practice: Nonprofits might need to stand out from the crowd by taking creative approaches to their work through different forms of media, partnerships and engagement. If most AI content is a regurgitation of what already exists, then developing ideas that are truly novel and valuable is a way to cut through the noise. Simply re-hashing what ChatGPT spits out, with images created in Midjourney, could mean drowning in the sea of AI-generated content that does little to advance your cause. I’ve written more extensively about a framework for Creative Leadership, which you can find here.
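
On the lottery idea in point 5: the mechanics are simple enough to sketch in code. Below is a minimal illustration in Python; the eligibility criteria, applicant data and award count are hypothetical placeholders, not any funder’s actual process.

```python
import random

def lottery_selection(applications, is_eligible, num_awards, seed=None):
    """Screen for basic eligibility, then draw winners uniformly at
    random - the prose quality of a proposal never enters the picture."""
    eligible = [app for app in applications if is_eligible(app)]
    rng = random.Random(seed)  # a published seed makes the draw reproducible and auditable
    return rng.sample(eligible, k=min(num_awards, len(eligible)))

# Hypothetical example: registration status and a budget cap as the eligibility screen
applications = [
    {"org": "A", "registered": True, "budget": 40_000},
    {"org": "B", "registered": False, "budget": 25_000},
    {"org": "C", "registered": True, "budget": 90_000},
    {"org": "D", "registered": True, "budget": 30_000},
]
passes_screen = lambda a: a["registered"] and a["budget"] <= 50_000
print(lottery_selection(applications, passes_screen, num_awards=2, seed=2023))
```

The publishable seed is the point of the design: anyone can re-run the draw and verify that nothing about a proposal’s writing, human or AI, influenced the outcome.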

Where does that leave us?

I’m not sure. I have lots more to say on fundraising, power dynamics, equity, fairness and so on, but here I’ll simply focus on AI in philanthropy and development aid.

Amara’s law, named for the futurist Roy Amara, holds that people overestimate the impact of technology in the short run and underestimate it in the long run. As almost every sector continues to grapple with the implications of AI, it will be increasingly important to take an informed and balanced approach to it - especially for governments, philanthropy, aid and nonprofits serving vulnerable and marginalised communities.

Illustration by @king_kinya
