Opinion: AI is driving impact and equity at New York nonprofits

It’s time to look past the hype and embrace the technology


When I speak with nonprofit leaders about artificial intelligence tools such as ChatGPT, Microsoft Copilot or Google Gemini, I’m amazed by the wide range of responses, from Luddite resistance to ecstatic enthusiasm. Neither extreme is right – these are such new technologies that we simply don’t know enough yet.

But I’ll admit I was curious about all the talk, so I went down the AI rabbit hole on my nights and weekends. First, I spent more than 200 hours using these tools, earning a certificate in Prompt Engineering from Vanderbilt and taking an Ethics in AI class. Then I worked with a dozen nonprofits to run a survey of AI usage among 530 New York City nonprofit workers. I can now report two things very confidently:

  1. There is an extremely strong case for every single nonprofit to implement safe, modest, incremental AI tools that drive impact and equity, today.
  2. If you’re wondering whether to allow staff to use ChatGPT or other tools, the point is moot: they already are.

Inappropriate enthusiasm for AI is easy to understand. Some people just genuinely love all things tech: new phones, databases, coding, and IT. But those same people can also be susceptible to snake-oil salesmen, crypto bros, and meme-stock advocates.

Some of the more resistant leaders have heard concerning reports about bias, inaccuracy, or job replacement and are being cautious. And some are reacting logically to the early phase of the technology hype cycle: a new technology is vastly oversold (“AI will fix everything!”), then there is an overreaction (“It’s all a scam!”), and finally a resolution in the middle (“Hey, this is actually pretty useful.”).

The key to moving beyond these extremes is to differentiate between what I’ve come to see as Exponential AI and Incremental AI.

“Exponential AI” represents the big “change everything” projects like self-driving cars, cancer drug discovery, or a robot suicide hotline. Most of these big ideas aren’t actually possible right now. When they are, they require experts, large data sets, and newly built tools. The tech enthusiasts love this stuff. And big companies would love to sell us this stuff. But big projects can be expensive and don’t always deliver the promised results. And AI-specific concerns about accuracy, bias, and job replacement are most significant with big projects that want access to all our most confidential data.

“Incremental AI” has more modest goals, vastly less risk, and can be immediately useful. Its goal is to use low-cost, off-the-shelf products such as ChatGPT or Copilot to help our colleagues eliminate the least favorite parts of their jobs and get more done in service of clients and mission. Implementing Incremental AI is more like making sure our staff have access to Microsoft Excel or Google Docs. Some staff will become sophisticated users, others more elementary - but it’s helpful to all.

The case for the thoughtful use of Incremental AI in every nonprofit is straightforward. 

  • It can make most staff better at their jobs and take away many boring tasks.
  • It seems to drive equity – early research shows that AI tools are most beneficial to workers with less experience and less formal education.
  • It should further drive equity by preventing our next digital divide – research also shows that usage of these tools has so far been concentrated among young white men.
  • If your nonprofit ever does want to consider Exponential AI, such projects will benefit from existing expertise across a broad range of current staff.

With an Incremental rather than an Exponential approach, it becomes cheaper and easier to mitigate the most significant concerns about AI. Imagine allowing our colleagues to get help by asking:

  • “AI, please write a letter of medical necessity for a malnourished 10-year-old boy. Keep it anonymous to comply with HIPAA, and I’ll fill in details later. Read this 200-page PDF from the government so you know exactly how they like it written.”
  • “AI, read this 140-page dissertation my colleague wrote on workforce development. Write the full outline for a 60-minute donor webinar and then draft a series of five social media posts to promote the webinar.”
  • “AI, I’ve never done any program design in seven years of working with kids, but read these other five program proposals and then let me dictate my big ideas, which you will turn into a well-structured proposal I can share with my boss.”

These examples all use non-confidential but specific information to help AI tools do good work quickly and accurately. And they reduce the risk of bias by asking AI to work from our own materials - not random things it might find on the internet.

And while the Exponential versus Incremental divide is fine to talk about in theory, the reality is that a very large number of nonprofit staff are already taking the Incremental approach. At the moment, this is good news and bad news.

In our recent survey of 530 nonprofit staff working at all levels of a dozen New York City organizations, 48% are already using one of the new AI tools - and they’ve found some pretty terrific uses. They are speeding up grant-writing, summarizing long documents, and creating HR role-playing scenarios for new managers. And they are doing these things in ways that make specific sense in the context of their organizations.

That’s the good news.

The bad news is that most New York nonprofits haven’t yet created a basic use policy for AI. So some employees might be entering confidential client information they shouldn’t. And too many of these employees are acting as “secret cyborgs” (a term coined by University of Pennsylvania Professor Ethan Mollick). Because they’re not sure whether using AI will be praised or criticized by colleagues and managers, they are keeping everything they learn to themselves - limiting the power of AI to make a real difference across the organization.

It’s clear what New York’s nonprofits need to do next.

First, every organization needs to quickly create an interim AI use policy, setting clear boundaries on what’s acceptable, while encouraging experimentation within those bounds. The AI genie is out of the bottle – we need placeholder policies right away. 

Such a policy could be as simple as a staff-wide email saying: “AI use and experimentation is encouraged. Never enter private client data. You are responsible for the quality of your work, even if AI helps. All of our other policies about data privacy and presenting accurate work apply here, too. If you share any AI-supported work internally, you must disclose it - but if it is good, you will be praised, not criticized. Don’t share anything AI-created externally if it would be embarrassing to have to disclose that AI helped you.”

Second, all nonprofits must level the playing field by offering one to two hours of basic training, so that staff of all backgrounds can see that these AI tools are remarkably easy to use and not intimidating, so long as the right incremental approach is brought to bear. AI can be a tool that drives equity, but only if we act fast to make sure its use is evenly and fairly distributed.

Third, every workplace needs a peer-learning strategy so that the secret cyborgs at every level of every department become advocates and educators for their colleagues. They are already doing the work. They know your organization better than any outside consultant. We should learn from them.

Two years ago I wrote an opinion piece on this site that launched the PSLF.nyc Campaign, an effort that ultimately helped 70,000 New Yorkers move toward $4.6 billion in student loan forgiveness. I am as excited about the potential of new AI tools to help nonprofit workers as I was about that project.

Nonprofit leaders don’t need to be techno-enthusiasts. But an incremental approach to AI, building on the work that our colleagues are already doing, will help them do even more of what they do best - improving the lives of our fellow New Yorkers.

Rich Leimsider is Entrepreneur-In-Residence at the Fund for the City of New York.