Hey all, I know a lot of you are probably starting to think about what your nonprofit's AI policy should be, so I went ahead and created a policy template that might help. This AI policy assumes that people will use AI for all kinds of things, but it keeps your team members as the experts, with AI there to supplement and support their work.
Please see the suggested policy template below. NOTE: this is not a legal document and I am not a lawyer. :)
AI Usage Policy for [Your Nonprofit]
Purpose
At [Nonprofit Name], we believe AI can help us work smarter, faster, and more creatively. AI tools are here to supplement our work, not replace it. Our expertise, judgment, and voice are what make our work meaningful — AI simply helps us get there more efficiently.
Guiding Principles
Humans Lead, AI Supports
Staff are expected to provide the critical thinking, strategy, and expertise.
AI can help with brainstorming, drafting, research, and execution — but final decisions, accuracy, and voice always come from us.
Always Review & Refine
All AI-generated content must be fact-checked, edited, and aligned with our nonprofit’s brand, values, and tone of voice before being shared externally.
Staff, not AI, are accountable for the accuracy and integrity of any final product.
Protect Privacy & Confidentiality
Never input donor data, client information, financial records, or other personally identifiable or confidential information into AI tools unless expressly authorized.
Protect intellectual property — don’t upload internal or proprietary documents into AI systems without approval.
Acceptable Uses
Idea generation, outlines, and brainstorming.
Drafting text for blogs, emails, social posts, or reports (with review).
Research support (with fact-checking).
Content repurposing (summaries, captions, etc.).
Administrative support (task lists, templates, formatting).
Prohibited Uses
Using AI to produce final, unedited content.
Passing off AI work as original research or lived experience.
Uploading sensitive donor, client, or staff data without approval.
Relying on AI for strategic or ethical decisions.
Governance & Accountability
Each staff member is responsible for the accuracy and quality of their work, regardless of how much AI was used in creating it.
An internal AI Lead will serve as a resource for staff, track tool updates, and provide training to ensure safe and effective use.
Everyone should be using AI in some capacity to accelerate their work — but always within these guardrails.
Our Philosophy
AI makes us faster and more creative, but it doesn’t replace the heart of our work. The impact we have comes from our people — their judgment, relationships, and passion. AI is simply a tool to help us do more good, more efficiently.
Policy Quick Reference Guide:
Using AI at [Your Nonprofit]
AI is here to help us — not replace us.
Use it to work faster, spark ideas, and streamline tasks. Final decisions and quality always come from you.
DO ✔️
Use AI for brainstorming, outlines, and drafts.
Let AI help with summaries, captions, and content repurposing.
Fact-check everything AI gives you.
Edit/rewrite so it matches our voice and tone.
Use AI to save time and increase creativity.
Ask our AI Lead if you’re unsure about a tool or best practice.
DON’T ❌
Don’t share donor, client, or personal data with AI tools.
Don’t upload confidential or proprietary docs unless approved.
Don’t publish or send unedited AI content.
Don’t rely on AI alone for final answers, research, or strategy.
Don’t let AI replace your expertise, judgment, or relationships.
Remember
Humans lead, AI supports.
You are responsible for accuracy and integrity.
Everyone uses AI in some way to move faster and do more good.