The AI Joke That Backfired: What Heritage Organizations Need to Know About Tech and Trust

I cracked a joke in an email to a longtime volunteer. It bombed. Badly.
The volunteer had emailed a request for their hours tally, worded with some peculiar grammar. I pulled the data, sent it over, and added what I thought was a playful line: “I also had AI compose this email to you ;-).”
The reply came back polite but pointed: “I have no idea what you mean, perhaps my writing style is a joke to you.”
Oof.
I responded immediately with a genuine apology, explaining my intent wasn’t to diminish their value but to make light of the mundane data work. But the damage was done, and the lesson was clear: in preservation work, where volunteers are the lifeblood and respect is currency, even offhand tech references can land like insults.
This wasn’t just about my poor word choice. It revealed something bigger about how AI language intersects with heritage work, volunteer culture, and organizational trust. Here’s what I learned, what the research shows, and what we can all take away.
Why This Hit Different
Volunteers in preservation organizations aren’t just logging hours. They’re memory-keepers, storytellers, and the connective tissue between past and present. When you work with dusty rail cars, locomotives, and artifacts that require hands-on expertise, there’s an inherent tension when digital tools enter the picture.
My joke about AI “composing” the email with the volunteer’s hours accidentally suggested their contribution was reducible to a data point. What I meant as “hey, I let AI handle the boring stuff too” read as “a robot could do what you do.”
In heritage work, that’s not just tone-deaf. It’s existential. These volunteers have specialized knowledge that no algorithm possesses. They can identify a brake valve by touch or explain why a particular restoration technique matters for historical accuracy.
When you joke that AI handled their requests, you’re unintentionally saying: “Your work is fungible. Your expertise is optional.”
No wonder it stung.
What the Research Reveals
This experience sent me down a research rabbit hole. Turns out, nonprofits and heritage organizations are wrestling with AI adoption in ways that amplify these tensions:
Resource and readiness gaps are real. Most nonprofits lack the infrastructure, funding, and strategic clarity to implement AI effectively. A 2024 study by the Center for Effective Philanthropy found only 11% of funders provide support for nonprofits to adopt AI tools. We’re expected to modernize without the means to do it thoughtfully.
Transparency matters enormously to stakeholders. A donor sentiment study surveying 1,006 people found that 86% said transparency around AI use is very or somewhat important. While 82% were familiar with AI and 48% saw fraud detection as a benefit, the message was clear: tell us what you’re doing with these tools.
Adoption is happening fast, but governance lags behind. A UK study of grassroots nonprofits found 78% were already using generative AI tools by mid-2024. However, they flagged major concerns about readiness, oversight, and ethics. We’re implementing before we’re prepared.
Heritage work faces unique ethical challenges. In the cultural sector, AI is being used to digitize manuscripts, reconstruct artifacts, translate endangered languages, and create digital twins of monuments. But these applications raise critical questions: Who owns the digital twin? Who decides what gets preserved or prioritized? Whose voices are centered or erased?
The bottom line: AI has genuine promise for efficiency and capability. But in mission-driven, community-centered work like ours, the “how” and the “who” matter as much as the “what.”
Five Takeaways for Heritage and Preservation Work
Let me get practical. Here’s what my email mishap and the broader research landscape taught me about using AI responsibly in heritage organizations:
1. Position tech as supporting humans, not replacing them
Your volunteers aren’t interchangeable parts. They’re the reason your organization exists. When you mention AI or automation, frame it clearly: “This tool helps me with the administrative grunt work so I have more time to work alongside you, hear your stories, and support the hands-on preservation you do.”
Never let technology language accidentally erase the human contribution. Make the hierarchy explicit: people first, tools second.
2. Be radically transparent about what AI does
If you’re using AI for donor segmentation, volunteer scheduling, or data analysis, say so. Explain what the tool does, what it doesn’t do, and how you maintain oversight.
This isn’t just good practice. It’s what your stakeholders want. Research shows that transparency builds trust, while opacity breeds suspicion. When people understand your tools serve them rather than surveil them, they’re far more receptive.
3. Don’t confuse efficiency with relationship
Yes, 71% of nonprofits using generative AI cite efficiency as the primary benefit. But if you automate everything, you lose what makes preservation work meaningful: personal recognition, authentic relationships, hands-on collaboration, and the stories that can’t be captured in a database.
My joke misfired because it inadvertently suggested efficiency mattered more than the person. In heritage work, that’s exactly backward. The inefficient parts (the conversations, the shared work, the mutual respect) are often the most valuable.
4. Build governance before you build systems
Before implementing AI for volunteer tracking, donor outreach, or collection management, ask hard questions:
- Is our data quality good enough to avoid biased outputs?
- Have we considered how volunteers will perceive this?
- What are the ethical implications of using personal data this way?
- Who makes decisions about what gets automated and what stays human?
Heritage organizations steward more than objects. We steward stories, relationships, and community trust. Even small tech decisions have legacy consequences. A digital catalog that erases a marginalized community’s contribution isn’t just bad data. It’s bad history.
5. Craft your messaging with care
This is where I failed. A throwaway line about “AI composing” the email landed as dismissive because I didn’t consider my audience: long-tenured volunteers in a preservation field where hands-on expertise and legacy matter deeply.
When you discuss AI with your community, try language like: “I’m using a tool that handles the spreadsheet work so I can focus on what matters, supporting you and the preservation work we do together. The tool serves us, not the other way around.”
Align the technology with service and respect. Make clear that automation exists to honor human contribution, not minimize it.
The Bottom Line
I’m grateful for this stumble. That careless line about AI taught me how powerful (and sometimes invisible) our tech language can be, even in a small railroad historical society.
We’re people. We’re memory-keepers. We’re community. Yes, we can and should use cutting-edge tools (AI, donor databases, digitized archives) to make our work more effective. But only if those tools genuinely serve the people doing the heavy lifting.
If your nonprofit, heritage group, or preservation organization is exploring AI, go ahead. But do it with humility, transparency, and deep respect for the volunteers, staff, and community members who make your mission possible.
Because the day your AI joke lands better than your heartfelt “thank you for giving 350 hours this year,” you’ve got things backward. And trust me, I learned that lesson the hard way.