The Wenner-Gren Foundation Generative AI Policy

The Wenner-Gren Foundation Board, Advisory Council, and staff collaborated to develop this policy in the fall of 2024.

March 19, 2025

Preamble

Our mission guides us in everything we do. The Foundation advances anthropological knowledge, amplifies the impact of anthropology, addresses the precarity of anthropology and anthropologists, and fosters an inclusive vision of the field. Generative AI has the potential to provide powerful tools for achieving these goals. But used unwisely, it could also hamper our efforts to serve those who depend on the Foundation for support. Generative AI is a novel tool and a profoundly complicated technology. Many are still grappling with its dramatic impact on the environment, labor, intellectual property, privacy, pedagogy, and much more.

For the Foundation, Generative AI raises a distinct set of concerns. The soul of anthropology arguably lies in our discipline’s ability to draw broad insights from singular experiences: of a site, of a species, of a group of people with their own distinct histories and dreams. Anthropologists bring their unique backgrounds as scholars and citizens of the world to bear in their research; this is what allows them to produce knowledge that is truly new. This kind of knowledge production takes time and effort: the process is as important as the product. When we write, we identify with our subjects and our readers, cultivating habits of empathy. When we write, we develop a more nuanced and precise understanding of the problem we’ve set out to explore. Generative AI can short-circuit this process when it is used to replace creative and critical thinking rather than to clear time and space for it.

We understand that a blanket prohibition is unrealistic, not only because it would be unenforceable but also because Generative AI is now being integrated into so many digital technologies that it cannot be easily isolated. Instead, we hope that the Foundation staff, leadership, and wider Foundation community—applicants, grantees, reviewers, editors, contributors, and other stakeholders—will approach Generative AI with caution, curiosity, and intentionality.

Therefore, this statement is intended as a set of guardrails to protect us as we work to tap the potential of Generative AI while reducing its harms. The following policy seeks to hold the wider Foundation community to the same standards to which we hold our leadership and staff.

Generative AI is a rapidly developing technology. As Generative AI evolves and our knowledge of it grows, this policy will be reviewed and updated.

Our Guiding Principles

Transparency:

  • The advancement of knowledge depends on the underlying methods and ethics of its production. In this spirit, the entire Foundation community should strive for openness about its use of Generative AI, so that we may critically evaluate its impact on the Foundation’s people, programs, and mission.

Pedagogy:

  • We must understand that Generative AI cannot replace original, innovative, and creative thinking, especially when it comes to discovering unexpected connections and making sense of how a complicated world works.

Safety:

  • We never want to use Generative AI in a way that puts scholars and the communities that make their research possible at risk. This means we must be vigilant in how we treat data and information, both within the Foundation’s own workings and across the larger field. Any consideration of Generative AI’s use must prioritize people’s safety.

Our Policy

Foundation Staff:

  • Generative AI is acceptable for routine tasks, analyses, presentations, and the like, as long as a supervisor has granted written permission and the project does not require the use of any proprietary or confidential information, proposals, or other unpublished or nonpublic material.

Staff are the stewards of the Foundation’s resources and information. They have a primary responsibility to protect confidential information and the intellectual property of the Foundation community. Where Generative AI may improve the Foundation’s work and advance its goals, staff must receive their supervisor’s permission and preferably use a paid subscription. Generative AI’s benefits should be weighed against its full costs, such as the threats it may pose to the environment and to labor rights. Should the Foundation develop its own bespoke Generative AI applications, walled off from public access, we will ask staff members to commit to the safe and ethical use of these tools. Adhering to this policy is a condition of employment.

Applicants and Grantees:

  • Applicants are strongly encouraged to disclose whether and how they used Generative AI to develop their proposals.

We urge applicants to be transparent in their use of Generative AI. In a confidential section of the proposal, which is not visible to reviewers, they will have an opportunity to describe the tool(s) they used, the prompts and resources they uploaded, and how they checked the results for accuracy and originality. We will use this information internally to educate ourselves about how applicants are using Generative AI and about the technology’s effect on the outcome of the review. Applicants are always responsible for the integrity of the content they submit: they must ensure that their proposals contain no plagiarized writing or fabricated information, the detection of which will result in a project’s automatic removal from consideration.

Reviewers:

  • Foundation reviewers are prohibited from using Generative AI in any way that threatens the confidentiality of the process.

When an applicant submits a proposal, they make themselves and the communities they work with vulnerable. Reviews in our major programs are double anonymous, but anonymity alone cannot protect an applicant’s hard-won insights and data from being expropriated and used. We expect our reviewers to maintain the highest level of confidentiality. Under no circumstances should they upload applications, feedback drafts, notes, or ratings to a Generative AI tool, even if its terms of use state that uploaded information will not be used elsewhere. Additionally, reviewers should not try to determine whether applicants used Generative AI; they should evaluate all proposals on their own merits. Reviewers who violate this policy will be subject to immediate dismissal.

Other Programs and Stakeholders:

  • Partners, contractors, or program staff who engage with broader communities should be transparent in their use of Generative AI and may develop their own policies on a case-by-case basis.

We recognize the range of ways the Foundation seeks to engage the world and appreciate that the stakes for Generative AI may vary across different platforms and contexts. We encourage all in the Foundation’s community to honor the principles outlined in this policy, and, if needed, to develop guidelines appropriate to their work and goals.