Tech for the People, by the People: Using AI to Solve Environmental Problems with JR Washebek

Blog Post
Dec. 4, 2025

“Tech for the People, by the People” is a series featuring conversations with individuals who want tech to benefit the public—not solely the powerful. Instead of innovation for innovation’s sake, our guests prioritize socially responsible innovation that’s shaped by us for us. From environmental scientists to fiction writers, there isn’t one kind of profession or set of work that contributes to tech for the public interest; that’s what this series sets out to show.

Kicking off this series, we’re joined by JR Washebek, one of the Digital Service for the Planet (DSP) fellows. This cohort of fellows is made up of technologists, former public servants, and domain experts who want to improve how government uses tech and data to deliver on urgent environmental priorities. JR works on problems at the nexus of AI, environment, and society, focusing on projects designed to navigate the conflict between our immediate needs and long-term ecological health. We spoke with her about why she was drawn to this nexus, what she hopes to accomplish through the DSP fellowship, and what her hopes are for a future with AI.

The following conversation has been edited for brevity and clarity.

Emily Tavenner: Over the span of your career in conservation, what sparked your interest in intentionally bringing together government conservation work and technology?

JR Washebek: There wasn’t so much one moment as a number of experiences and interests that converged. I grew up on the outskirts of the Cradle of Forestry in America and was steeped in the legacy of conservation from a really young age (I named my first teddy bear Teddy Roosevelt). At the same time, I loved neuroscience and the idea of influencing behavior. At Furman University's neuroscience lab, I helped conduct studies on mice to determine how beta endorphins affect brain development over time. My job on that research team was to code the computer vision program to recognize the video-recorded mouse behaviors, essentially teaching a machine to see patterns that would take a human researcher years to observe manually.

That experience planted something in my mind that I was only beginning to understand: that technology can help us see macropatterns in data that otherwise would go unnoticed, and the right analytical tools can compress a research timeline from years to months.

I then left that field to pursue something entirely different: I worked for the Conservation Corps for six years. We were doing conservation projects, collecting data, and having a tangible, physical impact on the landscape. But that earlier insight kept surfacing. I’d be out in the field and find myself thinking about the systems of data we were interacting with—the patterns underneath the work that no one was capturing—and how that data might help make more targeted, impactful changes not only to the landscape but also to human behavior.

I transitioned to working at the Forest Service, where there’s this dichotomy of folks who are “boots on the ground” versus programmatic folks who handle data. Neither group fully resonated with me—I saw myself somewhere in the middle, or maybe as a bridge between the two.

I worked in a variety of roles at the Service, starting as a conservation education specialist but eventually moving to Washington, DC, where I worked in the Office of the Chief. A portion of my time there was spent coaching leadership to not only use new technologies, but also to build new technologies to support the Service’s workforce and to manage environmental issues in real time. So when generative AI publicly came on the scene, I was ready. I put together all these briefing decks about how artificial intelligence was going to change the landscape of environmental management—and the very fabric of society. That led to building an AI team to pilot AI and machine learning techniques across different high-priority processes that needed modernization, and creating an AI literacy bootcamp that educated 3,000 people across the agency.

Looking back, the path makes sense in a way it didn’t while I was on it. I’d spent years learning to see patterns—first through a microscope, then through data systems, then through organizational structures—and AI was just the latest, most powerful lens.

Tavenner: Why did you apply to be a DSP fellow?

Washebek: The data and processes that lead to extractive and environmentally damaging actions are the same data and processes that could lead to environmental restoration and mitigation—to the pro-social, pro-environmental outcomes that I feel aligned with. The DSP fellowship was targeting people who have experience in policy and government and who see the environmental tech issue as something that matters. That resonated with me, but so did who else was in the room. There are a lot of people who have left the government. There are also a lot of people who have left private companies—people who were developing some of the tools that will be used by the government for years to come. DSP seemed to be rallying them all to a cause that I felt morally aligned with: making sure those tools get pointed in the right direction.


Tavenner: What’s missing in how data is leveraged for conservation issues? Why do you think the U.S. needs a group like the DSP fellows working together right now in particular?

Washebek: Working together is really important. Creating interoperable data and seeing it all in one system doesn't, by itself, create social change. People are still persuaded most by other people. The creative solutions we need come from talking to each other about our experiences and the ways we see the world, and from making new connections across those differences. There are a lot of brilliant people out there whom I want to meet through DSP and work with to scale the change that we want to make. I'm excited to meet other people who have worked from different angles in government on some of these same issues.

Why do we need DSP now? There are two converging urgencies:

The first is generative AI itself. I feel strongly that it will change the way we interact with each other—genuine social connection will become harder to attain, and critical thinking about environmental issues will become siloed within individuals, if it happens at all. The systems intelligence needed to understand how environmental systems work is at risk too. LLMs can propagate information and change the nature of truth. That's an urgency factor that I think about constantly. It's more important than ever for us to be talking to each other about these issues, face to face, before that becomes harder to do.

The second is that climate change is accelerating landscape changes at a rate that our federal administrative processes were never designed to address. We need adaptive ecosystem management now—using advanced technologies like sensor-embedded networks to understand landscape conditions in real-time and to hold organizations accountable to the restoration and mitigation efforts that they have committed to doing in their environmental permits. We've never had that ability before. And yet the data visualization and stakeholder communication are still inadequate to support democratic decision-making. We need those dashboards, highly-accessible public-facing visualizations, so people can understand what's happening on public lands right now.
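To make the accountability idea above concrete, here is a minimal sketch of what checking real-time sensor readings against permit commitments could look like. Everything here is hypothetical: the metric names, thresholds, and data structures are invented for illustration and are not drawn from any real permitting system or agency tool.

```python
# Illustrative sketch only: hypothetical metrics and limits, not a real agency system.
# Flags any real-time sensor reading that exceeds a limit committed to in an
# environmental permit, producing messages suitable for a public dashboard.

from dataclasses import dataclass

@dataclass
class PermitCommitment:
    metric: str    # e.g., stream turbidity downstream of a mine site
    limit: float   # maximum value allowed under the permit
    unit: str

def flag_violations(readings: dict[str, float],
                    commitments: list[PermitCommitment]) -> list[str]:
    """Return human-readable flags for readings that exceed their permit limits."""
    flags = []
    for c in commitments:
        value = readings.get(c.metric)
        if value is not None and value > c.limit:
            flags.append(f"{c.metric}: {value} {c.unit} exceeds permitted {c.limit} {c.unit}")
    return flags

commitments = [
    PermitCommitment("turbidity", 25.0, "NTU"),
    PermitCommitment("water_temp", 20.0, "C"),
]
readings = {"turbidity": 31.2, "water_temp": 18.4}
print(flag_violations(readings, commitments))
```

In a real deployment, the readings would stream in from sensor networks and the flags would feed the kind of public-facing visualization described above; this sketch only shows the core comparison step.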

Here’s the frustrating irony: there’s also insufficient use of AI for conservation purposes. One of my most frustrating moments as an AI program manager in the Forest Service was collecting brilliant ideas from our workforce about how generative AI could be used—from people who had no clear idea of what it was, because we were strictly prohibited from using it. We need to be identifying high-return applications for the generative AI that exists today. I believe it's a national imperative. AI data centers have real impacts on communities and environments, and we owe it to those communities to figure out how to deploy this technology for conservation and preservation purposes—not just extraction.


Tavenner: What do you want to accomplish through this fellowship?

Washebek: I'm deeply interested in environmental ethics—and I think it’s more technically rigorous than people assume. Generative AI is built on neural networks and machine learning: systems that learn patterns from data by adjusting quantitative weights across millions of parameters. Those weights aren’t abstractions—they directly shape how a model reasons, what it prioritizes, and what it gets wrong. Alignment is a technical problem, not a philosophical one. So when I talk about developing frameworks for environmental ethics in AI, I mean explicitly building that ethics into the architecture itself. There are some researchers doing this already, but there's nothing that is authoritative or governing yet in this space. It's alarming, especially if we’re going to deploy these technologies to make or influence environmental decisions.
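The point that a model's priorities live in its quantitative weights can be illustrated with a toy linear scorer. This is a deliberate simplification of the neural networks described above (a single weighted sum rather than millions of parameters), and every feature name and weight value here is hypothetical, invented purely for illustration.

```python
# Toy illustration: a model's "values" are encoded entirely in its numeric weights.
# All feature names and weight values are hypothetical.

def score(project: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of project features; the weights determine what is prioritized."""
    return sum(weights[k] * project.get(k, 0.0) for k in weights)

# One candidate land-use project, described by three made-up features:
project = {"short_term_yield": 0.9, "habitat_recovery": 0.3, "carbon_storage": 0.4}

# Two weight settings encoding opposite priorities:
extractive_weights  = {"short_term_yield": 1.0, "habitat_recovery": 0.1, "carbon_storage": 0.1}
restorative_weights = {"short_term_yield": 0.2, "habitat_recovery": 1.0, "carbon_storage": 1.0}

# Identical input, different weights, different judgment:
print(score(project, extractive_weights))   # favors the extraction-heavy project
print(score(project, restorative_weights))  # scores it lower under restorative values
```

The same input project scores differently under the two weight settings, which is the sense in which "building ethics into the architecture" is a quantitative design choice rather than only a philosophical one.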

Having built the Forest Service’s first AI program, I see how current AI systems lack coherent approaches to environmental trade-offs and intergenerational thinking. I want to use the fellowship to develop both the technical architecture and the governance frameworks to ensure that AI systems can appropriately represent non-human interests and long-term environmental considerations. This is the foundational infrastructure for the next decade of AI-enabled environmental decision-making.

I also want to create some practical demonstrations of what outcomes-based environmental monitoring can look like. Traditional environmental assessment methods are too slow and resource-intensive for the pace of change we're facing. I want to pilot AI-enabled systems that provide real-time tracking of environmental outcomes, moving beyond the compliance-based monitoring we have today, which often lags behind and is misaligned with actual results. That will revolutionize how we approach everything.

I want to build a framework for matching environmental damage with the most effective restoration approaches so that when we extract natural resources, we’re systematically deploying optimal restoration strategies based on all the data available to us.

But underneath all of this is something more fundamental: I want to establish myself as a bridge between the technical AI community and environmental practitioners. Environmental practitioners have the wisdom and the systems intelligence that only comes from living and working on a landscape for years. The technical AI community needs to see and value that knowledge, to understand that these tools have pro-social and pro-environmental applications and that they have a hand in guiding those applications. That bridge doesn’t exist yet. I want to help build it.

Tavenner: The work of this group is meant to improve how we build and manage environmental data and technology across sectors. If you were to envision a future in which environmental data and technology were being used seamlessly so that conservation efforts are successful, what would that look like?

Washebek: Here’s what the future looks like in practice: an ecosystem health dashboard that any environmental manager can use. Community-based monitoring that puts real tools in the hands of citizen scientists—I love citizen science! If you live in a place where critical minerals are being mined, you’d have the ability to verify that the work is being done with the best available science, hold the organizations accountable for their commitments, and confirm that an equal—or even more powerful—environmental restoration effort is tied to that project. You’d be able to monitor conditions yourself and push for accountability beyond traditional levers like litigation.

The technology to do this exists now. What’s missing is the infrastructure, the frameworks, and the people who can build them.

That’s what draws me to this work. The AI revolution is going to redistribute power—that alarms me, and it also makes me hopeful. Because if we move early, if we develop our own AI literacy and build these tools for our communities rather than waiting for them to be built for us, we get to shape what that redistribution looks like.

Related Topics
Artificial Intelligence