AI in Government: A Field-Level Review
AI use in the civic sector is at a very early stage. We evaluated local and state governments’ use of AI to provide a thorough review of current policies and regulations, alongside indicators of willingness to experiment with new tools and practices. This includes an analysis of emerging strategies being put in place at supporting institutions such as universities, philanthropies, professional organizations, and a new crop of coalitions—including ours—that have sprouted up to support AI use in the civic sector. This is a fast-moving field, and we did our best to make sense of the moment amidst a dizzying pace of change.
Our interviews with more than 40 practitioners and experts, our observation of pilot efforts, and a thorough literature review yielded a sample that is by no means exhaustive, but that provides a coherent picture of where the field is—and where we believe it can go from here.
What’s Happening in States
Legislatures Are Moving to Control AI
Compared with past tech waves, this one has seen a truly unprecedented amount of legislation, regulation, and executive orders. Most of this activity has come quickly and at the state level, with local jurisdictions playing catch-up.
In 2025 alone, over 735 AI bills have been proposed—bringing the total since 2019 to over 1,600 bills, according to our analysis of data generated by the Beeck Center.1
There are three notable aspects of these bills that jump out: the sheer volume, the defensive nature, and the variety of policy areas covered.
First is the sheer volume of legislation advanced in a short period of time. Based on our analysis of National Conference of State Legislatures data, no period since the introduction of the internet has seen as much state legislation proposed or passed around a single technology.2 In fact, when we ran the numbers by field leaders, many commented, “That can’t be right. Is there really that much legislation being debated and passed?” There is. State governments are not sitting on their hands legislatively.
Fully 77 percent of these bills can be classified as controlling legislation (see Figure 1). Rather than focusing on how to advance AI or enable its use, controlling legislation restricts or sets guardrails on how AI can be used. Public sector legislation constrains how government agencies adopt AI, while private sector rules set accountability standards for firms that build and deploy it. Of the controlling pieces of legislation, roughly half focus on the public sector and include inventories, procurement audits, or ongoing impact assessments. For example, Maryland’s Artificial Intelligence Governance Act of 2024 (SB818) mandates agency inventories, bans deployment of high-risk systems without safeguards, and creates an executive subcabinet to oversee compliance. The other half focus on controlling the private sector by placing regulations on companies themselves: New York’s pending 2025 AI Act (S1169) would require independent audits of high-risk systems that may affect the public’s health or legal rights, anti-discrimination safeguards, and enforcement by the state attorney general as well as through private lawsuits. One state administrator summed up this orientation by saying, “For the majority of bills introduced, legislators had some sense of risks to avoid and harms to punish, but very little direction to push the state towards.”3
For legislation that was not controlling in nature, we assessed the breakdown within specific policy areas. Figure 2 below shows the major areas that the National Conference of State Legislatures tracks. As can be seen, the categories are quite varied, with no one policy area capturing the full attention of state lawmakers.
Workforce Has Become a Priority
Government, especially state government, rarely puts workforce development or job training high on its priority list. Job training is generally thought of as a federally funded area, and internal training of government employees rarely rises to the policy agenda, even though public administration research has clearly demonstrated the need.
The rapid diffusion of AI has made that case better than researchers ever could. A number of the new legislative bills address workforce concerns. Take Texas, which earlier this year enacted two laws (HB3512 and HB2818) that paired internal AI deployment with government staff training, requiring workforce upskilling alongside broader adoption efforts.4
We also observed the workforce commitment in the uptake of InnovateUS courses focused on AI. This is a relatively new philanthropically funded initiative offering free courses in AI basics and administration. The response has been positive. Many state governments are recommending or requiring the courses for all staff.
InnovateUS is now working in close partnership with a dozen states: Maryland, Ohio, New Jersey, Colorado, Arizona, Oregon, Minnesota, Connecticut, California, Pennsylvania, Maine, and Georgia, and at the time of this report’s publication, was about to launch in New York, Tennessee, Illinois, and Massachusetts.5 As of September 2025, more than 110,000 state employees were enrolled in courses, workshops, and certificate programs. In fact, Colorado and a few other states have required completion of InnovateUS’s “Responsible AI for Public Professionals” training in order to access AI tools.6 As J.R. Sloan, Arizona’s chief information officer (CIO), noted, “As AI rapidly develops, it is essential we prepare our workforce with the skills they need to use this technology both safely and effectively. The State of Arizona prioritizes privacy, security, and responsible experimentation with AI technology in its government operations. [The InnovateUS] training aligns with these values, providing proper guidance and guardrails that enable the responsible use of AI.”7
Sandboxes and Walled Gardens
We identified a growing trend among states: developing and executing an organization-wide approach to AI adoption. This typically followed a well-defined sequence: establishing a secure environment for experimenting with AI, including evaluating trials; establishing a formal governance structure, a set of guidelines, or both; and moving to enterprise-wide adoption.
These state efforts create a safe environment where agencies test generative AI tools under limited risk conditions before moving toward broader adoption. The initiatives fall under different names, such as “sandboxing” or “walled gardens,” with a number of overlapping categories and approaches. Some states use regulatory sandboxes—granting temporary waivers or exemptions so developers can pilot new AI systems under oversight (such as in Utah, Texas, and Delaware).8 Others use internal walled gardens inside the government, where employees can experiment with commercial AI tools in a secure enterprise environment (California, New Jersey, Colorado, and Pennsylvania).9 And a third model relies on executive orders or state innovation labs to coordinate ad-hoc pilots (Washington, Maryland, and Georgia).10
At least seven states have created sandbox or pilot structures: Utah, Texas, Delaware, California, New Jersey, Colorado, and Pennsylvania.11 A handful of others (Washington, Maryland, Georgia, and New York) are experimenting through innovation labs or agency pilots—though these don’t fit the formal definition of sandbox programs.12 Several states have also attempted to enact sandbox programs, but the proposals either stalled or failed to pass (Oklahoma, Connecticut, Mississippi, Missouri).13
So far, the actual usage of AI in these programs has been pragmatic and modest. Most pilots have involved off-the-shelf tools deployed to improve government operations rather than cutting-edge AI applications. The most common uses are helping employees draft memos, generate translations, or distill dense rulebooks and human resources handbooks.
What’s Happening in Cities
Unlike states, cities have focused on stand-alone pilots rather than enterprise-wide programs. Cities tend to identify priority areas (e.g., wildfires in Los Angeles) and build AI tools to solve immediate problems, whereas states are establishing platforms for more general experimentation.
In order to understand what was happening on the city level, we conducted a field scan of AI-related projects. The most notable aspect of our scan was the variety of cases to choose from. When we began this work in the summer of 2023, we struggled to find many advanced AI use cases at the municipal level; that was no longer the case in the first half of 2025. Our selection criteria included projects that specifically mentioned generative AI, aimed for scale, and received some level of local or national news coverage. We identified 12 use cases and pilots and placed them into one of six application areas: permitting and automation, employee productivity, public safety, resident services, public engagement, and infrastructure analytics.
Table 1 offers a snapshot of the range of pilots taking shape locally. Taken as a whole, it reveals growing ambition in the past couple of years. Cities are engaging a wide set of partners on a disparate range of challenges. They represent the kinds of projects that are achievable at this moment, even with little in-house AI talent.
The Experience of AI Adoption
To complement the data analysis above, we conducted a qualitative analysis based on over 40 interviews with government actors (current and former officials), academics who write about government, and related practitioners in business and philanthropy fields. Here is what we discovered.
Locales Are Still Making Sense of AI
States are proposing legislation at a scale never before seen, and AI experimentation is emerging in virtually every corner of the country, but people in the civic sector remain very cautious. Most states we spoke with are still in the sandbox/walled garden phase of experimentation, and while the number is expanding quickly, only a handful of cities are developing major AI pilots. Moreover, most of this activity is internal to government. There is certainly enthusiasm, but generally, this is a field that is still making sense of the technology.
Lack of trust and vision for leveraging AI: Much of this is attributable to the fact that AI is not as intuitive as other tech waves. As Aimee Sprung of Microsoft noted, unlike during the smart cities era when many were dazzled by the possibilities, “Now we fundamentally have an environment in which there is a challenge of understanding the capabilities of the technology. Yes, the public needs to trust AI, but so does government.”14 As one local government professional association leader said, “Cities are tiptoeing forward. If anything, we have gone backwards in the past year—there was a lot of interest in 2023 and then it leveled off.”15
Ongoing state legislative activity is not helping to advance local-level AI. Many officials characterized the 1,600 bills as background noise that does not add up to a strategy or offer clarity on how to proceed. Some officials noted that they are eager for the kind of coherent guidance that President Joseph Biden’s October 2023 AI Executive Order began to provide. Since the termination of that order, the path forward has felt murky. One state leader noted, “There is a narrative vacuum filled by different voices—voices from the legislature wanting a strategy, agencies trying their own thing, and public comments [urging us to do a better job].”16
Need for technical capacity and infrastructure: A challenge that a number of interviewees brought up was the persistent lack of foundational tech capacity to equip them for successful AI experimentation, especially at the city level. A professional association leader said, “I hear all the time from our city members that we lack the capacity and the tech talent.”17 Another local official stated flatly, “We aren’t doing much with AI because we don’t have the infrastructure in place to execute.”18 We also heard from a few university professors wanting to provide pro bono assistance who couldn’t even get their calls returned. One professor said, “Trying to work with city hall [on AI] is hard because they are so woefully understaffed. Even project scoping takes time.”19
Part of the reason for the local government lag is that the value and return on investment for AI are not yet clear. A typical tech innovation life cycle begins with a large number of early adopters trying a technology out, followed by attrition, then much more practical uses at more affordable prices. At the municipal level, many are waiting for that AI cycle to play out. One professional organization official said, “Cities want to see a full proof of concept for these tools before experimenting. Right now [AI] is still vague and ambiguous.”20 Santi Garces, Boston’s CIO, noted, “Enterprise software can be [exceedingly expensive]. Costs are utterly misaligned with public sector budgeting realities. We are still [waiting] for the private sector to refine and ‘productize’ the technology.”21
Early stages of greater experimentation: Despite these challenges, there are indeed locales advancing homegrown AI efforts. Where this has happened, we have seen a dynamic of employing what one field leader called “minimum viable governance”—quickly cobbling together a permission structure for administrators to safely use AI. Indeed, that is what states are doing with their walled garden approach: establishing a secure environment to try things out and see what works.
At the city level, a few locales are taking this approach. As Kat Hartman, chief data officer of Detroit, put it: “We cannot regulate something we do not understand. Regarding AI, we must roll up our sleeves and learn to do it ourselves, internally. This is the only way to build up true knowledge and literacy at the level of local government—whether we build or we buy.”22
The cases of experimentation are notable, but it is apparent that most local AI activity is still around basic automation and productivity. As Stephen Goldsmith, Harvard professor and former mayor of Indianapolis, noted, “One of the greatest challenges with local government is a lack of imagination.”23 And Lane Dilg, former Santa Monica city manager who led policy partnerships for OpenAI, said, “Basic productivity gains are far easier to achieve than ambitious transformations, like rethinking an entire permitting process.”24
Because local-level AI is focused more on productivity and on making sense of the technology, we found that it is CIOs who are often leading at both the state and city levels. Although a few CIOs have been at the forefront of recent tech-driven waves, big ideas and vision have typically emanated from mayors (think Michael Bloomberg and Pete Buttigieg) or highly visible strategy chiefs such as chief innovation or chief data officers. This time, most leadership is coming from back-end, system-wide administrators.
Albert Gehami, one of the founders of an international peer group called the GovAI Coalition and the city privacy and AI officer in San Jose, explains that the City has published its own analysis and conducted its own resident outreach. “We are issuing reports with honesty about what worked and what didn’t. And we are talking to residents. Resident outreach is not a typical IT [information technology] function, but it is showing results. This is [all] new and scary, but technical staff need to get out there.”25
This new role is being recognized by many field leaders. Erin McKinney from Amazon Web Services noted, “CIOs have historically been charged with implementing new state policies, and are now writing the rules for how state and local government agencies adopt AI. CIOs are stepping into leadership roles that are increasingly innovative and strategic, not just operational.”26
Taken together, local governments are taking their time to get to know AI, begin the process of incorporating it into daily routines, and start to think about more ambitious, potentially transformative uses. One state administrator summed it up by saying, “Most of us are primarily—and necessarily—crossing the river by feeling the stones, but would appreciate pragmatic and positive visions to work towards.”27
Supporting Institutions
Local governments, particularly cities, have always been adept at forming partnerships. Working in concert with the business community and chambers of commerce has driven local growth and development efforts for decades. More recently, university and philanthropic collaboratives have become helpful partners. Governments at all levels are eager for new and stronger partnerships around AI. In fact, we decided to draft this section about such institutions because it came up in virtually every interview we conducted. Interviewees across the board noted that partners—particularly in higher education—could provide meaningful help with AI guidance and evaluation of use. Below is our assessment of various institutions that could potentially partner with local governments.
Federal Government
Few at the local level ever thought of the federal government as an AI problem solver. But there was fairly wide recognition that the Biden administration’s 2023 AI Executive Order provided helpful guidance and strongly worded guardrails for those at the sub-national level. While our interviewees did not discuss the Trump administration’s position on AI (or lack thereof), many expressed disappointment toward the absence of national-level tone setting and framing. The current administration may ultimately release a stance that preempts many state AI bills that have been passed and are being debated. But in our scan, that did not raise many flags, as local-level officials were more focused on setting their own plan and strategy without federal support.
Professional Government Organizations
There are over half a dozen major professional organizations serving state, county, and municipal governments in the United States. These entities provide a range of services, including peer networking, training, and policy advocacy. They tend to follow their members’ lead and shift programming accordingly. As local governments are still making sense of AI, professional organizations, not surprisingly, have yet to take strong positions or provide much technical or operational guidance. A 2024 International City/County Management Association (ICMA) member survey found nearly 50 percent of the respondents reported AI utilization as a low priority, and only about 5 percent placed a high priority on AI use.28
In fact, a couple of the professional organizations have ratcheted down their efforts over the past two years. At the time of publication, we found that this tide is turning, as evidenced by the recent spike in city use cases noted in the previous section. Tad McGalliard, managing director for research at ICMA, noted that in a recently updated internal survey, AI was the number one issue members want addressed. He said, “AI policy and practice diffusion hasn’t happened, but in just the past year, interest has skyrocketed. Everyone now knows they need to do something.”29
Philanthropy
Since the 1950s, private philanthropy has mostly steered clear of directly supporting public sector efforts, favoring targeted nonprofit funding instead. But when it does wade into new areas, it can have an enormous impact, as its funds are flexible and seen as creative capital in the civic sector. In that regard, philanthropy could significantly influence where the field goes next.
Our scan of philanthropy revealed that, as in other sectors, its approach to AI is still taking shape. Organizations are still wrapping their heads around the new technology and trying to assess how it might impact them internally and how they can support their grantees. Jean Westrick, executive director of the Technology Association of Grantmakers, noted that “AI adoption in philanthropy is marked by a paradox: individual use is high, but enterprise-wide strategies remain limited. Funders are exploring how AI benefits their operations, yet too few are investing in their nonprofit partners. Without intentional support, this imbalance risks widening the technology equity gap—leaving the very organizations closest to communities behind.”30
There are a few notable exceptions, as philanthropy around AI has grown both in scale and urgency since the 2022 release of ChatGPT. A few large foundations and mega-donors are moving billions toward AI safety, governance, and public interest applications. Similarly, new multiyear coalitions like the $1 billion NextLadder Ventures initiative for frontline workers, and major corporate commitments such as Microsoft’s $4 billion for AI education, are aiming to diffuse the use of AI across society.31 Bloomberg Philanthropies is supporting a few different city AI initiatives, and Robin Hood has created an AI Poverty Challenge.32 The result is an ecosystem where funders are seeking ambitious, scalable ideas that don’t just react to the risks of AI but shape the infrastructure, workforce, and governance frameworks that will define its societal impact.
These shifts open up space for initiatives that move from responding to present needs toward forecasting and preparing for future risks and opportunities. At the same time, trends toward trust-based philanthropy and capacity-building align well with the idea of positioning communities to lead. Many funders are shifting to longer-term, flexible support with reduced reporting burdens—an opening for community-led initiatives to be models for field-building, training, and public interest AI.
Higher Education
Colleges and universities have an uneven record of working with local governments and communities. In many respects they have a different mission; as Kate Burns from MetroLab noted, “Fundamentally, research does not have a client.”33
Our assessment of higher education revealed considerably more activity than with other supporting institutions. Virtually every top-rated research institution has some combination of a presidential task force, a newly created institute, AI faculty hires, and additional academic programs focused on AI use. A few institutions are home to new multimillion-dollar investments, including $500 million for the Kempner Institute for AI at Harvard; $1.2 billion from public and national lab resources for an AI research facility at the University of Michigan; $20 million from New York State and IBM for AI efforts in the State University of New York system; and the University of California, Berkeley’s brand-new College of Computing, Data Science, and Society.34
While there is tremendous investment in AI at many universities, there hasn’t been a particular uptick in civic AI commitments at the institutional level. But we did find a few notable, if isolated, examples of higher education playing an outsized role in advancing new AI efforts with local government and communities. These were high-impact yet low-fanfare initiatives that came together quickly over the past couple of years. Each one used AI as a tool to quickly surface value and change the traditional relationship between universities and surrounding communities:
- Two different experiential learning programs have rapidly become more focused on policy impact with AI. At Tulane University, the mandatory undergraduate public service requirement has traditionally focused on basic community outreach, such as tutoring and park beautification. But Aron Culotta and Nicholas Mattei, two enterprising computer science professors and directors of the Center for Community-Engaged AI, work with local nonprofits and undergraduate students to advance significant criminal justice reforms.35
- At Northeastern University, the co-op program providing students with work experience is widely recognized as one of the best administered in the country, but to date, it has focused almost exclusively on the private sector. Now, through the new AI for Impact program, there is a shift to public partnership that has quickly yielded concrete results. In six-month sprints, the program has built 18 production-ready generative AI solutions, co-designed with community partners for the governor of Massachusetts and other civic leaders in the state.36
- Georgia Tech houses the Partnership for Innovation, a public–private partnership aimed at boosting economic growth. One of its programs aligns regional universities with local challenges. Its founding executive director, Debra Lam, noted that AI has proved an excellent vehicle for this work. She said, “We don’t lead with tech, but with problems. We have a number of projects, including working with rural farmers, in which AI was able to meet the need and get to results in short order.”37
- At the University of Michigan, working on a National Science Foundation grant, a group of scholars who had never engaged with the public sector found that within days, they were able to develop new AI-powered solutions for the Detroit city government around urban planning and climate change.
Emerging Coalitions
As with higher education, coalition building has seen a similar dynamic of new endeavors and pairings coming together to quickly advance AI efforts. New enterprises like InnovateUS, Bloomberg’s CityAI Connect, the GovAI Coalition, and the Digital Services Network have shifted the model from expert-led networks to more peer-driven movements (see Table 2).
The GovAI Coalition, for example, brings together over 800 local, state, and national government agencies around a shared set of principles to create common ground in how they engage with vendors that are approaching local governments with new intensity.38 This network provides members with the leverage needed to negotiate in the best interest of constituents.
Now that a field around civic change-making has been established, it has shifted from big announcements to showcasing more practical steps—and these new associations are leading the way.
Emerging Themes
AI is indeed taking shape in the civic sector, but little has been settled. While there has been significant progress, AI implementation and integration are still very much in a Wild West phase.
Some themes that emerged from this field review include:
- There is a vision vacuum: Policy, training, and experimentation are now advancing quickly. But there is a palpable narrative void. There are no set paradigms, example strategies, or governance frameworks like the ones that quickly took root in the smart cities and big data eras.
- Implication: There is demand for a clear AI operating framework.
- Many governments are employing an organization-wide orientation: There is a greater openness to an all-of-government approach, not just one that is contained to IT offices. States in particular are getting strategic and enterprise-oriented.
- Implication: Major reform can take root throughout government and at the departmental level, not just through specific tech applications.
- Partnerships are in high demand: The true impact of this technology will not be contained to any one organization. Everyone is eager to learn and to reduce the risk of AI. There is a necessary orientation to collaborate with a range of civic actors.
- Implication: There is a greater chance of establishing a collaborative ecosystem of civic life beyond one dominated by or centered around government.
These dynamics point to openness to—and in some cases demand for—a new governance framework. A number of new resources and partners are waiting in the wings to assist.
The big question is: Where do we go from here—how do we use AI for (significant) good, and guard against (significant) harm?
Citations
- Beeck Center, “AI Legislation Database,” source.
- The few times in which anything close to as much tech legislation was advanced include 200 bills focused on data privacy in 2020, and the brief legislative spurt following President Joseph Biden’s 2021 Infrastructure Investment and Jobs Act, which ensured investments in local broadband were effectively utilized at the state level.
- Interview with state administrator, September 22, 2025.
- “From Red Tape to Results: What Virginia’s AI-Powered Reform Could Mean for Texas,” Texas Policy Research, July 28, 2025, source.
- InnovateUS, source.
- “Implementing Responsible AI in Government,” source.
- InnovateUS, source; “State of Arizona Implements Employee Gen AI Training,” Arizona Department of Administration, February 24, 2025, source.
- Stuart D. Levi et al., “Utah Becomes First State To Enact AI-Centric Consumer Protection Law,” Skadden, April 5, 2024, source; “Texas Signs Responsible AI Governance Act Into Law,” Latham & Watkins, June 23, 2025, source; “Delaware Launches Bold AI Sandbox Initiative,” Delaware Prosperity Partnership, July 23, 2025, source.
- Governor Gavin Newsom, “Governor Newsom Deploys First-in-the-Nation GenAI Technologies to Improve Efficiency in State Government,” State of California, source; Julia Edinger, “New Jersey Advances AI Through an Economic Development Lens,” Government Technology, April 21, 2025, source; Jordan Anderson, “The Colorado AI Act: What You Need to Know,” Baker Tilly, September 10, 2024, source; Justin Sweitzer, “Shapiro Says AI Has ‘Real Promise’ as He Unveils Pilot Program Results,” City & State Pennsylvania, March 21, 2025, source.
- Sophia Fox-Sowell, “New Washington Bill Would Let State Workers Influence How Agencies Use AI,” StateScoop, February 13, 2025, source; News Staff, “Maryland Submits AI Strategy, Guide to General Assembly,” Government Technology, January 14, 2025, source; Nikhil Deshpande, “State of Georgia: AI Roadmap and Governance Framework,” Georgia Office of Artificial Intelligence, source.
- One-U Responsible AI Initiative, “Responsible AI Community Consortium,” University of Utah, source; Matthew Ferraro and Anna Z. Saber, “Texas Just Created A New Model for State AI Regulation,” Tech Policy Press, July 17, 2025, source; Skip Descant, “Delaware AI Commission Approves Creating Agentic AI Sandbox,” Government Technology, July 28, 2025, source; Khari Johnson, “New Washington Bill Would Let State Workers Influence How Agencies Use AI,” Cal Matters, March 21, 2024, source; Chris Teale, “States Vie for Leadership Role on AI,” Route Fifty, January 17, 2024, source; Keely Quinlan, “Trust, Innovation Central to Colorado’s New AI Guidelines, Says State Data Chief,” StateScoop, September 9, 2024, source; Sweitzer, “Shapiro Says AI Has ‘Real Promise,’” source.
- Colin Wood, “Washington Governor Signs AI Order Plotting Yearlong Policy Path,” StateScoop, January 30, 2024, source; Department of Information Technology, “Office of AI Enablement,” State of Maryland, source; “State of Georgia: AI Roadmap and Governance Framework,” source; Office of the New York State Comptroller, New York State Artificial Intelligence Governance (New York Division of State Government Accountability, 2025), source.
- “OK HB1916 | 2025 | Regular Session,” LegiScan, source; Angela Eichhorst, “What Are the New AI Laws in Connecticut?,” CT Mirror, July 14, 2025, source; “MS HB1535 | 2025 | Regular Session,” LegiScan, source; “MO HB1606,” Bill Track 50, source.
- Interview with Aimee Sprung, July 28, 2025.
- Interview with local government professional association leader, August 7, 2025.
- Interview with state leader, September 19, 2025.
- Interview with local government professional association leader, January 27, 2025.
- Interview with local official, June 5, 2025.
- Interview with university professor, May 29, 2025.
- Interview with local government professional association leader, August 7, 2025.
- Interview with Santi Garces, February 5, 2025.
- Interview with Kat Hartman, August 8, 2025.
- Interview with Stephen Goldsmith, March 26, 2025.
- Interview with Lane Dilg, August 7, 2025.
- Interview with Albert Gehami, January 22, 2025.
- Interview with Erin McKinney, August 12, 2025.
- Interview with state administrator, September 22, 2025.
- International City/County Management Association (ICMA), Artificial Intelligence in Local Government (ICMA, 2024), source.
- Interview with Tad McGalliard, September 19, 2025.
- Interview with Jean Westrick, August 21, 2025.
- Thalia Beaty, “Funders Commit $1 Billion Toward Developing AI Tools for Frontline Workers,” Chronicle of Philanthropy, July 17, 2025, source; Natasha Singer, “Microsoft Pledges $4 Billion Toward A.I. Education,” New York Times, July 9, 2025, source.
- Bloomberg Center for Government Excellence, “Welcome to CityAI Connect,” Johns Hopkins University, source; Bloomberg Center for Public Innovation, “Bloomberg Philanthropies City Data Alliance,” Johns Hopkins University, source; “AI Poverty Challenge,” Robin Hood, source.
- Interview with Kate Burns, October 28, 2024.
- Jonathan Shaw, “Chan Zuckerberg Commits $500 Million to Harvard Neuroscience and AI Institute,” Harvard Magazine, December 7, 2021, source; Michigan Engineering, “U-Michigan Announces Most Advanced AI Research Complex with Historic Los Alamos Alliance,” Michigan Engineering News, February 3, 2025, source; Governor Kathy Hochul, “Governor Hochul Announces $20 Million Public-Private Investment to Advance Artificial Intelligence Goals,” New York State, October 16, 2023, source; Rachel Leven, “UC Berkeley College of Computing, Data Science, and Society Established,” University of California, Berkeley, May 18, 2023, source.
- “Center for Community-Engaged Artificial Intelligence,” Tulane University, source.
- Burnes Center for Social Change, “AI for Impact Program,” Northeastern University, source.
- Interview with Debra Lam, July 25, 2025.
- “Government AI Coalition,” City of San Jose, source.