Episode 8: Tech Governance in Trump’s First 100 Days

Lilian Coral, Cody Venzke, Neil Chilson, and Shane Tews discuss tech policy in Trump’s second term so far.
April 30, 2025

What actions did the Trump administration take regarding tech policy in its first 100 days? Lilian Coral (New America), Cody Venzke (ACLU), Neil Chilson (Abundance Institute), and Shane Tews (American Enterprise Institute) discuss on this episode of Democracy Deciphered.

Listen to this episode on Apple Podcasts and Spotify.

Transcript

Shannon Lynch: Welcome to Democracy Deciphered, the podcast where we analyze the history, present, and future of American democracy. For this episode, you’ll hear my New America colleague Lilian Coral as she moderates a panel discussion about tech policy during the first 100 days of the Trump administration's second term.

Lilian Coral is the Vice President of Technology and Democracy Programs and Head of the Open Technology Institute at New America. She leads initiatives at the intersection of technology, policy, and democracy, shaping policies on AI governance, digital equity, and public interest technology. Previously, she was the Director of National Strategy + Technology Innovation at the Knight Foundation and the Chief Data Officer for Los Angeles Mayor Eric Garcetti. With nearly 20 years of experience across government, philanthropy, and civic tech, Lilian is passionate about using technology for social good. She holds degrees from UC Irvine and UCLA.

Joining Lilian today is Cody Venzke, a Senior Policy Counsel in the ACLU's National Political Advocacy Department, working on issues in surveillance, privacy, and technology. He is an author of a treatise published by LexisNexis on education data and student privacy.

Prior to joining the ACLU, Cody worked as a Senior Counsel at the Center for Democracy & Technology. He also served as an Attorney Advisor and Honors Attorney at the Federal Communications Commission and clerked for federal judges on the Third Circuit and the Eastern District of Pennsylvania. Cody previously worked as a litigator with an international law firm, where he served clients in emerging technology. In his pro bono work, Cody has represented tenants in eviction actions and assisted applicants under the U visa program. Cody graduated from Stanford Law School and St. Olaf College.

Also joining today’s panel is Neil Chilson. Neil is a lawyer, computer scientist, author, and Head of AI Policy at the Abundance Institute.

Previously, Chilson was the Federal Trade Commission’s chief technologist. In this capacity, he focused on understanding the economics of privacy, convening a workshop on informational injury, and establishing the FTC’s Blockchain Working Group. He is also a regular contributor to multiple news outlets, including the Washington Post, USA Today, and Newsweek.

Chilson holds a law degree from the George Washington University Law School and a master’s degree in computer science from the University of Illinois, Urbana-Champaign. He received his bachelor’s degree in computer science from Harding University.

Lastly, Lilian will be joined by Shane Tews. Shane is a nonresident senior fellow at the American Enterprise Institute, where she focuses on digital economy issues. She is also president of Logan Circle Strategies, a strategic advisory firm.

Previously, Shane served as co-chair of the Internet Governance Forum USA. She was a member of the board of the Information Technology and Innovation Foundation, the Information Technology Industry Council, and Global Women’s Innovation Network. She began her career in the George H. W. Bush White House as a deputy associate director in the Office of Cabinet Affairs and later moved to Capitol Hill as a legislative director for Representative Gary Franks of Connecticut.

Shane studied communications at Arizona State University and American University.

Without further ado, please enjoy this enlightening panel discussion!

Lilian Coral: Well, welcome, Cody, Shane, and Neil. Today, we plan to explore how technology, democracy, and public policy intersect as we examine the first 100 days of the second Trump administration. Very pleased to be having this discussion with the three of you amazing thinkers in this space. And I'm hoping that together we'll unpack some early surprises, analyze potential congressional moves, and also talk a little bit about the role of civil society and ask just what does a democratic tech future look like or should look like at this time.

So let's get into the first question. I mean, let's start with what was to be expected. I know that within New America's Open Technology Institute, we anticipated certain rollbacks, like the Biden-era AI executive orders and guidance being reversed. And we also knew that the administration had already signaled an intent to reshape Section 230, and some early steps to target certain media were not a surprise.

But even so, there have been some genuinely surprising developments. For example, while we were cautiously encouraged by the new executive order on AI continuing to emphasize trustworthy AI and civil liberties, it's not totally clear how this administration is going to actually execute on those principles. And there have also been some shocking moves in terms of the scope and brazenness of DOGE.

Why don't we start with you, Cody? What did you expect? What's taken you by surprise?

Cody Venzke: Yeah, I think coming into the Trump administration, many of us expected it to be a deregulatory administration, both given the president's comments after the election about dismantling many of the woke DEI regulations that the Biden administration supposedly put into place as well as the fact that at the inauguration, immediately behind the president were the most powerful tech executives in the country.

And we saw that to a large degree. We've seen the Trump administration in its early days, for example, issue executive orders about building American AI in order to compete with China, supposedly. But there have been a few surprises along the way. And I think one, which you alluded to, Lilian, is the Trump administration's maintenance of certain civil rights and civil liberties protections in federal use of artificial intelligence.

In the Biden administration, the Office of Management and Budget issued a piece of guidance that largely required federal agencies to mitigate the risks posed by AI systems before deploying them. And in his first executive order, President Trump ordered OMB to re-examine that memo. And many of us thought that was going to be the end of those requirements. But it turns out OMB, in sort of a surprise, maintained many of those protections.

I was glad to see that because I do think it's consistent with the administration's vision that in order for America to be a leader in the artificial intelligence space, the American people need to trust artificial intelligence. And vetting federal uses of AI to make sure that they're safe, they're effective, that they don't result in discriminatory harms is a key part of that.

Lilian Coral: Yeah, definitely. Neil, how about you? Did you find that particularly surprising? And then, are there other areas of technology governance that have sort of taken you by surprise?

Neil Chilson: Yeah. So I think—I agree a lot with what Cody said, that the change to a deregulatory administration relative to the Biden administration was expected. I actually did a blog post on my Substack that laid out like 10 things that I thought were going to happen in AI policy. And I think basically all of those are true. You know, we saw the EO get repealed. We saw the redirection from bias and safety to a sort of national strategy, even though, as Cody pointed out, some of the methodologies are there; the reframing has really been around what the goals of those methodologies are. So, having trustworthy tech is not so much in service of civil rights but is in service of having an effective administration and effective government. So there's been a sort of change in goal, even if the methodology looks quite similar in some places. I think, you know, we expected that energy would become a much more prominent part of the discussion. And I think we're seeing that in some ways; energy is now tech policy, which is interesting. And I think we can talk about what that means across the group.

But there have been some surprises, some from my point of view welcome, as well as some, you know, maybe less welcome ones. I thought it was very surprising, you know, Vice President Vance's speech in Paris at the AI Summit was a real barn burner and was very, very critical of the European approach to tech policy—very much saying that Europe was stagnating because of its heavy regulatory approach. But then also saying—and this is not an administration that has a great love necessarily for big tech companies—that Europe was unfairly focusing its enforcement on big American tech companies. I think that was a big surprise, to me anyway. On the other hand, there's been some surprising continuity in, say, antitrust policy; maybe you could see the tea leaves of that, but it wasn't quite clear how it was gonna shake out. And thus far we've seen a little more continuity with the Biden FTC, for example, than I think many people expected. So those are some of the surprises that I've seen. And then on tariffs, obviously, big surprise about how that's all played out. The Biden diffusion rule: the fact that that still exists and might go into effect, I think that's a big surprise as well. Although that one hasn't played out all the way, so we'll see. It's been an exciting couple of months, exciting and surprising.

Lilian Coral: Yeah. Shane, don't know if you want to add anything on this one.

Shane Tews: Yeah, so I think the week of the inaugural, having DeepSeek come out really was a game changer, because we thought we knew the landscape of what we were dealing with, and then all of a sudden China came in and said, “maybe it's not as expensive as we think to get these together.” We've seen how far and how quickly Chinese companies can use their own technology and promote from within, and I don't think we understood the extent to which they had those capabilities.

So that's really a dynamic changer for those of us who thought we had full command of this change in technology, and we need to take that into account. So that also means that while we have this administration dismantling a lot of our international relations that have been in place for a century, there's an alternative that we really hadn't been thinking about going into the beginning of the Trump administration. We have a lot of people that just don't want to deal with America. So, you know, we're not making it easy with the tariffs, and there's now a full-blown alternative: to go the Chinese route. And that's a problem for national security on our end. It's a problem for national security for the people that will want to continue to be our allies. And that means that we are not as fully in charge of the reins of what we think is going to happen with artificial intelligence. So we need to be thinking about that when we do worry about things like civil liberties and where this is all going to head, because China will definitely have their thumb on the lever with their use of their technology.

Cody Venzke: Yeah, I do appreciate Neil's observation that there is a great deal of continuity, but we've also seen sort of that continuity go through an upside-down mirror, so to speak, where, for example, the antitrust enforcement at the FTC also now includes content cartels, and it's turned into a content moderation police force. And so it's sort of an interesting way to see some of this continuity that's been surprising also merge with longstanding grievances in the Trump administration.

Lilian Coral: Yeah. So let's—I definitely want to dig into both this point that Shane just made about, you know, international actors, as well as sort of the changes in the company dynamics. But before we get there, can we talk a little bit about the legislative branch, right? I mean, as the administration is taking, you know, hold of its agenda—Shane, there's been much talk about how silent, you know, Congress has been in these first hundred days to some degree, and in particular the Republican leadership, in response to a lot of these different moves. How would you describe what we're seeing, and what can we expect or what would you anticipate to see from Congress moving forward?

Shane Tews: I think one of the reasons why we haven't seen a major focus in Congress on artificial intelligence is because the budget numbers and the reconciliation situation—we have a huge tax bill coming up—they've been a major focus. So they've been really financially focused on what this administration wants to do. DOGE obviously comes into play on that. And so that means that that takes most of the oxygen in the room for when these people are in town. They have, though, had some very good hearings. And I have to say major kudos.

I think we're starting to see artificial intelligence creep into the hearings in a net positive way, in that I find them a lot easier to watch these days. As somebody who, you know, watches a lot of full-on hearings, I feel like the members are showing up much more informed. The dialogue is actually much more interesting, because a lot of times we feel like we're probably as informed as the people that are in the hearings. And I was fortunate enough to sit through a demonstration where one particular AI company was showing Senate staff how they could be using this tool in the committees, and I think they're definitely using it, and I think we're all benefiting.

Lilian Coral: I mean, definitely the capacity within Congress to be able to actually understand and talk about a lot of these issues has dramatically shifted in the last eight years or so. So we're definitely seeing that. Cody, how would you characterize some of the congressional action so far? Or perhaps not action but some of the dialogue that's been happening within Congress?

Cody Venzke: I do think that it's absolutely true that we're seeing a lot of increasing sophistication within the legislative branch about technology. And that's both among the elected representatives and senators themselves as well as the staff that they choose to hire. And that's good for policy. There have been hearings, and they've been fairly constructive and largely picking up where the last Congress left off, focusing, I think, on kids' online safety and kids' privacy as well as thinking about artificial intelligence.

And artificial intelligence right now is perhaps in the early days of lawmaking. We saw the House of Representatives, for example, issue a pretty stellar report last year that sort of outlined the broad issues. It didn't have any really concrete barn-burner recommendations in it, but it provided a pretty thorough overview of the landscape. And that's a good place to start. But I think also overhanging all this, and I think we'll probably dive into this deeper in the conversation later, is the fact that many of the Trump administration's most controversial policies, especially those implemented through the Department of Government Efficiency, are very tech-driven and data-driven. And that means a lot of discussions that would be happening in Congress about technology, about artificial intelligence, and about privacy are sort of against that background of DOGE and the controversy around that.

Lilian Coral: So, I mean, yeah, so given DOGE's efforts to cut back and yet at the same time, the president's insistence, if you will, that we're going to invest around $500 billion at least, right, in AI, like how are we—can we really expect Congress to meaningfully do something on AI in this session, like an investment in that—to that amount?

Neil Chilson: Well, part of the $500 billion, the announcement of Project Stargate, was privately driven, right? A lot of that money was privately driven. I think there is an interest in Congress for additional funding for shared resources, for research resources in government. I think that's an uphill fight in this sort of spending environment. But if there is an area where there is pretty bipartisan concern about keeping American leadership, I think it is AI. We've seen a lot of the hearings. To me, the biggest shift from the hearings that happened right after ChatGPT was launched is that those hearings were basically in the frame of social media companies. They basically talked about AI as if it were social media. That is no longer the case. There is a lot more sophistication.

There's a little flavor of the content discussion sometimes, but I testified in three hearings in the last 20 days, and the conversations were almost all about, like, how do we keep American leadership? How do we keep a competitive environment? A lot less about sort of the social media overhang that I think colored the early ones. And so I do think there is a chance for Congress to do something here. And I think there is some interest, especially with the rising—and we'll talk about this more—I think Congress is starting to realize there is, you know, a flow, an avalanche of, you know, state legislation in this space, and that if Congress wants to get ahead of that, they're going to have to act soon. And so I think there is some concern that if Congress doesn't do something, we're going to have a big patchwork that looks a lot like the privacy space. And there is an argument, I think, that's starting to resonate that that could be a major barrier to keeping the US ahead of, you know, China or other international competitors.

Cody Venzke: Yeah, Neil touches on a great point: we will see if Congress gets to this level of sophistication where they can really deeply parse out the different implementations of artificial intelligence. Artificial intelligence is a very broad umbrella term for a very diverse array of technologies. And we'll see if Congress can prioritize and dissect among AI in the defense space, AI and concerns around deepfakes, versus AI that's used in decision-making, whether it be in the federal government or in sectors like education and credit.

So that is the sort of fine-tuned parsing that I don't know if we ever saw, for example, in discussions around social media.

Lilian Coral: Yeah, well, so that raises a good point about the lens through which we're making policy, because it would presume that there are these different vantage points: one, that AI is not a monolith unto itself; and two, that we would look at it from very different angles. And it does feel like what's actually taken root is much more of just a national security angle to every sort of AI-related decision. That would be my perception, and at New America, we've done some work and will continue to do a lot of work thinking about how this sort of techno-nationalist lens really impacts tech governance. But kind of zooming up from that to the process itself: I mean, do you all think the policy making process is fundamentally shifting?

The other thing I would just sort of note is that in the last few months, while there's always been influence in any administration and in any congressional session, there's always strong influence from the outside and from industry, it does feel like, and maybe this is the outsized role that Elon Musk is playing, it does feel like a new kind of governance model is taking root, where, you know, one individual's money is not just buying access but is probably buying a lot of structural influence over how decisions are being made, perhaps even over how some technology-related contracts are being made. Do you feel like the field has kind of shifted and morphed? And if so, what's changed, what's at stake? Maybe we'll go around, you know, give everybody a chance to answer that.

Shane Tews: So one thing I think you've touched on a little bit is energy. And there was a really good hearing a couple weeks ago on energy, but it's also a reminder that we can't do this just at a national level. We have to engage the governors. We have to really talk about the priorities. And one thing that has really come forward about this is that our energy policy is from the 1970s. You know, the whole idea that we're taking nuclear facilities out of mothballs and thinking about different ways to manage the importance of the data centers that need to be in place so we can have all this artificial intelligence. They're gonna need energy, they're gonna need water. And so we need collaboration with groups that are beyond just the federal government and the state governments. We need to be thinking collaboratively about this. I think we're starting to see better discussions, but we can't just do this as an edict. It has to be done in a collaborative process, and we're not really in a collaborative mood at the moment.

So we need to continue to work on that so this is something that really works for everybody.

Neil Chilson: I do think that one thing that's different on the advocacy side is that, maybe more so than in most administrations, there's one guy who calls the shots, right? It used to be that if you could reach the highest person in a department or something like that, if you could communicate with them, you kind of knew your concerns were going to be heard, your policy issues might be addressed. I think that's just less certain now than maybe in the past. So, it's still early days, but that does seem to be—I've heard anecdotally that that's basically how it goes. Like, your best audience is going to be the president. Further down, it's gonna be harder to get really certain outcomes in your advocacy.

Cody Venzke: Yeah, and I think Neil's spot on. There's been that sort of restructuring of the executive branch, and especially the authority of independent, expert-driven agencies has been undermined, not only politically, given the sort of top-heavy nature of this administration, but legally as well. We've seen courts, especially the Supreme Court over the past four years or so, really narrow agencies' abilities to make significant decisions on matters of political importance and to interpret their own statutes.

And that's going to curb their ability to take old existing laws, whether it be energy policy from the 1970s or telecommunications policy dating from the 1930s and apply them to technologies. So I think that's one place where we've seen substantial change. And the interesting result of all that is because Congress is slow to move, because the agencies can't move, we're seeing states increasingly pick up in this space, addressing topics as varied as deepfakes in elections and non-consensual intimate imagery to Colorado passing the first comprehensive artificial intelligence law. So I think states are sort of picking up the slack there in the way they did with privacy, although it remains to be seen if that sort of motivates Congress the way that Neil and Shane described.

Lilian Coral: Yeah, I mean, I think we've seen the shift to states in the last couple of sessions as well, but it does seem like at some point we will need federal legislation on many of these issues. I mean, how sustainable is it for us to continue to have, in some ways, a patchwork of legislation from varying states? So to me—but I'd love to hear your thoughts on this—it just feels like this moment does call for what Shane described, which is greater collaboration.

Regardless of political party, we were in a moment that required a different kind of thinking for how we were going to support and sustain and lead on a lot of these new emerging technologies. I don't even know if we're going to get to quantum as a conversation point in this discussion. It does feel like we haven't had the ability to really collaborate with all of the various actors, whether at the state level or in the private sector, to be able to think through what that vision or those kinds of investments would require. And now, when you have a system set up that's so dependent on one executive, it just feels like it's gonna be functionally really hard to develop a sense of what that vision is, and then to think about what investments the government is going to have to make in order to make those happen, let alone the fact that now we have an academic environment where a lot of our public R&D is being targeted. So I don't know—I feel a little pessimistic, maybe, about, you know, just how the shift to the states can keep the ball moving. I don't know. I don't know if that's too pessimistic.

Cody Venzke: Yeah, the notion of a national vision around tech policy, I think, is an interesting one right now. For example, tariffs was a key policy that the president campaigned on. It was a key policy coming in. And we're hearing two different narratives of what the vision is for that key policy around tariffs. And tech policy, frankly, has not risen to the top of the administration, so I don't know if we're going to get a unified vision from the president on tech policy.

Neil Chilson: Yeah, there really are fault lines in the administration around tech policy. And I think it's somewhat appropriate in some ways, right? Like, there really isn't a tech sector anymore, not in the same way that we used to think of basically big internet companies as the tech sector, as the internet, and computers, and AI especially, have become an essential part of every commercial enterprise and government enterprise as well.

I think every company is a tech company, so I think the divisions are really going to play out less along the lines of “tech companies” and how they're connecting with the administration, and more along individual sectors of the economy and how they are advancing their interests in the administration. And that's not a unified vision. Tariffs is one example of that, but the shift on antitrust policy to one that's maybe more driven by “big is bad” and “small is good” ideas splits a lot of different political parties. So I do think that there are some interesting fault lines there, and it's not an obvious call for a strategy. I would say the AI nationalism, or the idea that AI in the US needs to be, you know, the world leader, is still a pretty bipartisan feeling. To Shane's point, I think that was driven in large part by China's DeepSeek release in the near term, but I think that's going to be a long-term thing that the Trump administration is going to lean into. And I think they're going to find a lot of allies across the aisle on the idea of keeping the US prominent in AI, and maybe that is a forcing function that drives some of that cooperation that you were talking about. One thing that's interesting about the sort of role of the president in all of this, and the states, is that we are shifting towards actors that are much more politically accountable in a way, right? Like, they have to get elected. And so that does mean that their rhetoric and their approaches are gonna be ones that appeal to popular messages. They're gonna aim at popular messages to their base or to their electorate anyway. That's gonna happen at the state level. That's gonna happen at the federal level, too. And that does mean that advocacy has to shift to accommodate that change.

Lilian Coral: Yeah, the whole conversation really needs to shift if we have to start to move towards kind of more popularly based decision making or policy choices in AI. I want to stay on this point about the fault lines. You mentioned the fault lines within the administration. Obviously tech is not a monolith either. And there are fault lines that we've seen in the past that are definitely really visible, especially when we talk about things like tariffs, et cetera.

I don't know if either you, Neil, or Shane want to talk a little bit, or just sort of start off, on what are the different fault lines within tech that we're seeing emerge, at least within the reaction to these first-hundred-days policies? And what are some of the shifting alliances or political strategies we can expect to see from industry?

Shane Tews: One thing I was very concerned about towards the tail end, well, actually most, of the Biden administration was how quickly they wanted to follow in the wake of the Europeans on their very strong European regulation of tech, with rules that really didn't work for technology. I mean, they would say things and you'd try to explain why that would break things, and they didn't care. I had this conversation last week with some lovely European people, and I said, look, interoperability just puts you down to the lowest common denominator on security, and, you know, look what we just went through with a bunch of very senior officials using Signal. Think of that, you know, if it was Signal plus, you know, a bunch of other things that are on there, and you're on this very weak—you know, it would have taken much less than the senior person at the Atlantic to get that information out. And so we're starting to see less of that in most of the areas; as Neil just pointed out, tech's not a monolith, there's multiple areas of it. Antitrust: that was an area where we're still seeing an alignment with the Europeans on what they call Big Tech. I'm just not a believer; I think Big Tech's a weird way of discussing something, because so much of these things are the baseline of our economy. That's what people use. Now we're seeing a migration to the next iteration with chatbots. And we're going to see more of that as there's more agentic AI. So not only have I thought that Europe was behind the power curve; now we see our antitrust competition is way behind the power curve. I mean, the whole idea of treating breakup ideas that came up 10 years ago as even relevant in today's market is a huge waste of our time, but they're continuing to do it. And there's other things you brought up, quantum, that we need to be thinking about from a national security perspective, because once quantum comes, all bets are off. I mean, we need to be thinking about that. That's where we need to put our heads going forward: what are we going to do when this next huge change in the way technology works means that all the encryption that we have is probably going to get broken?

And it's enough that we have the UK breaking encryption right now, or attempting to break encryption, currently on Apple iCloud. But once they get into Apple's cloud, they're going to get into everybody's cloud. And once the UK has shown that there's a backdoor, we're going to be back into a Volt Typhoon, Salt Typhoon situation, with all of our technology available to anybody who can figure out how to get that backdoor open. So there's a lot of things that we need to be thinking about on multiple layers. And there has never been a better time to have permissionless innovation that can think clearly and very directly about what's going on, rather than trying to create a national strategy on this overall concept of tech.

Lilian Coral: Cody, I'm curious to see what your thoughts are on any of this as well.

Cody Venzke: I think the observation that there is not a single monolith called just tech policy is exactly right. We're seeing technology of many iterations permeate throughout industries, and consequently, I think a more fine-tuned approach is appropriate to addressing the way that, again, say, artificial intelligence might be used in insurance, in credit, in education, and in various different use cases. There's perhaps ways to set rules of the road that can be done through non-regulatory guidance, perhaps through some regulations. I think OMB's guidance to federal agencies will also be sort of a good marker in the road for private uses of AI and what's expected and what's a best practice on that front. So there are a number of options available to set some common standards across industries and then dig into the way they might be used elsewhere. And I think that's important, because what we're seeing is a move away from the concept of 25 years ago, when Big Tech was one particular industry, namely big internet providers, to 15 years ago, when it was social media, to a place where many of those big technology companies are obviously still big, but they are now facing competition from different sectors, like in the development of AI and elsewhere.

Lilian Coral: I'm curious, though, from all of you: do we think the innovation ecosystem in the US is still strong? I mean, I'd be curious to get your take on how easy it still is for companies to start up and develop a lot of really new ideas. Obviously, from a capital perspective, we have tons of capital being thrown in all kinds of ways, but is the environment the same as, like, 20-plus years ago, when these now big tech companies started? One: your answer to that. But then two: is there any danger to not continuing to support an environment where we can have these new ideas emerging? It really does feel like a lot of the capital and the resources and the talent are being sucked up by the larger companies. And then, is there an opportunity within this administration at all to shift any of its policy so that we can continue to have that sense of American innovation that we've always had? It could be an illusion, but…

Neil Chilson: In the land of bits, like software development, yes, it's still easy to build in the US. It's amazingly easy. In fact, the AI ecosystem, I think, just proves that. OpenAI, only the nerdiest people, I should say, had really heard of them six, seven years ago. Now everybody knows of them. They had the fastest-growing app ever.

And so that's a company that was well-capitalized, because our VC market is much more professional, I think, professionalized than it was in the internet era. There's a lot of money. There's people who understand that taking bets can pay off in this space. That's a software environment. If you want to build things in the real world, we have real challenges in the US. I think if you want to build new energy supplies, whether they're clean or traditional energy, I think that's a real challenge in the US, and it's in part because of some of the state and federal barriers that we have in that space. But it's also true in housing, lots of other areas where technology could really be benefiting some of the most important things that we want, some of the most important human needs. But we have an environment that's much less dynamic than in the software space. And I think there are opportunities, both in this administration and in future administrations of both parties, to really make America able to build things in the real world again and have that tech expansion again, not just be about software, but be about some of the things that really matter to all of us: healthcare, housing, transportation.

Lilian Coral: Well, that's maybe a good point to segue into the role of civil society and many of our organizations. You know, obviously—at least I can speak for OTI—we spent a lot of the first hundred days really thinking about especially the impact of a lot of the data centralization and data access that's been happening, and the impacts that has on our civil liberties, or the potential impact on our civil liberties. But as we move forward and settle in within this administration, I do think there is a challenge for us to think about: how do we reclaim, or what is, the optimistic vision of technology? How are we going to almost ensure that America still has faith not just in its institutions, but in our ability to use our technology in really good and positive ways? So that's how we're thinking about this moment, but I'm curious to hear about how you and your organizations are really tackling the shifts, whether it be politically or just a lot of the reorienting around how decisions are being made. And then, you know, what's the most urgent call to action for civil society right now? It does feel like the kind of work that we all do, whether it's research, advocacy, writing, et cetera, requires kind of new muscles. We talked a little bit about engaging in more popular kinds of conversation and narratives. How are you feeling about the role of civil society, and how are you all in your organizations really preparing for this beyond 100 days?

Cody Venzke: Yeah, my first reaction, partially in response to Neil's previous comment, is that for civil society, I think they have to have a “yes and” mindset. And that's the ability to preserve civil rights and civil liberties while thinking critically about what sort of safeguards make a lot of sense. You know, Neil pointed to examples in the real world of factories or developing energy that are potentially facing restrictive regulations. Well, those regulations, at least at one point, I think still do serve sort of as safeguards.

Do they need to be updated? Probably. And we're at the stage with AI where we are looking at the development of AI, looking back at past experiences with social media, with big internet providers, recognizing that opportunities for safeguards in those spaces were missed and harms were experienced. So as we think about, for example, DOGE and its data access, I think there's sort of a natural reaction to want to protect civil rights and civil liberties from the potential that data access and data consolidation might lead to abusive surveillance, as well as abuse of particular individuals who are identified through those massive data sets. On the flip side, though, it's important that we also recognize the potential benefits from thoughtful data sharing at the federal level. And so, using that as a specific use case, there's an importance for civil society to have that “yes and” mentality: that we need to address the civil rights and civil liberties harms while also enabling efficiencies that can make people's lives better.

Shane Tews: Yeah, I have a blog out titled “The Dangerous Road to a Master File: Why Linking Government Databases Is a Terrible Idea.” So I have definite concern about the idea that we put all of this information into one gigantic honeypot that is just available for, you know, whoever decides to break in, who then knows a whole lot of information that is just set up for, you know, tremendous amounts of challenging things for people, whoever they decide to go after, whether it's government, whether it's the legal system, whether it's somebody doing a phishing attack, whether it's done for criminal activity. It's just really not a great idea.

Neil Chilson: Yeah. And on the role of civil society here, I do think that, you know, civil society in many ways is in the idea game, right? Advancing ideas. And we have a whole new ecosystem of idea production that's developing, in part because of generative AI. And I think that many of those tools can be used by civil society organizations, and should be, to allow…

I think this effect will happen in the commercial sector as well, but I think we should take those lessons. It's going to allow scale for organizations that have far fewer humans involved. And so when I think about how civil society should engage in this space, I think we need to see a lot more small-scale experimentation that dials up fast, goes away fast if it doesn't succeed, and then scales when it does succeed. I think we can do that. I think it's not the natural inclination of many of the ways that we've done funding and institution building in the past, but I think it's possible, and I think there should be a lot more of that. And the Abundance Institute is trying to do things like that.

Lilian Coral: And do you feel like there's something we're not doing as civil society? I loved your frame, Cody, of the “yes and.” So is there an immediate reaction? I mean, is there something that we've been sort of failing to do? Again, there is enough continuity to some degree within these administrations, at least on sort of some of the basic regulatory spaces of tech, quote unquote. But is there something we haven't been doing as much of, where we run the risk that if we keep neglecting it as civil society, you know, we're going to be worse off for it?

Cody Venzke: I always think that thinking about long-term consequences is a big part of that “yes and” thinking. It can be very easy to take advantage of opportunities as they present themselves politically, or to react to concerns politically. And, you know, I think Salt Typhoon, which came up earlier in the conversation, is a good example of this. The ACLU and many allied orgs raised concerns about a particular law passed back in the 1990s that required backdoors in telecommunications, saying this is going to be a major cybersecurity vulnerability. And for 30 years, we were sort of the tinfoil hat wearers on that particular point, until it turned out that it was being exploited by geopolitical adversaries. So I think thinking through those long-term political consequences and international consequences is a really key component when you're engaged, as Neil put it, in the ideas business, because that short-term political opportunity or that short-term political response can create systems that are going to exacerbate harm down the road.

Neil Chilson: Yeah. And this echoes some of what we were saying earlier, but especially the AI space has shown that maybe civil society has not thought too much about what a regulatory environment in the US looks like vis-à-vis other international spaces. I mean, often, a lot of the arguments have been that we should model ourselves on Europe for some of this, but the experiment has played out. That has not worked out well for Europe on the growth of GDP per capita or any number of other measures of advancing human welfare through technology. On the other hand, we have China, which does something very different in tech policy than the US does, obviously, and does not have many of the same sort of civil rights concerns, but is rolling forward into the future very quickly. I think civil society needs to think about those international implications as well. How does that play out? What might we be asking for in the US that might slow our economic growth? But also, how can we do that in a way that, to Cody's point, preserves what makes the US great? Which is not to imitate everything that China does. I don't think that's the road to growth. The US has grown, and its tech sector has grown, not because of top-down, command-and-control, CCP-style direction, but because of the organic, bottom-up, market-driven innovation that we have, and the guardrails that we have in the Constitution that keep the things that really matter on the human rights front front and center in what the government can do. I think keeping both of those in perspective is really important.

I think we've focused a lot on the sort of constitutional rights side of it, and have focused a little less on how we compare internationally and on the economic growth side. I think that's something worth thinking about.

Shane Tews: I would just add, going back to privacy: my colleague Jim Harper thinks I've given in because I think we should have a national privacy bill. That's because he says people should be able to choose things at a level. But the problem is, with social media, it's become a big marketing machine, and you don't know at some point where the information flows. The key on that is compliance.

You know, we have seen a much bigger uptick in these companies spending money on compliance, which is only to make sure that the information doesn't go somewhere where they're going to get a huge fine for it showing up. Third party is huge. I think that we need to really focus on this; you know, I think we're much more aware of how much information flow we generate as humans and where that goes. And we need to have an understanding of, you know, who has that data. How do I manage that data? Can I get that data back? Can I get it erased? Does it have a timeout factor? Probably all four of us, at least, think about those things. It's socializing that, so people think more carefully about the data they share. And then, you know, continuing to create a way for that to be managed, whether it's on a platform or, you know, wherever it goes, as we continue to work through the challenges of getting a national privacy bill. Because eventually it does become a security problem.

Lilian Coral: It's interesting. When I thought about this conversation, I presumed that there would be more differences amongst us than not, but oftentimes, at least in this conversation, it feels like there's more commonality. Yes, some of the details and positions are very different. What would you describe as maybe the big fault lines within civil society, where perhaps there's an opportunity for us to, you know, almost engage with each other more? I mean, Neil, your point about, you know, the swing towards constitutional rights. I mean, that's very clear. There's been a big wave of civil society that's really focused on that. So there's that gap there. And then there's, I would say, another cadre of folks who focus more on the economic benefits. And so there's a middle ground there that we all need to come to. But are there other major fault lines within civil society that we should probably spend some time convening around and trying to get to a healthy medium?

Neil Chilson: To me, the big one, you could see it sort of in the reaction. And I don't know that it's an ideological fault line; maybe it is, but it is sort of an attitudinal fault line: there is a reflex, I think, in much of civil society to say, like, hey, this is a new tech. It's in the commercial space. The commercial space is driving it. And therefore, let's focus on the potential harms, right? And you could see that in the Biden executive orders: very focused on mitigating risk and managing risk, and less about, you know, taking this giant opportunity for this next sector that the US has current leadership in. And so I think we need more discussions around that. There are ways that technology can really drive a lot of the goals I think we all share about human flourishing. And what are the trade-offs that we're making by saying, “hey, we don't quite understand this tech, but let's approach it from a risk framework rather than an opportunity framework?” And I think that sort of gives you a status quo bias. It's the sort of thing that makes it harder to improve certain things. Maybe it makes it safer. Maybe it doesn't. There's a lot of these technologies that could make life a lot safer. Automated driving is probably the biggest one, the most concrete example: it can make life a lot safer, but, you know, we're focused on these other types of harms. And when we approach technology from that sort of risk-based approach first, rather than opportunity-based, I think we might be missing some opportunities to really achieve the goals that I think people want across the political spectrum.

Lilian Coral: As someone who spent some time looking at how we deployed autonomous vehicles in cities, I agree that that's a great example. There's a lot of, you know, safety benefit to autonomous driving. They are technically better drivers than we are, but at the same time, there's a lot of just hesitation. So let's kind of reorient and start to close out the discussion. What do you think the future of tech policy making should look like? I know we've talked a lot about some of the shifts that we're feeling. Technology is not magic. It is made by people and for people. To your point, Neil, it can harm, but there's also a lot of opportunity. It just really depends on how we govern it. So what do we need to prioritize, and what are you hoping gets really emphasized in the next three years, three and a half years?

Shane Tews: So the IMF has put out an AI preparedness index. And I think that is an area where we do need to have some really broad thought and focus, which is what I think of in my head as the infinity loop. I work with one of my colleagues at AEI on workforce and development, and then education. And the fact is we have very poor indicators of where AI is making changes at a national level; actually, every time I talk to them, they say it's actually regional. They need better regional information to then understand how to go back to the education system and change the curriculum, so we're educating students into the next generation of our economy. And we're doing very poorly at this. I mean, I'm not somebody for central planning, but I think better information flow lets you make better decisions, you know, small, medium, and large, both as companies and as individuals. You want everybody to succeed, but if only certain groups are taking the time and can understand what the future is, we're not sharing that information at a level where we can all be making better decisions about where we want to be spending our time, money, and energy going forward. So I think with artificial intelligence, that's the whole idea: thinking about, you know, being prospective, to Neil's point about being optimistic. We need to put that into play in our technology policy.

Cody Venzke: Yeah, I think related to Shane's point is accessibility and access to technology. We don't want to see this tech revolution become limited to just those who have access to it. And ensuring that people have access to technology has been part of tech policy conversations for decades now, but it tends to come in fits and starts. And we are just now approaching thorough, deep broadband availability.

And I don't think we want to wait that long to ensure that people have access to artificial intelligence as it becomes sort of a staple in the economy.

Neil Chilson: I think we probably should stop talking about tech policy. I think that all policy has a tech component. I think what we will see, descriptively, is that these fights sweep in every single part of the economy, and that a general-purpose technology like AI is going to have implications for every single regulatory sector.

And so policymakers are going to have to have a frame where they think about what is the role of technology and the transformation that it can bring, and how should government engage with that? So I think that advocacy organizations and civil society need to think hard about how to reach people who don't have all of the back history of all the fights that we've had in the tech policy space for the past 20, 30, 40 years.

How can we reach and talk to them about the benefits and the necessary guardrails that come in all of these different sectors as tech invades all of them?

Lilian Coral: That's a great point. Yeah, as we say, new tech, same issue. So that's a really great way to end. Thank you all for this timely and thoughtful conversation. I really hope that our listeners come away with a little clearer sense of the stakes, what's sort of energizing at the moment, and then some of the ways in which we all hope at least that technology continues to serve democracy.

Stephen Darling: Thank you for listening to Democracy Deciphered. Our host and executive producer is Shannon Lynch. Our producers are David Lanham, Carly Anderson, Joel Rienstra, Trent Cokley, and Joe Wilkes. Social media by Maika Moulite. Visuals by Alex Briñas. Media outreach by Heidi Lewis. Please rate, review, and subscribe to Democracy Deciphered wherever you like to listen.