Public Interest Technology
Table of Contents
- Introduction
- Mapping the public interest technology landscape
- The market vs. democracy
- What will it take to achieve truly data-driven policy?
- What makes a public interest venture?
- The politics behind bad user experience
- Rhode Island’s unconventional approach to foster care
- The most empowering tool for hurricane recovery
- We need to empower a new generation of technologists who want to work for the public good
- Rethinking teaching and learning with open educational resources
- Designing for health
- Creating awareness and action through mapping
- Financial Inclusion & Citizen Participation Project: Bridging the data gap for low-income communities
- Fighting for civil rights in the age of technological innovation
- What we mean when we talk about civic tech
Acknowledgments
Establishing the Public Interest Technology program over the past year was an effort that could not have been completed without the tireless support of so many people. Anne-Marie Slaughter has been instrumental in our success, as have our generous funders including the Ford Foundation, Todd Park, Reid Hoffman and the Aphorism Foundation, and the Rural Community Assistance Partnership. We would also like to thank the nonprofits and university partners who have joined our effort in establishing and expanding a field of public interest technology that is diverse and accessible to all.
Introduction
We launched the Public Interest Technology project at New America with a fellowship program on the theory that it would be worth investing at least a year experimenting with a variety of approaches, testing some theories about what it would take to build a field of public interest technology, and developing work that could generate enough momentum and inspiration to set us on a clear course for the longer term. We selected 16 fellows from a pool of fascinating project proposals; they then implemented 13 projects in eight locations across the United States. We focused our work on three central missions: field building, policy implementation, and policy development, which gave our fellows room to develop a broad array of projects, from foster care to criminal justice, immigration to social venture funding models.
We learned a tremendous amount from our field building effort, which began by connecting directly with people engaged in the loosely defined work of public interest technology to determine their appetite for being part of a field, and beginning to define what that field might consist of. Fellows Hana Schank and Sara Hudson conducted extensive research on the public interest technology landscape, and launched our own publication, The Commons, which amplifies stories of innovation in and around government. Andreen Soley and Alan Davidson have been building a network of universities and colleges tasked with defining PIT as an academic field, which has convened in person and by phone to dig into the challenge of building a better career pipeline into public service for students in a variety of fields. We look forward to expanding this work in our second year as we grow the audience for The Commons and expand our work with universities.
Our policy implementation work took on a variety of major challenges across the country. Fellow Marina Martin, working with the Rhode Island Department of Children, Youth, and Families, helped redesign the process for new foster care parent onboarding, nearly doubling the number of eligible families in the state in a single weekend. Fellow Raph Majma worked with the immigration nonprofit CASA to explore how nonprofits might integrate technology into their work for increased efficiency and an enhanced ability to serve diverse populations. We also ran several discovery sprints—short, intense periods of problem investigation and solution sourcing—on rural nonprofits’ use of technology, on the needs in the immigration field, and most recently, on technological possibilities for family reunification during the migrant border crisis.
Finally, our policy development work created important connections with a policy community by co-hosting a 2018 Farm Bill Rural Development Innovation Summit with the Rural Community Assistance Partnership and the National Cooperative Business Association. The summit gathered rural nonprofit leaders, community stakeholders, policymakers, and technologists to explore how we might best support rural America’s entrepreneurial spirit in a digital age, and bring tech thinking to a policy development process. Some of the ideas generated in the summit found their way into the Farm Bill. We hope to build on what we learned in this process to continue developing this type of “user-informed policymaking,” which deploys tools and best practices from the tech sector to bring the people affected by policymaking into the policy design.
As our first class of fellows moves on to their next ventures, we are incredibly grateful for the scope, ambition, and quality of their work. For example, Dipayan Ghosh conducted well-received research on digital misinformation and algorithmic bias; Andrew Lovett-Barron explored funding models for nonprofits and social interest ventures; Sonia Sarkar worked with the health community in Baltimore to make healthcare more responsive to patients’ social needs; Michelle Thompson mapped financial inclusion across four low-income communities in the United States; Denice Ross analyzed networks in communities across the country; and Clarence Wardell worked with the Leadership Conference on Civil and Human Rights on establishing technical capacity within nonprofit advocacy organizations.
Several of our fellows will continue their efforts as we advance into our second year. Hana Schank will continue leading The Commons and establish a portfolio around government procurement resources; Marina Martin is working with several states on bringing technology to child welfare policy; Kristina Ishmael will continue to work with school districts on adoption and use of open educational resources; Raph Majma is leading a research sprint exploring the barriers to naturalization for legal permanent residents; Jeremiah Lindemann is continuing to expand the network of local governments participating in his Opioid Mapping Initiative; and Lauren Greenawalt will push forward with her work with Los Angeles County on using data to build an effective program to divert young people away from the criminal justice system.
We are also delighted to welcome new faces to the PIT team. Afua Bruce and Emma Coleman joined us midway through our first year as the director of engineering and senior communications manager, respectively. Maria Filippelli took on a fellowship leading our Census 2020 work in partnership with the Leadership Conference on Civil and Human Rights; Sabrina Fonseca and Lindsey Wagner are supporting our immigration sprint; and Eli Pariser, co-founder of Upworthy and former executive director of MoveOn.org, has joined our team as a fellow exploring how platforms can support thriving democracies.
We have learned a tremendous amount in a single year, and what we have learned has further convinced us that we are onto something. The tools and methods that governments, advocates, and NGOs use to make and implement policy are incomplete, which deeply undercuts the results. When we put technologists at the table with policymakers, we can accomplish extraordinary things.
Cecilia Muñoz, Vice President of Public Interest Technology
Vivian Graubard, Director of Strategy
Afua Bruce, Director of Engineering
Emma Coleman, Senior Communications Manager
Mapping the public interest technology landscape
By Sara Hudson
This article was originally published in the New America Weekly.
What do we talk about when we talk about “public interest technology”? Ask 14 people at, say, a think tank (hey, New America), and you’ll get 14 different definitions. Oh, plus an invitation to attend recurring conclaves to ponder this very question. (We are, after all, an organization that values research.)
Ask 14 people outside of a think tank, and you’ll get, perhaps, two definitions, nine rejections of that phrase in favor of [insert preferred pet terminology], eight people who say, “I don’t know what to call it but here is what I do,” and 14 people who say, “Umm… well, I do stuff with technology, but I wouldn’t call myself a technologist.”
(More or less. My math might be a bit off.)
So how, exactly, do we define “public interest technology”? In Facebook terms: Baby, it’s complicated. Depending on context and conversation, the phrase can refer to a field, a profession, a methodology, a solution, or an aspiration. In turn, each of those has its own definition.
Public interest technology, the field, is a space funders and foundations want to bring into being, but one still in the process of making itself. Think a ghost gliding around Hogwarts. There’s a corpse there. But it’s fuzzy around the edges.
Instead, its champions frequently explain the field with an analogy to something that does already exist. That analogy goes something like this: In the 1970s, civil rights and anti-poverty movements led to the creation of the field of “public interest law.” Imagine you are a civic-minded young thing and you attend law school. Badda bing, badda boom, you now can have a lifetime career serving the public good. Not as a side hustle. Not only pro bono. Not relegated to volunteer work or the weekend. As your full-time, paying job.
Fast-forward to the 2010s. Run that play again—but this time, swap “technology” for “law.” Build a field where people can build lifelong careers deploying technical skills and solutions for social good.
Welcome to the world, public interest technology, the field.
The natural next definition, then, is public interest technology, the profession. This includes people working in and around government, nonprofit, NGO, university, public sector, and social services spaces. Fun fact: The profession has exploded in the last decade. Second fun fact: Almost none of the people in it use the term “public interest technologist” to describe what they do.
Most people find their way to the profession one of two ways. Many train for jobs in public interest spaces—but once they get there, realize that twenty-first century social problems require solutions and skills that incorporate twenty-first century tools. They start learning, adopting, and advocating for technical and technology approaches and practices across public interest sectors.
Others train to work in technology spaces—but then realize the potential for twenty-first century tools to solve twenty-first century social problems. Increasingly, people who trained for jobs like engineers, designers, programmers, and product managers have responded by entering public interest sector spaces. They help bring best practices from private industry to serve public good.
It’s worth noting that people who fall into the profession of public interest technology travel under many aliases. Community technologist. Civil servant. Designer. Entrepreneur. Digital expert. Hustler. Community advocate. Data-lover. Policy nerd. Problem solver. Superhero. User of technology but not a technologist. Plus dozens of others, including but not limited to project manager, librarian, “the guy/gal you call when you need to fix something,” web manager, hacker, engineer, developer, social worker, community outreach coordinator, comms person, university researcher, chief innovation officer, policy expert, and founder.
You won’t, however, find them listed as “the person who will fix the printer.” Tech support, skilled repair, and help desks are invaluable to public interest spaces. But that isn’t public interest technology, the profession. Think strategic problem solving, not printer repair.
That takes us to public interest technology, the methodology. You’ll be unsurprised to hear this one also has a few definitions. Sometimes, it means the integration of technology and technologists at the primordial moment policy folk and practitioners sit down to solve a problem. Sometimes, it means the deployment of technology, technical business practices, and technologists across public interest spaces to solve problems at scale, at lower cost, and in more nimble fashion than manual, traditional, or waterfall processes. Almost all of the time, it incorporates private sector tools and technologies to reduce the time, money, and labor required while increasing equity and service to the public good and the people it serves.
This leads to public interest technology, the solution. Far too often, antiquated tools and business practices cause inefficiencies, inequities, and injustices in nonprofit, social good, government, and nongovernment organizations. Public interest technology, the methodology, uses modern, private-industry practices and tools to address these kinds of problems. These include user research, human-centered design, agile work processes, open data, clean data, transparent data, use of data at all, constant beta, artificial intelligence, upgrades from manual/written processes to automated/digital processes, and A/B testing.
Before concluding, I’d be remiss not to pause here and note that if you capitalize the P-I-T and add “program,” you get New America’s Public Interest Technology program. It launched this year with three simple goals: do good, build partnerships, and forge career paths. Oh, plus, improve services to vulnerable communities and strengthen local organizations that serve them. Oh, and also maybe square away this whole definitions thing?
Okay. Maybe not so simple after all?
Maybe not. But we need the work of places like New America and so many other organizations and individuals across the country working in this space. Because, in its final definition and at its heart, public interest technology is an aspiration. It’s the hope that one day, the norm, not the exception, will be twenty-first century technology and tools integrated horizontally, vertically, and daily into solving twenty-first century problems faced by the public. It’s the move toward measurable, sustainable, long-lasting impact and equal access to modern solutions to improve modern daily life. It’s a small phrase for big dreams.
It’s a world where an eight-year-old girl can say, “When I grow up, I’m going to be an engineer because that means I can make my neighborhood a better place.” It’s a revolution in how we address social problems. And it’s a commitment to ensuring that everyone, everywhere, gets to benefit from the latest and greatest tools and solutions humans make and have to offer humanity.
The market vs. democracy
By Dipayan Ghosh
This article originally appeared in Future Tense, a collaboration among Arizona State University, New America, and Slate.
If you spend enough time browsing social media, there is a chance you saw an intriguing story shared and re-shared in recent days about how agents of NATO—a long-standing strategic alliance between the United States, Canada, the United Kingdom, and most of continental Europe west of Kharkiv, Ukraine—sprayed chemicals over Poland to damage the well-being of the local population. The original Polish-language account has been spread far and wide with great certitude. Given you are reading this Slate piece about internet-based disinformation, you may already suspect the truth: The Poland story is entirely fake. But would you have been so skeptical if you had seen it shared on social media by the people you trust most?
In recent days, researchers have shown that agents of the Russian government have pushed the Poland story—an example of pure disinformation in its most egregious form—on the most visible social media platforms. And though the long-standing chemtrails controversy has been verifiably (and repeatedly) debunked, many social media users continue to believe it, making them particularly vulnerable to the false story about chemicals sprayed on an unwitting population. We know that these sorts of conspiracy theories do not necessarily recede with time. Instead, they are often so intelligibly and inflammatorily recounted that they continue to spread, affecting susceptible readers who might not question their veracity or the motivations of their propagators.
These stories aren’t harmless. Consider the foreign policy implications if large numbers of people in Poland (and other countries that sit squarely between the spheres of Western and Russian power) believed this conspiracy theory. Needless to say, scalable belief in the concocted account of a NATO plot to poison swaths of Eastern Europe is hugely beneficial to Russia, which for years has sought to taint the Western alliance’s image and thereby undermine its mission.
The purveyors of disinformation have clearly determined that large-scale social media platforms offer a tremendous opportunity to move people to believe their messaging. Key to their ongoing success is their use of the audience segmentation tools developed by the leading internet advertising platforms. Using these technologies, disinformation operators can target demographic groups that are homogeneous across a certain set of characteristics—for instance, groups of strongly liberal marginalized teenagers who live in large American cities and who take an interest in reading about the events that took place last year in Charlottesville, Virginia—with great precision and accuracy.
As my co-author, Ben Scott, and I describe in a recent report on the ways that disinformation operators leverage web technologies, herein lies the fundamental flaw in the market logic underlying the largest internet platforms. (Disclosure: Scott and I are affiliated with New America; New America is a partner with Slate and Arizona State University in Future Tense.) The digital advertising ecosystem has, over the past 15 years, solidly established itself as the de facto economic backbone of the commercial internet. It implicitly aligns the interests of leading internet platforms that own and operate the world’s largest advertising markets with those of advertising clients themselves, whether they are consumer-facing retail companies trying to sell shoes or foreign actors with nefarious intent. That makes tackling the disinformation problem all the more difficult.
To date, the public knows little about what, exactly, the big tech companies are doing to identify and work against the efforts of propagandists. The industry has proposed measures promoting greater transparency, and that is all well and good. Requiring that ads disclose who paid for them could help researchers and journalists, particularly if the advertisers have political motives for their ad campaigns. But I fear that transparency alone will do very little to limit the effects of disinformation operations.
A more thorough solution must begin with separating the interests of the disinformation agent from those of the internet platform. In the short term, internet companies might decide to limit the activities in which known disinformation agents can engage on their platforms. On Wednesday, for instance, Twitter announced a change to how embedded tweets display on other websites; April Glaser writes on Slate that this may help fight the bot problem by representing the relative popularity of shared content more accurately.
Further down the line, the industry might begin to try solving this problem at scale by developing advanced algorithmic technologies such as artificial intelligence that is able to detect and flag or proactively act against suspected attempts to promulgate disinformation. For example, Facebook, a company that I have worked for, has already begun taking steps to automatically detect and remove fake accounts and interactions from the platform and says it deleted tens of thousands of fake accounts in Germany before the country’s 2017 federal elections.
Such near-term efforts around transparency and the automated detection of policy-violating content will help. But these types of solutions will likely do little to limit the threat posed by nefarious disinformation operators who closely monitor these changes and constantly devise strategies to work around them. The industry must act on this and work with government and civil society to counter and eradicate the deep-rooted societal harms wrought through long-term behavioral data collection, digital advertising audience segmentation, and targeted dissemination of sponsored and organic content.

Regulators around the world are already vociferous about the dangers they believe leading internet platforms pose to society. They rightly point out that we need comprehensive privacy and competition policy reforms to limit the impact of disinformation and the other broad concerns surfaced by the leading internet platforms in recent years. What exactly should these reforms look like? I wish I knew; for now, I don’t. These are thorny problems. But acknowledging the fundamental alignment between the goals of the platforms and the disinformation purveyors is the right place to start the inquiry. If we pretend that the digital advertising industry’s business model has nothing to do with the ease with which bad actors can plant false stories, then we are missing something critical.
We have entered a new age defined by the digital technologies we have come to adore. Where the television and telephone once dominated, over-the-top video and social media now pervade. But with these changes come a new set of challenges to keep our society safe and equitable. That means prioritizing our democracy over the market.
What will it take to achieve truly data-driven policy?
By Lauren Greenawalt
This article was originally published in the New America Weekly.
As the thinking goes, you can’t manage what you can’t measure.
In 2016, the National Academies of Sciences, Engineering, and Medicine was charged with examining the impact of permanent supportive housing programs on health and healthcare costs. In a report released earlier this month, the Academies noted that while this sort of housing likely improves health, there was “no substantial evidence” to prove it. Why? The group was able to conclude “less than it had expected would be possible when embarking on this work,” largely due to the absence of relevant data, which either hadn’t been collected or was otherwise unavailable.
And yet, the Academies’ data dilemma isn’t unique. We, as a society, often invest significant resources into ambitious public policies. But despite the time and money we spend doing this, we struggle to determine whether these policies have successfully met their goals. In no small part, that’s because we typically lack monitoring and evaluation mechanisms that can help us decide whether policies are really effective. As a result, failing policies may be left in place, rather than tweaked to reach their intended outcomes. Or, as in the example above, policies that are bringing value may be unable to demonstrate it—and they may then be vulnerable to funding cuts.
Either way, the public loses: Policymakers miss an opportunity to advance good policy, taxpayers don’t see a return on their investment, and those whom a policy is intended to help aren’t served. In other words, it’s hard to make good policy when we don’t know how, why, or when policies are doing what they’re supposed to do. So, how do we achieve truly data-driven policy?
While many policymakers show a growing appetite for evidence-based or data-driven policy, attempts to evaluate policies are often quashed by the very same barriers that beleaguered the National Academies: a lack of data. It may seem somewhat strange, in today’s world of heavy information collection, that we don’t have the data necessary to appraise policy. But that’s at least partly because we often design data-collection forms and processes with operations, rather than evaluation, in mind. As a result, sometimes, the data needed for evaluation isn’t collected at all. And at other times, it’s collected in a way not accessible to researchers.
If policymakers want to reap the benefits of data-centric policy, they must prioritize data and evaluation from the outset. This means deciding what must be measured, and then determining how it will be collected, made available to the public, and analyzed. Luckily, a number of groups are making all this less abstract, as they lay the foundations for more data-driven policy.
Take, for instance, New York City’s Criminal Justice Reform Act (CJRA). The New York City Council designed the CJRA, which allows low-level offenses like violating park rules or drinking from an open container to be diverted from the criminal justice system to a civil court, with future evaluation in mind. Crucially, the law’s authors pointed out the CJRA’s potential to reduce racial and geographic disparities among low-level offenses, and passed legislation to ensure that progress toward this goal could be measured. One bill in the CJRA, in particular, requires the city’s police department to publicly report, each quarter, counts of criminal and civil summonses issued by offense, race, and geography, among other factors. Data-wise, the CJRA’s success lies in the trifecta of making clear what needs to be measured, compelling police to collect relevant data, and creating mechanisms to share results. The quarterly reports, along with a larger policy evaluation, will in turn help the council—and the public—see whether the CJRA is meeting its goals, and can inform future policy discussions on how to improve or expand the policy.
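To make the “what needs to be measured” piece concrete, here is a minimal sketch, in Python, of the kind of disparity check that quarterly reports like the CJRA’s make possible. The file name, column names, and categories are hypothetical, invented purely for illustration; the actual NYPD reports are published in the city’s own format.

```python
"""Hypothetical sketch: summarizing quarterly summons data published under a law like the CJRA.

Assumes a CSV export named summonses_q1.csv with columns "race" and
"summons_type" ("criminal" or "civil"). The file and column names are
invented for illustration only.
"""
import csv
from collections import Counter

total = Counter()     # all summonses issued, per group
criminal = Counter()  # summonses routed to criminal court, per group

with open("summonses_q1.csv", newline="") as f:
    for row in csv.DictReader(f):
        race = row["race"]
        total[race] += 1
        if row["summons_type"] == "criminal":
            criminal[race] += 1

# One rough disparity check: what share of each group's low-level summonses
# still went to criminal court rather than being diverted to civil court?
for race in sorted(total):
    share = criminal[race] / total[race]
    print(f"{race}: {criminal[race]} of {total[race]} summonses "
          f"({share:.1%}) were criminal")
```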
So, the CJRA offers an example of how policymakers can lay the groundwork to measure the success of a new policy. But what about when a policy is already in place?
In this instance, policymakers can work retroactively to outline metrics, collect data, and promote analysis of existing policies. Let’s look at California. There, county and state leadership have noted shortcomings in evaluating the impact of CalWORKS, a major cash assistance program funded through federal dollars. According to these stakeholders, data practices comply with federal reporting regulations—but fail to measure whether the program is achieving its goal of improving the lives of recipients. To address this, the California legislature passed legislation mandating a new performance management system for CalWORKS. As a result, stakeholders will now outline what should be measured to track CalWORKS’s success, and in turn they’ll ensure that important, relevant data is collected and scrutinized.
A working group is currently crafting metrics to do just that. By 2019, counties in California will be required to track related data, as well as provide annual progress reports on these metrics. On top of that, every three years, counties will have to conduct their own self-assessments, based on the data, as well as develop an improvement plan to fine-tune these indicators. While it’s most efficient to establish metrics and collect data from the very beginning, California’s efforts to re-evaluate a major welfare program show that it’s never too late to improve.
This isn’t to suggest that we ought to treat data as if it’s infallible. The opportunity for additional governments to follow the highlighted examples is tremendous, but it’s also key to recognize the limitations, even risks, of data analysis. In its report to the legislature, California’s Legislative Analyst’s Office echoed the potential for better performance management to improve the state’s welfare program, but noted various challenges in analyzing and interpreting data on policy outcomes, such as the risk of over-attributing positive outcomes to a policy when other factors may have played a role. Existing government initiatives provide some ideas of what this sort of forward-thinking precaution can look like. The United Kingdom’s Justice Data Lab, which evaluates government and nonprofit programs, publishes plain-language explanations of its analyses that spell out what conclusions can—and can’t—be drawn from them.
Policymakers must also work to protect against the potential harms of data analysis. A growing body of research shows that, without careful consideration, this sort of collection can work against the very people policymakers intend to help. Cathy O’Neil has noted that while the public often reveres math and statistics as objective, analysis usually still reflects intentional or unintentional biases. In a similar vein, Virginia Eubanks has found that large troves of government data can feed algorithms that surveil and punish citizens, especially vulnerable ones. Some policymakers have already started taking steps to ward off the potential dangers of data. For instance, New Zealand’s chief government data steward recently released a set of principles to guide the government’s data collection and use in order to mitigate the potentially negative consequences of data analysis. These principles enshrine a commitment to protect personal data used in analysis, and to monitor and address potential bias in analysis.
As people from city council members to state legislators continue to prioritize evaluation in their approaches to policy, reports like the Academies’ ought to become relics of the past. Collecting good data is difficult, yes, but that shouldn’t stop us from measuring our policies so that we can unearth best practices—good policy and people’s livelihoods depend on it.
What makes a public interest venture?
By Andrew Lovett-Barron
Originally published on the Public Interest Technology blog.
I’m always curious about what is happening behind the wall and underneath the streets. The infrastructure (human, informational, mechanical) that constructs our lives is something that we often engage with in superficial ways. When I left the U.S. Digital Service in early 2017 to join the first cohort of the New America Public Interest Technology fellowship, I was given the opportunity to learn a bit about the business models that underpin our public interest technology landscape. Specifically, I wanted to meet the teams building the technology that is increasingly being recruited to advance the public interest.
This is an emerging field.
Public interest technology is both emerging and well established in 2018. Chat with any one of us who has spent some time in the U.S. Digital Service, or doing technology in civil society (like, say, the incredible digital teams at Planned Parenthood), and you’ll hear stories dating back years about how technology best practices have been leveraged for good.
At the same time, there is not yet a defined field. New graduates from engineering, design, and business programs struggle to understand where their skills fit in a landscape of fundraising, direct service programs, and legal advocacy. And even when these opportunities are discovered, they rarely come with the competitive salaries made necessary by the crippling educational debt associated with top programs in the United States. To create a vibrant public interest tech sector, public interest technology ventures HAVE to be able to create appropriate incentives to draw more people with the right skills to this work.
To create these incentives, we need to understand what is happening “behind the walls” of public interest tech. How are the pipes that route capital between organizations configured? How does value get transmitted between donors, customers, beneficiaries, and contributors? Where are the blockages that might lead, ultimately, to a collapse or a failure in the system as a whole?
Who is doing public interest tech now?
In my interviews with different public interest tech entrepreneurs and funders, I spoke a lot about the sustainability of these ventures. In this case, sustainability means keeping the business that provides the service or product alive, with its employees paid and its servers running.
Jimmy Chen told me what it is like to create and run the for-profit, mission-driven software company Propel, which builds a better experience and better technology for food stamp recipients.
Rohan Pavuluri shared his experience co-founding the nonprofit legal aid platform Upsolve, which helps low-income Americans reclaim their financial independence through easier engagement with the Chapter 7 bankruptcy system.
Matt Grasser and Chris Guess told me a bit about their journey co-founding and bootstrapping the open-source crisis management system LDLN after their experiences responding to Hurricane Sandy and volunteering in the response to Typhoon Haiyan.
And I spoke with an incredible mentor and funder who helped develop Jimmy and Rohan’s work, Hannah Calhoon, about her experience creating the Blue Ridge Labs accelerator program, and how she is iterating on the “how” of bringing public interest technology ventures to life in sustainable ways.
What I learned from public interest technologists
From these interviews, and dozens of others that didn’t make their way into their own write-ups, a few broad themes emerged that I want to share with the budding (and established) social entrepreneur.
1) Don’t choose your corporate structure too quickly
For mission-driven ventures, there are frequently more options than for traditional ventures when it comes to the tax structure your company will adopt. 501(c)(3) tax-exempt status comes with advantages, like being able to more easily pursue vital grants from public entities and philanthropic organizations, but it also has no concept of equity for shareholders, meaning that venture capital and the traditional funding paths open to for-profit entities are unavailable.
That said, the for-profit entity comes with its own host of obstacles, including seeing your social mission undermined by profit motives and fiscal responsibility to your shareholders. I touch on some of this in my interview with Propel. Sometimes, though, the nonprofit route is the best choice, as was the case for Rohan and Upsolve. There is no blanket solution, and as you develop your product and your understanding of the problem, carefully consider what KIND of organization you’re forming.
2) Failing fast involves a lot of risk for the people you are trying to help
Building a software product involves a lot of failure. You are trying to create something with the right technical underpinning, that works for the user community you’re trying to help, that is arriving at the right time in the right place on the right medium, and that has a business model that can pay for the engineers and designers and product managers toiling away at the product. To do this, you iterate on different parts of the product quickly, which often means you rapidly shift directions: shutting down what isn’t working and growing the parts that are.
But what if—in the process of building your software—a community comes to rely on part of your software that isn’t working? What if there’s no business model for providing discreet, free access to consulting for sex workers, for example, but that at-risk community has organically come to rely on your suddenly unprofitable tool?
We have to critically reconsider the tools of software development that we’ve come to rely on if we are going to co-create with communities without doing harm. And as investors and funders, we can take a page from Blue Ridge Labs’ playbook, and reconsider how mismanaged (or even unlucky) growth can undermine the services that smaller populations are already receiving. In other words, we can fail fast, but we should be careful not to break things.
3) The philanthropic sector needs more and better exits for social entrepreneurs
For-profit software companies have a wealth of different business models and pathways to success in front of them. For each of these business models, there are often defined mechanisms for support, growth, and liquidation that exist to support the people engaged in those ecosystems. It means that an entrepreneur can start her business, find appropriate sources of capital to grow along the way depending on where her business is, and exit the business via a plethora of different merger and acquisition models. But in the nonprofit sector, many of these same models don’t exist. Because equity isn’t captured by the 501(c)(3) model, the idea of a buyout or acquisition doesn’t really exist. Rather, you might see a merger of organizations based on mission similarity, but more often than not, a merger is perceived as a failure in the nonprofit sector. This needs to change, and we need to celebrate the work social entrepreneurs have put into building their organizations. Dahna Goldstein and I explored this in an article for the Chronicle of Philanthropy, which we would encourage you to read for some ideas on how we can start to change this.
4) The funding ecosystem has not caught up to software products
Sometimes, you don’t get funding because your idea simply isn’t developed enough. Maybe it’s off the mark. Maybe you don’t have the right team. But sometimes, it’s because the money hasn’t caught up to the needs that social enterprises are signaling. Many of the social enterprises I interviewed were struggling to find capital. It is relatively easy to find funding for the early days. Your team might be subsisting while you put everything towards building the product. But then the next stage of capital isn’t there. Philanthropic capital works in very different ways from traditional venture capital, and so in many cases both funders and entrepreneurs can fall dramatically short of meeting each other's needs. Many organizations like Fast Forward, Blue Ridge Labs, Omidyar, and others are gracefully experimenting with different funding models, from impact investment to fellowships and different approaches to grantmaking. So as a social entrepreneur, it makes a LOT of sense to consider your business model as something that must be bootstrapped like LDLN did, that might need to grow slowly via earned revenue, and that might have to survive long enough for the industry to catch up.
What’s next?
As my year with the Public Interest Technology Fellowship comes to a close, I’ve walked away with a desire to keep building in this space. The program presented a great opportunity to build, and to explore what public interest tech means, how it works, and why it works the way it does.
So what’s next? New America’s Public Interest Technology initiative is continuing to grow and evolve. I captured a bunch of my writing on this blog, and will continue to support an open source project I developed called the Grant Calculator. For more of my work and the work of other fellows, I’d encourage you to sign up for the mailing list and apply for open fellowships in the future if what you see interests you!
As for me specifically, since starting the fellowship I have moved from Washington, D.C. to Copenhagen, Denmark, and am working on my own venture in public interest tech with a more international lens. You can keep in touch via LinkedIn, Twitter, or email.
The politics behind bad user experience
By Raph Majma
Originally published on the Public Interest Technology blog.
We’ve all seen it: a long line of visibly upset people who are wondering how much longer they’ll have to be there. While they stand there, there’s a form that has to be filled out. The person who handed them the clipboard knows that this piece of paper will travel a long way to its final destination—a box somewhere, locked away and only to be seen again if there’s some problem in accounting.
This probably describes every doctor’s office you’ve ever visited, and every trip to the DMV. But what about when the stakes are higher? What happens when an immigrant doesn’t have hours to wait to speak to a volunteer attorney? Bad user experience represents more than an inconvenience—it can be harmful.
Unfortunately, bad user experiences like this are often the norm, because people who confer benefits don’t often have the time to consider what the whole process looks like from the applicant’s eyes. Their focus, understandably, is squarely on what information they need to collect. When I worked in government, I saw firsthand just how routine difficult processes like these are for applicants, and now, I have found how pervasive bad process design is in the nonprofit community as well.
The pain of forms is universal across government and nonprofit service providers. Forms are written in confusing legal language, require an excessive amount of sensitive information, and can be extremely repetitive. Of course, gathering that information is crucial—it can show whether or not an applicant qualifies for a particular benefit. But these forms often require information from individuals that the government agency or nonprofit already has, creating a time consuming and unnecessary hassle for both the applicant and the provider.
The examples of bad forms creating negative user experiences are endless—from veterans struggling to get healthcare and college benefits to the long wait times for legal permanent residents trying to naturalize. In government, there are few incentives to actually improve the application experience, because if everyone who qualified and applied for benefits immediately received them, the government couldn’t afford it. Paper is slower, and it allows agencies to keep up with and anticipate demand.
Even if everyone agrees to improve a process, designing a user-centric form for benefit applications is harder than it seems in both the government and nonprofit space. In a vacuum, it’s easy to approach a blank whiteboard and dream up a better experience. But when you have to start adding in bits of reality, like database integration or changing physical workstations or breaking a reliance on printers, the redesign can feel like an insurmountable challenge.
I’ve spent the past several months working with a nonprofit that assists low-income immigrants, helping them replace their paper information gathering process with a series of digital tools. We’re tackling their membership application first—because that seemed like the easiest, most lightweight thing to replace. But even starting small, the process has been a struggle.
We’ve hit two major snags that are relatively common for any agency that tries to redesign a process. The first is a lack of resources, in every sense of the word. In order to redesign the applicant experience, you need time, people, and money, three things often in short supply. The time is for devising a better process, the people are for owning that process and ensuring that it doesn’t break down, and the money is for hiring a technologist or purchasing technology, when necessary. If even one of these isn’t available, the redesign fails. Unfortunately, service providers often have a desire to do better, but process improvement is pitted against helping those directly in front of them right now. They could change how they do things, but that means they’d have to turn people away. The reality they face is impossible.
If a new process is successfully designed, the second snag still stands in the way. It’s really hard to bring new technology into an organization used to doing things a certain way. If you integrate a technology that is too complicated, the training sessions will take forever and people will quickly find a way to subvert the new process because it’s faster in the short term. Data management presents another challenge—if you remove a line of information from a form, it can be seen as antagonistic, even if that data was never used or useful. Changing the way people work needs to be done thoughtfully.
Right now, it isn’t clear whether we’ll be able to successfully replace a cumbersome paper process with a lightweight digital tool, or whether the body will reject the transplant. Our goal is to eliminate as many employee steps as possible, while also creating an easier and quicker process for applicants. If the new process works, it can save time and energy for the organization. If it doesn’t, we’ll pivot and try something else. But regardless of whether we succeed on the first try, the problems we face here will persist in so many other organizations and sectors.
So there’s more work to be done, but there is strong hope for the future. We should strive to minimize the burden of accessing benefits so that our most vulnerable communities can get services when they need them. In an ideal world, both government and nonprofit service providers would take process redesign seriously, so that in the future, you only have to fill out one form, and stand in one (hopefully short!) line.
Rhode Island’s unconventional approach to foster care
By Marina Martin
This article was originally published in the New America Weekly.
At any given time in the United States, a staggering 400,000-plus children are in foster care. Regrettably, there aren’t enough people—whether relatives or licensed foster parents—to take in all these children. On top of that, there are even fewer homes available when you take into account that each child needs to be placed in a home suited to her particular needs and characteristics. For instance, a three-year-old placed in a group home with 50 other children, or placed in a foster home two hours away from her grandmother or her preschool, might technically be “safe,” but she surely doesn’t feel at home. And in 2016, the median time a child spent in foster care was 13.9 months—a very long time to be away from Grandma.
Rhode Island, in particular, has struggled to find enough homes, especially enough of the right homes—those that speak the same language and are in the same school district and are near relatives, among many other potential criteria for making a good match. This issue is only magnified by the fact that the foster family on-boarding process is extensive and confusing, and often takes over a year to complete. All applications are received and processed on paper, making it difficult to keep track of each applicant’s status, not to mention time-consuming to pass the file from person to person.
And yet, in recent years, Rhode Island’s approach to foster care has begun to improve—and it’s become an example of the fact that, sometimes, technology isn’t the silver bullet we make it out to be.
The team at the Department of Children, Youth, and Families (DCYF) responsible for recruiting and on-boarding new resource families is small but scrappy. Its members spend hours on tasks you wouldn’t necessarily expect, like trying to match Blueberry the cat’s medical records, which just arrived in the mail, to the right family’s folder, since up-to-date pet vaccinations are a licensing requirement. This also means that, despite everyone’s best and most heroic efforts, these new families can experience the kind of frustration that ultimately causes them to drop out of the process, before ever taking in a child. The consequences of an overwhelmed staff and outdated process are clear: 75 percent of teenage foster children in Rhode Island are never placed with a family.
When I talk to fellow technologists about the challenges across the country in foster care, they often ask what they believe are glaringly obvious questions: Why not just use big data and machine-learning to solve the problem? Why not just match kids to homes with an algorithm? Why not just replace an aging IT system with Salesforce, or just put a form online to automate the process?
I admit that when I first started working in government, I had many of the same questions. But if working in government taught me anything, it was that, when it comes to issues like foster care, nothing’s ever just as simple as using technology.
As the chief technology officer of the U.S. Department of Veterans Affairs from 2013 to 2017, I was responsible for, among other things, recruiting private-sector technologists. During interviews, as I described some of the challenges before us, many applicants would rush to propose an obvious private-sector solution: Just move your applications to the cloud. Just use Google Docs. Just make a website through which veterans can apply for all their benefits. Again: just, just, just.
Modern technology isn’t a cure-all, and a change of any kind—technical or not—in a large bureaucracy will always have attendant consequences downstream. At the VA, moving a single application to the cloud for the first time required years of full-time work. From this, we learned the hard way the human costs of shifting to a cloud environment with a workforce that had expertise only with an on-premises data center and had been given zero access to training. Clearly, the notion of making an online application was more complicated than it may have seemed to the casual observer.
So, what did we do? And what implications might this work have for states beyond Rhode Island?
With no technology other than an Excel spreadsheet, the Rhode Island DCYF team organized a weekend event designed to get as many pending families as possible through their outstanding licensing requirements, a process that normally takes months to complete. Fingerprinting for background checks was on-site, staff members were on hand to answer questions and navigate edge cases, and training classes were offered so that families could finish the hours they needed over the course of one weekend. In addition, a physician was on-site to complete free physicals for those who may not have their own primary care provider. In the end, 174 families completed the weekend.
Above all, this meant that many more families were available for the hundreds of at-risk children in Rhode Island who needed safe homes.
But there was, too, an unexpected outcome from that weekend: the sense of community participants developed. Many described the weekend as a sort of summer camp, and appreciated meeting others whom they could lean on for support. That sense of community could mean that these families are more likely to foster longer and band together to support some of the children who are more difficult to place, like a group of five siblings.
If we’d taken the suggestions of the technologists I met—just build an online form!—opportunities like these, that allowed for human connection and community-building, would’ve been missed entirely.
Will technology be able to help the Rhode Island team better support these 174 new families, and continue to recruit and support new ones? Absolutely. Plus, some exciting technology modernization is already underway for DCYF. But Rhode Island also proved that you can get meaningful results with some process change and a spreadsheet. People there haven’t let technology dictate the path forward, and they haven’t waited to get results for children who don’t have time to wait. They forged ahead with the tools they had, while simultaneously working on a long-term technology plan that will be deeply informed by the needs of users, thanks to the initial groundwork.
Importantly, these lessons extend beyond Rhode Island. Foster care has no shortage of challenges across the United States, and there’s no tech magic that will definitively solve all (or, truly, any) of them. Rather, it’s typically the dedicated employees on the front lines, day after day, who see the challenges, know how to approach them, and come up with the checklists and wall charts that move the needle for at-risk children every day. As technologists, the best way we can help is to listen to these employees’ expertise and find ways to scale those checklists, shifting their time away from data entry and toward more face time with the real humans who need their help.
We ought to stop talking about just making a website or a form or a cloud-based system, because for too many problems, technology just isn’t the answer. Once we realize that, we can take stock of the tools we have, establish a plan, and do the kind of resourceful work that’d even make Angus MacGyver proud.
The most empowering tool for hurricane recovery
By Denice Ross
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate.
Six months after the federal levees failed and Katrina flooded 80 percent of New Orleans, I went to City Hall to try to get electricity restored to our house.
The government building’s seventh floor was so full that the overflow had to be moved to the ground-floor lobby. Most people waiting carried stacks of paperwork and photographic documentation of their damaged homes. One elderly woman ahead of me finally got her turn but walked empty-handed to the counter. She didn’t have a permit to file. She just wanted to know: Had any of her neighbors gotten permits to rebuild? Could they tell her which of her neighbors might be moving back? She didn’t have enough information to decide what to do next.
And she wasn’t alone. Without data on the rapidly changing housing and demographic situation, businesses didn’t know how many customers they might have. Charities didn’t know which services were most needed and where. Neighborhoods didn’t know how to prioritize volunteer efforts to rehab houses. The whole city was flying blind.
As months turned to years, people increasingly lost confidence in government agencies and philanthropy. News reports on federal dollars going to the region and donations coming into nonprofits were abundant, but people looked at their own stalled recovery and asked, “Where’s the money?” The lack of financial transparency only added to the sense of uncertainty and suspicion.
We can do better with Harvey and Irma.
Like Katrina, these 2017 hurricanes are all-hands-on-deck disasters. Government can’t do it alone. Public and private recovery efforts will need to align, but the required level of coordination will only be possible if everyone is working from a shared base of common information and trust.
The good news is that, 12 years after the storm that devastated New Orleans, communities today can take advantage of data transparency—a powerful tool that can help align federal, state, and local government efforts with those of the private sector and philanthropy.
That may seem obvious—of course these institutions should share relevant information with the public in times of need. But it’s harder than it appears. For one, transparency during crisis doesn’t come naturally. Some may worry that opening up data may open them up to scrutiny. Power companies might be nervous that outage data would make them look bad. Elected officials might worry that families won’t move back if environmental data reveal contamination or if crime looks to be on the rise. Nonprofits might be hesitant to open data on donations and outcomes because of public scrutiny of overhead costs. Institutions might worry about privacy or fraud—say, a scam artist targeting households that received recovery benefits. Or they may simply not realize they have useful information to share.
New rules that require some federal and local government agencies to make certain data open and accessible to citizens have helped. But some high-value data sets might require public records requests, and politics can still get in the way. (This White House, for one, has increasingly put inconvenient data in its crosshairs.)
Transparency also isn’t just related to what information these organizations share but also how they share it. It’s most effective in the form of “open data,” or data that’s released in a structured, machine-readable format that can be downloaded, sorted, analyzed, mapped, and graphed. (In its highest form, it’s also available in an API, or application programming interface.) This digestible format makes it especially valuable for experts such as meteorologists, policymakers, data journalists, emergency managers, software developers, and advocacy organizations. It can also be a constructive resource for the general public, especially when software developers make it available through easy-to-use mobile apps and websites, like those that tell you when the next train is arriving.
Why is the format so important? New Orleans provides a good illustration. Just after the storm in 2005, the city government offered public access to information about building permits, but users could only look them up one address at a time. There was no way to analyze citywide permit data to, for example, see how many homes in a flooded neighborhood had construction permits for rebuilding above base-flood elevation, or to check what historic buildings were up for demolition. Maddeningly, the simplest way to answer these questions seemed to be to literally drive every street yourself.
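To make the difference concrete, here is a minimal sketch, in Python with pandas, of the kind of citywide analysis that a single machine-readable permit file makes possible. The file name and column names (neighborhood, permit_type, elevation_above_bfe_ft, historic_landmark) are hypothetical placeholders, not an actual city schema.

```python
# Minimal sketch (hypothetical file and column names): the kind of citywide
# analysis that a single machine-readable permit file makes possible.
import pandas as pd

# One download instead of one lookup per address.
permits = pd.read_csv("building_permits.csv", parse_dates=["issue_date"])

# How many homes in each flooded neighborhood have rebuilding permits that
# raise the structure above base-flood elevation?
elevated = permits[
    (permits["permit_type"] == "residential_rebuild")
    & (permits["elevation_above_bfe_ft"] > 0)
]
print(elevated.groupby("neighborhood").size().sort_values(ascending=False))

# Which historic buildings have demolition permits? (assumes a boolean column)
demolitions = permits[
    (permits["permit_type"] == "demolition") & (permits["historic_landmark"])
]
print(demolitions[["address", "issue_date"]])
```

The particular library is beside the point: once permits are released as a structured download rather than one address at a time, questions that once required driving every street can be answered in minutes.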
We’ve learned a lot since then, and open data has already played a crucial role in the preparation for and response to both Hurricane Irma and Hurricane Harvey. The National Oceanic and Atmospheric Administration’s geospatial data on active hurricanes, for example, gave residents and responders important real-time updates about the path and severity of the storms. The Census Bureau’s emergency management map—which includes downloadable data on population density, language, disability status, household income, and vehicle ownership—has been and will continue to be used by emergency planners to coordinate their response. And Miami’s Planning & Zoning team is collecting flood survey data, photos, and resident stories, and making it public as it comes in. This on-the-ground information provides a much-needed update to the Federal Emergency Management Agency maps that are used to model storm surges and will help citizens and city planners make informed decisions about waterfront zoning and land use.
But as I saw in New Orleans, recovery is a long-term process. Stakeholders in the recovery will expect—and need—regular, timely, detailed updates from a variety of sources. Some data is worth prioritizing over others—and some of it comes from surprising places.
As the elderly woman in line with me in 2006 understood, building permits—that first signal of a property owner’s intent to rebuild—are among the most important data that local governments can release to the public. After Katrina, for example, a Harvard University team got special permission to access the city’s permitting data. The team then analyzed the records to help the Broadmoor neighborhood identify and strategically invest in blocks that appeared to be near a tipping point of returning residents and businesses. Today, despite being one of the hardest-hit neighborhoods in the city, Broadmoor has rebounded to 90 percent of its pre-Katrina addresses.
Other New Orleans neighborhoods, however, ran into bureaucratic barriers when they tried to acquire the same information, exacerbating the already uneven recovery. In 2009, when I (as a citizen) asked the city of New Orleans to open up its permitting data so that all neighborhoods would have timely updates, officials told me that the city couldn’t release the information. They said that to do so, they’d have to pay tens of thousands of dollars to the software company that built their permitting system just to add a “download data” button to the public website. It’s a frustratingly common problem. Companies often intentionally build obstacles into their contracts or software architecture that make it difficult for clients to extract their own raw data (which, no surprise, makes it difficult for clients to switch their service provider). The tactic, known as vendor lock-in, also makes it arduous and expensive for local governments to open up their data to the public.
Some municipalities have already switched to software providers that, by default, provide systems that make it easy to share data with the public. Miami-Dade County citizens, for example, can view newly issued building, repair and demolition permits in nearly real time thanks to homegrown permitting software. As recovery efforts from the 2017 hurricanes continue elsewhere, it’s time for other cities and gov-tech companies to step up and make this kind of open data a standard feature.
Details related to parcels, those rectangles of land that people own and pay taxes on, are also among the most valuable post-disaster data sets. The boundaries are particularly useful as a scaffold over which you can map other information such as flood zones, building permits, code enforcement violations, 911 call origins, and the statuses of various services, such as water, electricity, mail delivery, and trash pickup. The Washington Post, for example, recently combined a Harris County parcel map and FEMA data to help readers visualize repeat flood damage in a particular Houston neighborhood. A map like this can serve to ground difficult dialogues about possible government buy-outs for neighborhoods with repeat flooding.
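As a rough illustration of how parcels work as a scaffold, the sketch below layers flood zones and permit locations onto parcel boundaries using the open-source geopandas library. The file names, field names, and the “AE” zone filter are hypothetical, chosen only to show the shape of the workflow rather than any jurisdiction’s actual data.

```python
# Rough sketch (hypothetical files and fields): parcel boundaries as the
# scaffold onto which other recovery data is layered.
import geopandas as gpd

parcels = gpd.read_file("parcels.geojson")          # one polygon per parcel
flood_zones = gpd.read_file("flood_zones.geojson")  # FEMA-style zone polygons
permits = gpd.read_file("permit_points.geojson")    # permit locations as points

# Tag each parcel with the flood zone it intersects.
# (A parcel touching two zones will appear twice; acceptable for a sketch.)
parcels = gpd.sjoin(
    parcels, flood_zones[["zone", "geometry"]], how="left", predicate="intersects"
).drop(columns="index_right")

# Count rebuilding permits that fall inside each parcel.
hits = gpd.sjoin(permits, parcels[["parcel_id", "geometry"]], predicate="within")
counts = hits.groupby("parcel_id").size().rename("permit_count").reset_index()
parcels = parcels.merge(counts, on="parcel_id", how="left")

# Parcels in a high-risk zone with no rebuilding activity on file yet.
stalled = parcels[(parcels["zone"] == "AE") & (parcels["permit_count"].isna())]
print(len(stalled), "high-risk parcels with no permits on file")
```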
Despite their foundational role in disaster recovery, parcel data sets are often not released due to vendor lock-in and because some jurisdictions sell, or think they can sell, that data. In post-Katrina New Orleans, neighborhood groups unable to get the information through public record requests traded bootleg copies of the parcel map. Even federal agencies such as FEMA struggled to get this data from the local government. When I was co-directing the nonprofit Greater New Orleans Community Data Center, the feds asked me for parcel data sets. I had to direct them, too, to the bootleg copy. When I joined the city of New Orleans government in 2010, one of the first things I did was release the parcel data—an act that was met with hugs from grateful residents at public meetings.
Accurate data on the location and status of open businesses also serve as a lifeline to returning residents, who may have narrow windows between work shifts, caregiving, and rebuilding to pick up essentials, and may have limited transportation options to get from place to place. The data can also provide crucial information about what neighborhoods are being left behind in the recovery. When I returned home four months after the storm, with an 18-month-old child on my hip and a husband deployed overseas with the military, I relied on signs in the neutral ground (New Orleans speak for median strips), word of mouth, and trial and error to find an open hardware store, grocery store, or pharmacy. After disasters, local journalists often manually compile lists like this “Find Open Stores in Your Area After Harvey” feature by Houston Public Media. But these can be difficult to maintain. To be a definitive resource, you first need a complete, accurate list of all the businesses in an area. You then need regular, in some cases daily, updates on their hours of operation.
Open government data solve the first problem. States and cities typically have business license data that include the address and business type, such as hardware store, pharmacy, grocery store, or gas station. They’ll also have data about re-inspection for licensed facilities like child care centers, nursing homes, medical facilities, and restaurants. But that information only goes so far. To make up for the gaps, some places have been creative. After flooding in 2016, the Louisiana Business Emergency Operations Center created a website where people could tag a business as being open or closed. Areas recovering from disasters could also ask credit card companies to release nightly data on businesses that have processed customer transactions in the past 24 hours, a form of “passive crowdsourcing” that helps take the burden off residents and journalists to keep this information updated.
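A hedged sketch of how those two streams might be combined: the complete roster from open business license data joined with whatever daily open/closed signal is available, whether crowdsourced tags or card-transaction activity. All file and column names here are hypothetical placeholders.

```python
# Hedged sketch (hypothetical files and columns): layer daily open/closed
# signals over the open business-license roster to find coverage gaps.
import pandas as pd

# The complete roster comes from open license data.
licenses = pd.read_csv("business_licenses.csv")

# Daily status signals: crowdsourced tags or card-transaction activity.
status = pd.read_csv("daily_status.csv", parse_dates=["reported_at"])
latest = (
    status.sort_values("reported_at").drop_duplicates("license_id", keep="last")
)

merged = licenses.merge(
    latest[["license_id", "is_open"]], on="license_id", how="left"
)

# Essential business types with no confirmed reopening, by neighborhood.
essentials = merged[
    merged["business_type"].isin(["pharmacy", "grocery", "hardware", "gas_station"])
]
unconfirmed = essentials[~essentials["is_open"].fillna(False).astype(bool)]
print(unconfirmed.groupby("neighborhood").size())
```

The license roster answers “what should exist here,” and the daily signal answers “what is actually open today”; the gap between the two is the map of neighborhoods being left behind.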
Passive crowdsourcing has also played an important role in tracking whether people are returning home—arguably the single most important indicator of recovery in a neighborhood. There’s someone who usually knows which houses are occupied and which are vacant: the mailperson. In 2007, the Tampa Bay Times published a fantastic profile of Charles McCann, a veteran letter carrier who tracked recovery progress in New Orleans as he walked his daily Lower 9th Ward route. The U.S. Postal Service typically does not allow for public release of this information, which is part of its paid Delivery Statistics Product. But if the USPS fails to step in, direct mail marketing companies could fill the gap by sharing their data about addresses receiving mail—as the coupon circular company Valassis did to track repopulation after Katrina.
Data can also play a vital role in addressing public safety concerns, which don’t end when the wind dies down and the water recedes. Sharing open and accurate information from police about crime reports, arrests and citations, use of force, traffic stops, and 911 response times will help keep trust and accountability flowing in the tense atmosphere of a long recovery (an area where the New Orleans Police Department notably struggled).
Governments must also treat environmental data as vital public safety information. In the aftermath of Katrina, I never felt like I knew for sure whether the air, soil, and water were safe. We chose to stay, but two families on my block never returned, concerned about raising their children in contamination. Reports of toxic floodwater and contaminated air are already widespread in Houston. Though the Texas Commission on Environmental Quality website doesn’t publish its findings in an open format, a local civic tech company created an open-source script to scrape the data so researchers, journalists, and the public could analyze it. It was a crafty move, but we shouldn’t have to depend on the benevolence of private citizens. Governments and even nonprofits (such as the Environmental Defense Fund) should make their environmental testing data open by default.
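For agencies that publish results only as web pages, a scrape-and-republish script can be very short. The sketch below uses pandas to pull an HTML table and write it back out as machine-readable CSV; the URL, column names, and the benzene screening level are placeholders, not the actual Texas commission site or its standards.

```python
# Hedged sketch (placeholder URL, columns, and threshold): scrape an HTML
# results table and republish it as machine-readable CSV.
import pandas as pd

URL = "https://example.gov/post-storm-sampling-results"  # placeholder, not TCEQ

# read_html returns every <table> on the page as a DataFrame.
samples = pd.read_html(URL)[0]

# Normalize column names and make the measurement numeric.
samples.columns = [str(c).strip().lower().replace(" ", "_") for c in samples.columns]
samples["benzene_ppb"] = pd.to_numeric(samples["benzene_ppb"], errors="coerce")

# Republish in an open, analyzable format.
samples.to_csv("sampling_results.csv", index=False)

# Flag results above a hypothetical screening level.
print(samples[samples["benzene_ppb"] > 5.0][["site", "sample_date", "benzene_ppb"]])
```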
This is just a sampling of the high-value data sets that can aid communities in Texas and Florida as they rebuild after their devastating storms. For those involved in the efforts, I worked with veterans of Katrina and other disasters to assemble a more complete—and growing—list here. For those affected, data transparency can make recoveries more predictable, fair, and efficient. It aids citizens considering when and whether to move back, businesses debating whether to invest, and government and philanthropy deciding how to spend dollars responsibly.
Perhaps more importantly, it provides the kind of information that allows residents affected by storms like Hurricanes Katrina, Sandy, Harvey, and Irma to feel like they can play an active role in our democracy and have a say in the shaping of their collective future. Disaster heightens citizen engagement, and all levels of government will be better served if officials and citizens can use accurate and open data to have constructive dialogues about how to move forward, based on a shared base of information.
Five years after the storm, the transparency tide finally turned in New Orleans. I distinctly remember the moment at a crowded City Hall meeting when I saw the power of what open data can do. The topic was the tens of thousands of storm-damaged, vacant buildings still plaguing the city. A resident came to the podium to complain about her neighbor’s falling-down house, which was attracting vermin, crime, and illegal dumping, and making it hard for those on her block to move forward with recovery. She gave the address and pulled out a printed piece of paper, a printout from BlightStatus—a simple tool created by a team at Code for America that made it easy for anyone to look up recently opened government data on the current state of blighted properties—that detailed every time inspectors had visited the site, what they found, the property owners’ responses to hearings, and more. The city’s director of code enforcement pulled up the same app on his tablet and entered the address. City Council staffers flurried to do the same for their bosses. For the first time since the storm, everyone had the same information. There was no arguing about the facts. Instead, the conversation that ensued focused only on possible solutions.
We need to empower a new generation of technologists who want to work for the public good
By Andreen Soley
This article was originally published in The Commons.
The demographics of high school graduates are shifting fast. Over the next 10 to 15 years, Hispanic high school graduates are projected to increase by 50 percent and Asian and Pacific Islander graduates by 30 percent, while African American and white graduates decline by 6 percent and 17 percent, respectively. For colleges, this means student bodies increasingly made up of non-white, first-generation students, many of whom will also be over the age of 24 and working part-time.
For those who have long endeavored to make the technology sector more diverse, this marks an incredible period of growth and potential for change. But how can they ensure that those who want to use their technological skills in service of the common good find pathways to economic growth and social mobility? Universities are working on the answer.
At a summit New America hosted with the Ford and Hewlett Foundations this summer, university presidents and provosts met to talk about the role of universities in growing the field of public interest technology (PIT). Through their curricula, career advising, fellowships, and internships, universities are uniquely poised to help students develop the relevant techno-social skills needed to work in PIT. Universities should graduate students with fluencies in technical fields paired with an understanding of an ethical, legal, and policy framework; by doing so, their graduates can assess and incorporate the societal implications of technology into their work.
Given student demographic shifts, any university field building work needs to understand which universities have attracted and graduated technologists of color. Historically Black Colleges and Universities (HBCUs), which make up just 3 percent of all postsecondary institutions, awarded 17 percent of all Science, Technology, Engineering, and Math (STEM) baccalaureate degrees earned by Black students between 2002 and 2012—which is actually a drop from the 24 percent they graduated in the 10 years prior. So, while HBCUs are doing the lion’s share of diversifying STEM for African-Americans, their ability to produce STEM graduates has eroded, often as a result of a lack of funding and resources. Similarly, Hispanic Serving Institutions (HSIs) are key drivers in increasing the educational attainment for Hispanic students. Despite occupying less than 5 percent of the higher education space, HSIs enroll nearly 50 percent of all Hispanic undergraduates, and produced 40 percent of the Latinx STEM bachelor’s degrees in 2010. Even so, HSIs face many of the same resource challenges as HBCUs.
If PIT is a rallying cry for universities to shape the impact of emerging technologies on the world, we must acknowledge that the inequitable division of resources and labor will make that work difficult. To ensure that a consortium of PIT-dedicated institutions garners broad-based support from the full range of universities and colleges, we must work to:
- Promote a baseline level of digital literacy or technical intuition for all undergraduate students, so they can navigate the technological choices they will increasingly have to make regardless of the industries they pursue as careers.
- Seek to align PIT aims and goals with accrediting body standards so that new course content can be incorporated more easily to meet institutional priorities, and so students are not penalized for attending institutions that cannot be as flexible with course creation.
- Support collaborative strategies. A well-resourced university can provide access to new courses or internship experiences for a less resourced partner, while the latter provides the former with access to community organizations and students who bring a different perspective to their work.
- Create opportunities for universities and colleges to advocate for measures that will broaden the impact of PIT. They can support loan forgiveness and repayment, as well as need-based financial aid programs that allow for an expansion of the student pipeline for PIT.
Technological innovations are disruptive, often exacerbating inequality, but herein lies an opportunity for PIT to serve as a corrective by empowering a new generation of technologists who want to work for the public good. Because technology is often designed to reflect the lived experience and education of the developer, a broader pool of technologists working in the public interest can create the solutions we desperately need for issues specific to diverse populations. In order to ensure that technology in the public interest takes root and grows, we need to listen more carefully to those who have been preparing students for the world that lies just around the corner, serving a public that is increasingly dissimilar to the students of the past.
If you want to contribute to the university conversation about public interest technology, please contact soley@newamerica.org to learn more about university partnership initiatives.
Rethinking teaching and learning with open educational resources
By Kristina Ishmael
Originally posted on the Public Interest Technology blog.
Twelve months ago when I joined New America, my charge was to build and sustain momentum around the use of Open Educational Resources (OER) in PreK-12 schools across the country. I had just wrapped up a fellowship at the U.S. Department of Education where I led an initiative that supported educators as they transitioned from traditional textbooks to OER, or materials that are free to download, modify, and use. This movement provided a catalyst for school districts to reconsider their traditional textbook adoption process, but it had yet to reach even a full 1 percent of school districts in the United States.
I was first motivated to do this work in 2007 when I taught Kindergarten in Omaha, Nebraska. I welcomed 20 students into my classroom, 16 of whom were English language learners, and simply knew that the district-issued curriculum would not be sufficient for their learning. Like many other teachers, I found myself spending countless hours outside of teaching looking for additional instructional materials—all so I could help meet the needs of my students and make learning relevant.
Several years later, I joined the Nebraska Department of Education where I supported 245 public schools in urban, suburban, and rural settings as they worked toward developing robust digital learning environments. As I worked with districts on their strategic plans and professional learning around bringing devices into the classroom, it became apparent that access to digital content and curriculum would need to happen alongside digital hardware upgrades.
From my own experience searching for resources that I could use with my students, and my experience supporting districts in this regard, I knew that OER was the way to go. And luckily, teachers, instructional coaches, curriculum directors, and other educational professionals had been developing an abundance of OER—not only individual lessons and activities, but also full curricula, courses, and other professional learning resources.
But knowing they exist did not make them easy to find or implement. The curriculum procurement process in PreK-12 schools is entrenched in tradition and often dominated by publishing companies with large marketing budgets. OER represent a collective, decentralized movement without a marketing budget, so districts don’t consider them.
Three years after the launch of the national initiative to grow PreK-12 OER, the education field is slowly starting to recognize OER as a viable option for instructional materials. But there are still many challenges to address. Over the past 12 months, I sought out explicit focus areas around OER implementation, while sustaining momentum for broader awareness.
The first and most important focus area is building awareness. The national initiative was started by the U.S. Department of Education’s Office of Educational Technology in 2015 and is often considered a “tech” initiative. But in reality, OER should be part of larger conversations about curriculum and instruction. These resources have the potential to help districts rethink the goals of teaching and learning, implement new instructional models, empower teachers as subject matter experts, and provide timely and relevant materials to use with students. New America launched an In Depth site to provide basic information about OER, recognize districts across the country using OER, and provide a curated list of resources for getting started and professional learning. Our hope is that this site broadens the conversation, shows the many uses of OER across the country, provides resources to get started, and continues to build awareness.
The second focus is discussing quality of content. OER has been largely driven by the contributing community, and because of that, content has continued to improve: the open license allows anyone to revise content to make it more dynamic, localized, and up to date. There have also been more comprehensive curricula made available with an open license, such as Illustrative Mathematics. In January 2018, Ed Reports, an independent nonprofit that reviews K-12 curricula, reviewed the 6-8 math program and gave it one of the highest scores of any series. It happens to be OER and can be procured by a district for free, as well as further customized to meet the needs of its students.
The third focus is developing a core group of stakeholders to sustain momentum. The stakeholders involved already are diverse in roles and locations—state leaders, district administrators and teachers, researchers, nonprofit organizations, and foundations. New America and the International Society for Technology in Education (ISTE) worked to bring together a vibrant mix of these stakeholders at two separate convenings this past year to focus on mentoring and support structures, sustainability, policy, and research. Together, this coalition committed to concrete action items that will continue to advance OER in the coming years.
OER has incredible potential in the hands of PreK-12 educators across the country, and it is my goal to continue building momentum. As I reflect on a year’s worth of work, I see the faces of the hundreds of educators I’ve worked alongside. I see district leaders making critical decisions about the instructional materials their teachers will implement in classrooms with incredibly diverse student populations. I see teacher leaders doing the hard work of unpacking standards, discovering resources, and serving as subject matter experts to make decisions around materials that will best suit the needs of their students. I see OER as a way to rethink teaching and learning.
Designing for health
By Sonia Sarkar
Originally published on the Public Interest Technology blog.
Wrapping your arms around healthcare as an industry and a service can be frustrating. It’s also universal—nearly all of us have had a patient or caregiver experience that exposes the fragmentation, complexity, and non-user-friendliness of standard medical care.
One of the biggest and most fundamental challenges of our existing healthcare system is that it is set up to treat you when you’re sick, not to promote health or prevent you from getting sick in the first place. Our health is much more dependent on what happens outside of the clinic walls, in our day-to-day lives and communities, than on any prescription or diagnostic test. The "social determinants of health," things such as job stability or access to healthy and nutritious food, have nine times the impact on health outcomes that clinical care does. And yet, we don't define, deliver, or pay for healthcare along those lines.
Over the past 12 years, I've seen this issue from a number of different angles—first as a student advocate working directly with patients in a busy pediatrics clinic in East Baltimore, then as part of an organization leading a national movement around addressing patients' basic needs (housing, food insecurity), and most recently at the Baltimore City Health Department, on a team supporting a city-wide approach to addressing social determinants.
And in that time, I've seen significant momentum around recognizing social determinants as key ingredients for good health. In 2016, the Center for Medicare and Medicaid Innovation (CMMI) announced Accountable Health Communities, a $157 million fund for institutions that are addressing patients' social needs at scale. Awards were granted to 32 demonstration sites, each of which will screen up to 75,000 Medicaid and Medicare beneficiaries and then direct them to the community resources that will address their needs, via technology solutions such as developing a city-wide database of those resources and workforce solutions such as deploying community health workers who can build long-lasting patient relationships. In the past two years, the number of tech vendors focused on the social determinants of health has increased dramatically, and health systems are increasingly naming this work as part of their core strategy.
Encouraging as these developments are, however, they also raise new questions. As a Public Interest Technology fellow, I’ve been particularly obsessed with this one: How do we leverage technology to support this transformation from a medical care system to one that promotes health and well-being?
On May 11, practitioners from across the Mid-Atlantic region gathered at New America to explore this question in depth. Together with representatives from local health departments, nonprofit organizations, federally qualified health centers, social service providers, technology vendors, and large health systems, we explored what it would look like to move beyond "social determinants of health" as a buzzword and instead adopt it as a comprehensive strategy.
Three key themes came up. First, it's impossible to design a new health system, and the technology that comes with it, without the real end-users in the room. "Patient-centered" is a term that gets thrown around often in healthcare, but when you’re talking about designing a screening tool to understand a patient’s housing situation or risk for interpersonal violence, the voice of that patient is crucial. As communities and healthcare stakeholders pursue this work, we need to think about how to move beyond one-time focus groups or occasional surveys and instead integrate patients into the decision-making process as subject matter experts in their own lives and communities. By soliciting direct user feedback, designers and practitioners can help limit the unintended negative consequences of new technology, as well as ensure its usability and efficiency.
Second, as we move toward a more integrated view of what health looks like, we also have to exercise caution. A unified window for practitioners that shows both the healthcare and social services accessed by a patient sounds helpful in theory. But what does this mean for patient privacy, consent, and confidentiality? As we grapple with the use of individuals’ data across multiple sectors and use cases, how do we balance usefulness with patient preferences? While there are no straightforward answers, two places to start are implementing straightforward ways for patients to give consent about who has access to specific types of their data, and then ensuring transparency about who is actually accessing it.
And finally, how do we think about the other side of the equation—the community organizations, social services providers, and others who are providing services essential for good health but who aren’t seen as current fixtures of the healthcare system? Facilitating connections between the healthcare system and these stakeholders is predicated on first building a technology workflow and infrastructure that makes sense—a small food pantry, for example, may not have the capacity or the systems to receive an electronic referral from a clinic and then track whether the patient has actually obtained food. So for this to work, we also need to understand the social services technology infrastructure and invest in expanding it and modernizing it.
Even given these considerations, the movement towards patient health (rather than sickcare) is an exciting one. Redesigning our healthcare system requires every tool at our disposal—and technology, with its ability to streamline processes, surface the right data at the right time, and link together disparate organizations, is an essential one. Building a great health system requires intentionality and engagement, and soliciting user feedback from patients, community providers, medical professionals—from all of us—on what this new future could look like is a key place to start.
Creating awareness and action through mapping
By Jeremiah Lindemann
Originally published on the Public Interest Technology blog.
Eight months ago, when the Opioid Mapping Initiative started, only a handful of governments were mapping data on overdoses and deaths from the opioid epidemic. Those that were mapping often did so for internal purposes, and rarely shared the results publicly. While it is still far from a common practice, many local governments are beginning to realize the benefits that arise from engaging their communities with data.
Though the initiative began just eight months ago, my inspiration for this work was established long ago. I lost my brother, J.T., to the opioid epidemic in 2007, and I have been looking ever since for ways to tell the human story of great lives being lost while also making real data available for public awareness and better decision making. I started the initiative scrappily—stalking local governments and setting up Google alerts on agencies doing this work. I quickly learned who the leaders in this space are: places like Northern Kentucky Health, the Tri-County Health Department, and Oakland County. But even with their progress, no local government had identified a comprehensive set of opioid response solutions, so these early innovators have been willing to learn from and share with each other on monthly webcasts.
Beyond idea-sharing, one of the central benefits of this convening is to pool resources for national awareness. Most of the data and maps related to the opioid epidemic within the last few years have been community resources, showing where to drop off unused medications or access naloxone and treatment facilities. While these are extremely useful, we set a larger goal—assembling national maps for the locations of permanent prescription drop boxes and naloxone access points. These maps are then able to provide a great start for cross-state initiatives, such as nonprofits that do opioid education across multiple states.
This network has also led to a significant exchange of ideas as local governments share their strategies. For instance, the City of Tempe shared the methods it uses to map EMS overdoses, and Northern Kentucky swiftly replicated its dashboard. There has also been an emergence of new datasets that hadn’t before been widely mapped, such as the hospitalizations that Alameda County, Calif., is mapping and the sewage testing for opioids being done in Tempe. Newer entities to the initiative, such as Cook County, Ill., have been able to learn from other local governments about the importance of real-time data. Now, they are using live data from the county medical examiner’s database to coordinate with the health department and law enforcement agencies. The visualizations and dashboard techniques they used were inspired by other governments that presented their work on the initiative's monthly webcast.
The Opioid Mapping Initiative will continue pressing on, because unfortunately, all data trends show that the opioid epidemic is not getting better. But as timely data becomes more critical, local governments using tech and mapping are leading the way for others to learn innovative new strategies to combat the epidemic.
My hope is that one day, data will be used in ways we haven’t yet considered. In much of the country, there is still a shortage of timely treatment, including medication-assisted treatment. In these cases, data on overdose hotspots could be used proactively to identify areas underserved by prevention and treatment initiatives. As more governments become willing to use innovative strategies, I believe that data can truly begin to shape opioid epidemic policy and save lives.
Financial Inclusion & Citizen Participation Project: Bridging the data gap for low-income communities
By Michelle Thompson
Originally published on the Public Interest Technology blog.
How can we gain a comprehensive understanding of the economic life of low- and moderate-income communities and those experiencing disinvestment and limited growth? Middle- and upper-income communities make greater use of traditional financial services, electronic payments via credit and debit cards, and online banking. Given their engagement in these formal financial systems, the spending patterns of middle- and upper-income communities are better understood. Information on the economic life and spending power of low- and moderate-income communities is limited due to greater use of cash, the prevalence of alternative financial services providers, and the limited incentives to conduct a comprehensive analysis of market potential.
This limited view has far-reaching impacts and ultimately inhibits regional economic development, neighborhood growth, and the individual quality of life for those living in areas affected by information inequities. One particularly lacking source of information is the census, which offers limited data on low- and moderate-income communities because they have traditionally been undercounted. The census, along with private data sources, provides a powerful policymaking and budgeting tool for the government and private sector companies alike, informing their investment decisions for new building developments, retail hubs, supermarkets, and infrastructure projects. Without a holistic understanding of each neighborhood, the decisions made can further inhibit the economic growth of low- and moderate-income communities. Neighborhood and government advocates who seek a remedy for this information inequality have not yet found a comprehensive data solution.
Building on federal, state, and local public information with new data and analytic tools is critical. Part of the solution has involved public participation geographic information systems (PPGIS), a model that has been used to evaluate neighborhood health, wealth, and quality of life by integrating community, university, and municipal information in a combined top-down and bottom-up approach. What studies utilizing this model lack, however, are the stories and experiences of communities that have been affected by a lack of accurate and comprehensive information. Traditionally, community voices have been collected independently by community planning advocates who describe the financial and personal impact of “retail deserts” (including food, banking, and education) through oral histories or case studies, but this information is rarely included in analysis.
The last element needed for understanding community financial power is often inaccessible to community advocates and governments alike: consumer spending insights. These insights may help integrate the above sources into a new model for better understanding the economic life of low- and moderate-income communities. By having a more robust view of community financial well-being and consumer demand through a “middle data” solution, citizen scientists (community planners with expertise in converting neighborhood knowledge into spatially-referenced data) can better assess the purchasing power and economic circumstances of at-risk communities. This, in turn, could give businesses a more accurate idea of community wealth, encouraging them to expand their services into previously untouched neighborhoods.
With a hope to contribute to the citizen scientist movement, the New America Public Interest Technology (PIT) initiative and the Mastercard Center for Inclusive Growth (MCIG) have developed a project that will examine, demonstrate, and share the findings of p3GIS, or a public and private participation geographic information system. In this model, “public” will include community voices, municipal data, and national data, providing a composite neighborhood profile. “Private” will include insights derived from aggregated and anonymized transaction data that may provide a better understanding of consumer preferences, demands, and financial wellbeing.
This effort, called the Financial Inclusion & Citizen Participation (FI&CP) project, is unique in that it leverages the insight of Mastercard anonymized and aggregated transaction data to evaluate low- and moderate-income community financial health. Such data-driven findings will allow the FI&CP project to pioneer a collective research approach that utilizes all possible data sources. Results from this study may further understanding of how to address barriers to financial access, point to new needs and types of financial services, and provide innovative data streams to sufficiently predict spending behaviors and provide a more accurate picture of the economics of low- and moderate-income communities. By having a better representation of neighborhood needs, these communities may see increased investment and a wider variety of services, opening opportunities that might otherwise seem unattainable.
Fighting for civil rights in the age of technological innovation
By Clarence Wardell
Originally published in the Public Interest Technology blog.
On September 8, 2014, I was sworn in as a member of the third class of Presidential Innovation Fellows, a program launched by the Obama administration to bring top technical talent into the federal government to work on some of its most challenging problems. From the beginning, the idea of the fellowship was a gamble—much was unknown as to whether the administration would be able to attract the talent needed, or even whether the fellows would be able to make any significant progress on their designated projects once they showed up. As someone who had devoted a significant amount of time to projects in the civic technology space, and given my desire to connect those projects to larger institutions, the fellowship represented a door that I was glad they had taken the risk to open.
The opportunity ultimately afforded me a seat at the table as the administration began to deal with the national movement for police reform that swelled after Michael Brown was shot and killed by a police officer in Ferguson, Mo. on August 9, 2014. As the administration debated how to respond, a colleague and I found ourselves sitting in a room normally filled with lawyers and policymakers with an opportunity to contribute a different perspective to the conversation. While we were met with some skepticism initially, the public outcry for justice demanded that a wide range of possible solutions be considered. Because the administration had already been building a team in the Office of the U.S. Chief Technology Officer that was keen on bringing the perspectives of tech and innovation to these policy discussions, we worked with them to propose a path forward for broader public access to data about police-citizen interactions.
Ultimately, we launched the Police Data Initiative, which at its core sought to empower local communities to build trust between law enforcement and residents, with data and transparency as the foundation of that relationship. It was a solution born of our experience working in the civic technology and government open data spaces prior to joining the administration. By the time we left, the project had grown to include over 130 law enforcement agencies committed to sharing data publicly, and over 200 data sets released. Not only did this work help lead to more transparency and accountability in some communities, but it also helped set a national model for how data can serve as a neutral convener and be used to drive important conversations around policy reform, particularly those that involve discriminatory treatment of residents.
A team of technologists, lawyers, and policymakers working on public policy was an unlikely configuration in 2014, but this work, alongside other impactful projects in the administration, demonstrated a shifting landscape where still more seats were being added to the table. It proved what could be accomplished when conversations that were previously kept in the closed circle of policy advisors were opened to new voices and innovative approaches. I joined New America as a Public Interest Technology fellow hoping to build on this momentum.
Over the past year I’ve explored opportunities for further meshing tech and innovation tools and approaches with the civil rights and social justice space. This has included working with legacy organizations like The Leadership Conference on Civil and Human Rights to understand its work and the evolving role it can play as a well-respected advocacy institution, as well as spending time with newer organizations like Raheem.AI, a civic-tech nonprofit that collects data and stories about resident encounters with law enforcement. By working with newer organizations in the social justice space, I gained a clearer understanding of how small, scrappy projects are building social justice and technology into their core missions. In part, my goal for the year was to figure out how to bridge the gap between the two ends of this spectrum: how to unite larger organizations that hold sway on policy but lag on innovative tactics with smaller organizations that lack established reputations but intuitively understand the powerful nexus of technology, data, and organizing, in order to leverage the comparative advantages of both for real gains. I believe there is a lot of opportunity in that gap.
While I saw progress in bridging that gap, it has been slower than I had originally hoped. In the current climate, in which civil and human rights are under intense daily assault, little time and few resources are left for exploring new approaches to problem-solving, particularly those that are technology driven. But regardless of whether organizations have time to explore, the pace of innovation isn’t slowing. New approaches that embrace technology as a central tool for fighting civil rights battles will be absolutely critical to the success of these organizations in the coming decades. Organizations like the ACLU are showing what is possible when legacy organizations are able to marshal resources toward technology and innovation in moments of crisis. Their work to not only bring in and support technical talent, but to use data visualization and technical tools to educate, engage, and mobilize their members around complex policy issues such as gerrymandering, is a prime example of the opportunity that lies in the aforementioned gap.
Work like that of the ACLU provides a North Star for the field. Its efforts can serve as a blueprint for others, and ultimately, if it succeeds in increasing its ability to register and engage voters around the protection of civil liberties, that success will further underscore the need for these capabilities to exist more broadly in the civil rights and social justice communities. It is work that I remain invested in, and moving forward I intend to highlight examples of success and possibility in this space, as well as offer my time to collaborate with those who are pushing new approaches for solving some of our most urgent issues as a nation.
What we mean when we talk about civic tech
By Hana Schank and Sara Hudson
Originally published in the Public Interest Technology blog.
As anyone who has spent time working in or around civic technology knows, certain healthy debates tend to come up time and again, get batted around for a bit, and then fall away unresolved.
Are we most effective “fighting fires” and resolving technological disasters as they arise? Or are we ultimately better off trying to change government culture around design and technology?
Is it more useful to act as consultants or specialized innovation teams advising across agencies? Or is change more likely to happen when we’re fully integrated into government structure but also often more isolated from other civic techies?
Can this work be apolitical? Or does it function best when we are aligned with clearly stated goals that come from the top?
What do we call this field? What do we call ourselves? What are we all really doing here?
We had these questions too. We’ve both worked in government at the federal and city levels, and frankly, we were tired of the same ongoing debates about the best ways to get the work done. We wanted answers. We decided to go out and find them.
As PIT fellows, we’ve spent the last eight months interviewing people working in and around government to improve the ways we deliver services and support to people across the country. We started our research because we’ve seen firsthand how people reinvent the same wheels, often without connection or knowledge of like-minded folks creating wheels one town over, or even one agency over. What we found were not only answers to the questions above, but so much more.
We heard that the field is disconnected. That people feel alone. That the work is hard—not just for individuals, but for everyone—and that people crave connection and a way to share ideas and resources. And we have thoughts on how to help. We’ll be sharing some of them at the Code for America Summit.
We also learned how we, the people in this field, can be most effective, and about common misconceptions about how to do the work. We learned that if you’re calling it “innovation,” you might be thinking about it wrong. That real change can come from the smallest of places, as small as a single line of code or the redesign of an envelope. We learned how to hire the right people—and that those people might not be the ones who come to mind when you think of “civic tech.” We’ve gathered stories about great successes and spectacular failures. What works and what doesn’t. Who’s been able to get great work done and where it’s been more of a struggle. We’re eager to share more with you on the current state of the field.