The Journey Matters: resisting tech solutionism, building community, and rewriting narratives
A conversation with Lia Holland exploring how storytelling opens space for meaning, equity, and imagination
Lia Holland is a digital rights advocate at Fight for the Future and an author who organizes at the intersection of technology policy and creative storytelling. She spearheaded the organization’s Stop Copaganda toolkit project, which brought together writers, activists, and technologists to reimagine how surveillance technology is portrayed in fiction. In this interview, we explore her approach to creating productive friction in tech systems, the power of community organizing beyond government regulation, and why she believes the journey of resistance is just as valuable as the destination.
Editors’ note: This article is part of an expert interview series that aims to disentangle form, function, fiction, and friction in AI systems. We invite you to inhabit adjacent possible worldviews, engage with the conversation that follows, and respond to our call to experience at the end.
Enjoy!
Confronting Bad Ideas: The Drive to Challenge Tech Solutionism
Question: What motivates your current work? What are the values, visions, and metaphors that are driving what you’re doing today?
Lia: There are so many different compounding motivations. At Fight for the Future, we’re primarily driven by the need to stop assigning tech the responsibility and perceived capacity of fixing social ills. We often confront ideas like ‘if we could just censor the right people, then everything would be better online’ or ‘if everyone is surveilled and big brother is watching, then we’ll all be safer.’ Obviously, what I said there is quite facetious, but it does end up being a distillation of the thinking among policymakers, and even a fair amount of left-leaning human rights advocates, when it comes to safety online. So many of these harms boil down to rampant social inequality and a lack of resources–financial, medical, mental health, educational, and so on. It’s frustrating to see more and more investment in technology to oppress the people affected, instead of investment in systems in which they would not need to be oppressed.
This dovetails with the fiction work I do as an author. I’m working on a lot of mystery stories right now on topics including harm reduction and gender-based violence at music festivals, Oregon’s timber industry, and the political machinations around forests that are “managed” yet burn hotter and are filled with pesticides that pollute groundwater supplies.
So, all in all, I’m motivated by confronting bad ideas. At Fight for the Future, we really take pride in being internet Mean Girls and clapping back.
When Technology Polices Students: Lessons from the Proctoring Wars
Question: Could you share a story where you or a fictional character experienced friction in an interaction with technology that was productive (or unproductive)?
Lia: At Fight for the Future, we’re thinking a lot about the use of technologies in education. One of Trump’s new executive orders would restrict federal funding for AI in schools to tools that parrot conservative talking points–essentially, he would prefer that AI not teach perspectives he disagrees with.
I remember when a previous technology was injected into the education space: proctoring technologies during the pandemic. AI-powered assessment tools stepped in to ostensibly replace instructors and police students for cheating on tests. The result was that the systems used during the California bar exam couldn’t even detect people with darker skin, particularly Black people. There were accounts of students being penalized for taking a break or leaving the room during tests that sometimes ran for multiple hours. Some Twitter accounts were retweeting students with their bottles of urine–students who wouldn’t leave their seats because they didn’t want to fail! And many instructors said, “Well, if the computer says that the student is cheating, then I trust the computer.”
This was a moment of incredible experiential friction for the students, the educators, the tech services, and activists alike. But another problem they faced was the incredibly litigious nature of the e-proctoring companies. If an instructor said anything negative, the company would send a cease-and-desist letter and even threaten their institution. They sued students who tweeted about the companies’ publicly available code. This created an extreme culture of fear, which we feel again today with a lot of these AI technologies. Another example is the facial recognition database that Palantir is building for ICE. We have to ask: what work are these technologies really doing in our communities, and who are the people they’re ostensibly supposed to serve?
At Fight for the Future, we created a website called “Proctorio is worse than a proctology exam.com” to reset the Overton window1 of public perception of the company by sharing the experiences of students around the world. This way, educators and other organizations didn’t have to be afraid to speak out, because what we were saying was much worse. So activist friction is what comes to mind for me. These corporations try to eliminate friction–to eliminate the human entirely. My aspiration is that when they try to do that, they actually encounter more friction from people who see the harm in eliminating, replacing, or disregarding the human–and the impact that has on actual human beings.

Friction as Journey: Why the Process Matters as Much as the Achievement
Question: If you could take a step back and describe: what is friction for you?
Lia: Friction is the journey. It is what you go through to achieve your goal. For me, the friction is just as valuable as the achievement because it’s where I learn, connect with other humans, and grow. That is such a great question because it ties up so much for me. The worldview we’re fighting against is one in which technology ultimately eliminates the opportunity to connect, grow, improve, or learn. I’ll leave it at that.
Beyond Government: Building Community-Powered Resilience
Question: When we can no longer rely on government regulation, what other kinds of interventions do you think could make a difference?
Lia: The government is only one form of organization–often the largest and most powerful, yet also the slowest and least effective. Mutual aid and the communities we organize ourselves are another frame through which to think about how we might implement the friction we need to have just technology in this era of climate disaster. For example, parents, PTAs,2 and state education departments–all of these are institutions that might take a stand and require a different set of principles for the technologies they purchase.
There are a lot of echoes going back to President Bush’s No Child Left Behind Act, which included a provision that effectively banned comprehensive sex education from any school receiving federal funds. My state decided not to take those federal funds and to offer comprehensive sex education instead. In the same way, there could be a movement where schools decide to buy their own AI, or come together in a consortium to demand open-source and auditable AI, along with other safeguards and frictions. The tools of bureaucracy–in state institutions, corporations, or community-run projects–have a lot of power to slow this down. Just choose a community or a connected group of people, find their interests, and start organizing.
The Artist’s Dilemma: Navigating Surveillance Capitalism in Creative Industries
Question: What do you think is the role of writers, artists, and creatives in helping us imagine alternative pathways for navigating existing challenges in AI use and governance?
Lia: It’s incredibly challenging because, by and large, the ability to be successful as an artist is intermediated through galleries, Spotify, publishers, and so on, where so much is opaque. On one hand, your publisher is saying you need to oppose AI because it’s stealing from you by training on your works. On the other hand, they are cutting a giant deal to let an AI company train on those same works–and you won’t see any of the money. All the while, the AI company is discovering how essential those human-created works are for AI–and that humans are not replaceable. There’s so much richness there. We need to see the ties to the bigger systemic issues–record profits in the music and publishing industries, record income inequality, and decreasing incomes for authors and musicians. The most important thing is for creative people, no matter their medium, to recognize that what is happening to them is systemic and that the forces they are fighting are all the same: capitalism, surveillance capitalism, monopoly, and so on.
That should be a different conversation than the one about how a person who never received enough education to write a resume is stealing from artists when they use AI to write one. As if that is at the root of why I, as an author, can’t buy groceries. There’s a lot of reductionist thinking that pits artists against those with even less privilege than them. This is being done intentionally, because corporations see this moment of open-source, decentralized technology and AI as what could finally be the pivot to community control of the arts, with channels of consumption owned by the artists themselves–which would make these giant corporations completely irrelevant. They are really scared shitless about that. So they’ve put enormous energy into making sure that artists won’t eat the rich who are profiting from them, but the little guy instead. Then they’ll attack the AI companies and force them into lucrative payouts for the shareholders of Universal Music Group. So the system of exploitation is perpetuated in this new paradigm, and CEOs, shareholders, and other mega-wealthy people come out on top again–not the creators, nor the people the creators would love to see benefit from their creations.
It’s so funny to think about who is actually the thief in this scenario. I really do have a lot of empathy and understanding for the artists who are pointing at anybody who uses AI and calling them a thief. They’re doing that from a place of fear, and they’re struggling to create art at all. There’s such artificial scarcity–particularly here in the US–and so people are being reactive to that and looking to make change where they can. So they might say, ‘Well, if I can just convince my neighbor to stop using AI to generate his resume, then that’s one small win.’
From Copaganda to Community Control: Building the Surveillance Tech Copaganda Toolkit
Question: What is the backstory of the toolkit? How did it come to be where it is now, and what was it like to pull it together?
Lia: It’s definitely a labor of love. For a long time, Fight for the Future has been having conversations with our allies about the real harms that narratives can have. For example, representations of facial recognition technology in the media are practically torn from the pages of a facial recognition company’s PR deck. This has led to a public perception of the technology as something used by “Good Guys”–i.e., the thing that solves the crime, rescues the child, and so on. There’s a lack of broad consciousness that a technology, or a law for that matter, is only as good as the people who control it. This is something we need to reckon with. Many left-leaning people in the U.S. are confronting the idea that law enforcement might not always be the “Good Guys.”
With the toolkit, we are asking storytellers to do it differently. What if we could clap back at this often-unconscious capitulation to techno-fascism that storytellers make when they paint surveillance tech as something good? It’s a moonshot, because we are part of the tiny digital rights activism space, going up against the greatest concentration of wealth in the world.
How to do that was something I chewed on for several months. Then I had the idea of organizing a story contest at RightsCon–for stories specifically aimed against surveillance tech or pervasive copaganda.3 I dashed off an email to the RightsCon team, and two days later we were on a call.
This was the moment to unite my writing community with my work in the digital rights space–and to create the resources that I wanted to be available for others. Everything that I wished for and wanted to put into this project ultimately came to fruition in a beautiful and expansive way. We had incredible submissions to the short story contest, and the stories were published in Strange Horizons, one of the foremost literary magazines for speculative fiction. Incredible authors and a literary agent agreed to judge which stories would be sent to RightsCon. Chris, whom you have also interviewed,4 came to RightsCon, where we hosted massively attended sessions with well over 100 digital human rights advocates. Together, we discussed how surveillance technologies are represented in fiction–exploring both what participants see and what they want to see–through conversations sparked by stories created by the fiction community.
It was a big, year-long conversation, followed by a huge push to take all of the notes and bring them together, paired with the stories.5 Our incredible designer at Fight for the Future did all this amazing art. Then it was published as a special issue of Compost Magazine and also on Filecoin. Mai Ishikawa Sutton of Compost was there almost from the beginning too, a collaborator and co-conspirator in this labor of love without whom I could never have pulled the project off. The toolkit is available for free, decentralized and uncensorable, throughout the world. It’s a beautiful effort by these two communities, one that can hopefully lay the foundation for storytellers to think more critically and dynamically about how they portray technology in their stories and what readers take away from those representations.

Building Bridges: Creating Infrastructure for Cross-Community Collaboration
Question: How have you been able to create spaces that bridge gaps between technologists, activists, writers, and others? Why are those spaces important, and how can we create more of them?
Lia: There is a hunger on both sides–among the people working for social change and the people creating worlds where change is depicted before it becomes reality. And it’s not that difficult to get people to agree to do things–it’s more about building out the necessary support and infrastructure.
With the toolkit, I looked to build an opportunity for as many authors as possible to have an achievement to point to–an accolade for their resume or biography. I wanted them to be able to say, “I was shortlisted for this” or “I was published here.” It is very prestigious to be published in Strange Horizons, so pairing our human rights connections with a renowned literary venue meant we were able to offer more. We could also tell members of the human rights community, “These authors wrote these for you,” and “You’re creating a resource to ask for more of what you want in the world,” which was very compelling to them.
I’ve worked at the intersection of social impact and the arts for a very long time. Any opportunity or any ask that I make of either of those communities needs to be solidly backed by an analysis of real support, meaning, and impact for those who participate. It’s always a question of: How can this be truly impactful based on the goals of the different communities that are involved? Where would they like to get to with their work? For me, that grounding is the biggest missing puzzle piece that I see so often when these two communities try to connect. We need to be truly building for each community and their goals–and not for one at the expense of the other.
Demanding Better: A Vision for Community-Controlled AI Auditing
Question: If you could design one speculative friction to be tested in an AI system tomorrow, what would it look like, and what would it aim to protect or challenge?
Lia: I think there needs to be some sort of investment in a consortium or auditing body coming specifically from organizations that serve the people who have traditionally been most marginalized on the internet (e.g., LGBTQ organizations, religious minority groups, sex work organizations, and others). The people who are both most knowledgeable about the harms of our existing online paradigm and most left out of the conversation need the ability–or a mechanism built with them–to step in and audit, rank, and score these systems. Or, alternatively, to build their own, because there needs to be much broader transparency to interrogate. Maybe that’s possible in the future with open-source technologies. A framework from these communities around what these technologies will teach the next generation is a piece I’d really be interested in seeing.
We need to stop accepting the premise of these AI giants’ trade-secret marketing blitzes and start asking for what we want. Because the truth is, they can build anything we want into those systems–if only they were forced to do it. It’s well past time we got organized and started asking. And by asking, I mean demanding.
Editors’ note and a call to experience
We’d love to hear from you! Please share your reflections in the comments here. We invite you to respond to our call to experience alternative possible futures for articulating, negotiating, and transforming friction between people and AI systems. How do you envision community-controlled AI auditing? What do you think could contribute to shared infrastructure for cross-community collaboration?
1. The Overton window is the range of ideas, policies, and discourse considered politically and socially acceptable to the mainstream public at a given time. Ideas that fall within this “window” are taken seriously by politicians, while those outside it are deemed too extreme to be viable.
2. Parent Teacher Associations are organizations comprising parents, teachers, and staff, intended to facilitate parental participation in a school.
3. Copaganda is propaganda designed to promote a positive public image of police and policing systems. Recent research engages with the implications of inflated trust in police nationally; see, for example, Emma Rackstraw, “When Reality TV Creates Reality: How ‘Copaganda’ Affects Police, Communities, and Viewers” (July 30, 2025).
4. Read Chris’s RightsCon story here: A Charm to Keep the Evil Eye Away from Your Campervan, and our interview here: Collective Bargaining Through Code: How Solarpunk Fiction Imagines AI Beyond Corporate Control.
5. The toolkit includes the discussion corresponding to each fiction story.