Cautionary Tales and Compliance: Becoming Your Own Filter in the Age of AI
A conversation with AI governance and regulation expert Michael Atleson
What can E. M. Forster’s The Machine Stops or William Gibson’s Neuromancer teach us about today’s challenges in AI governance? In this conversation, we interview Michael Atleson, who shares how some narratives found in fiction mirror real-world risks he has seen both as a regulator at the FTC and now as an advisor to companies on AI governance at DLA Piper.
Most recently, Michael has written about the requirements and implications of the broadest AI transparency law in the United States, the new California Transparency in Frontier Artificial Intelligence Act. Among other critical challenges, his work centers on regulatory developments – from the FTC, the California legislature, and elsewhere – concerning chatbots designed or marketed for mental health and companionship. This sharp increase in regulatory scrutiny of AI companion chatbots aims to protect the most vulnerable users.
In this interview, Michael warns of the dangers of frictionless technology, explains the importance of creating “speed bumps” that protect consumers, and argues that, in an age of opaque AI systems, we must all learn to become our own filter.
Editors’ note: This article is part of an expert interview series, which aims to disentangle form, function, fiction, and friction in AI systems. We invite you to engage with the conversation that follows through our call for proposals that reimagine how applied legal fictions help stakeholders navigate and reconfigure tensions in AI’s design, governance, and use.
Enjoy!
Human Agency at the Center of Technology
Question: What motivates your current work? What are the values, visions, and metaphors that are driving what you’re doing today?
Michael: Even though my current job is quite different from my last role as a regulator, one constant motivation for me is ensuring that agency, control, and responsibility remain centered in humans, not machines. I worry about people handing over too much control to so-called agentic AI, and about the overuse of AI in ways that erode critical thinking skills, especially in professionals and students, but also in consumers.
These concerns do not depend on how capable or reliable a tool is. A quote that often comes to mind is from E.M. Forster’s The Machine Stops (1908): “But Humanity, in its desire for comfort, had over-reached itself. It had exploited the riches of nature too far. Quietly and complacently, it was sinking into decadence, and progress had come to mean the progress of the Machine.” That story could have been written today. It’s a precursor, in some ways, to some of the metaphors within WALL-E, where humans become fat and lazy on a spaceship after Earth is rendered uninhabitable, while machines dictate their daily lives.

Friction in Regulation and Compliance
Question: You’ve worked on AI regulation from both sides, first enforcing it at the FTC and now guiding companies on compliance at DLA Piper. When you think about your work across these roles, how are you thinking about the different types of friction that come into play?
Michael: You can define friction in a number of ways, but I can think of two that came up at the FTC. In my last few years there, I worked in the Division of Advertising Practices.
One issue that comes up all the time in advertising law is what kinds of disclosures companies need to make so that consumers are not misled, either explicitly or implicitly. Whether or not an advertiser uses AI, disclosures in ads need to be clear and prominent so people actually see and understand them. To do that, advertisers often have to go against their own interests and add some kind of friction. It could mean drawing a consumer’s attention to something, or making them click on or read something before making a decision. The point is to get the consumer to pause and think at a critical moment. That is one type of friction in a consumer transaction.
Another type of friction, which I discussed in a 2022 report to Congress, concerns how platforms use AI or other automated tools to address online harms.[1] Platforms can add friction into content moderation, for example, by slowing down something that is going viral (what’s referred to as circuit breaking), downranking content, sending warnings, or demonetizing it. Content moderation is extremely difficult and has become a political issue, even though many of these interventions are content-neutral. Still, people worry that platforms are putting their finger on the scale politically.
At DLA Piper, a big issue for corporate clients is how to control AI use within their companies. Employees may be using approved AI tools or unapproved ones, so-called “shadow AI.” Both come with risks, and companies need employees to understand those risks and act accordingly. That means creating policies and then training on, monitoring, and enforcing them. All of that is a kind of friction, too. It slows down how people use AI within an organization, just as bureaucracy slows things down in government or anywhere else. It takes resources, but it exists for a reason, and we think it is worth it. That is why we encourage companies to put these governance structures in place.
One example is AI-enabled recording and transcription tools. At the beginning of our conversation, you mentioned using one of these tools. It’s very popular, but its maker was recently sued in a private lawsuit over privacy violations related to recording conversations that plaintiffs believed were confidential.[2] A lot of this comes down to settings and clarity about when recordings and transcripts are created, and what users expect. Beyond privacy breaches, there are also issues with state laws that prohibit recording without consent, as well as risks of exposing confidential business information.
Because of these concerns, clients often ask us to create policies governing how such tools can be used. These include rules for recording and transcription, and also for less sensitive features like meeting summaries or action items, which should still require a human to check for accuracy. Companies also need to decide where data from these tools will be stored, and make exceptions for protected labor activity, since employees may record meetings to document unfair labor practices, and the National Labor Relations Board requires that they be allowed to do so.
All of this means putting policies, training, and enforcement in place. You cannot simply tell employees to use whatever tool they want, even if it seems helpful, because of the risks. These requirements add friction, but they are necessary to protect information and safeguard privacy, and they should also give people the ability to opt out when appropriate.
Risks in AI Marketing
Question: Your recent work on AI-washing deals with companies making false claims about AI capabilities. What role does friction, or the lack of it, play in how companies market AI?
Michael: Ads for AI products, whether aimed at businesses or consumers, often promise to remove some kind of friction. For businesses, some ads explicitly claim that AI can replace human employees. Employers may find this attractive because machines do not get sick, complain, or ask for raises. Even if the output is inferior to human work, the perceived removal of friction in managing employees can be appealing.

AI ads aimed at consumers have also drawn attention, and sometimes ridicule. Some of these ads suggest AI can take over personal tasks, like telling stories to your children or helping your child write a letter to a sports hero.[3] These examples upset people because relationships, especially with family members, are supposed to involve effort and friction. Learning to navigate that is part of how people connect with each other. So when ads frame AI as a way to avoid those experiences, they feel offensive. To me, it looks like offloading responsibilities that should be meaningful and enjoyable.
Pro-Social Frictions in Consumer Protection
Question: How do you think about the costs and benefits of friction in consumer protection, particularly in AI? Can you give examples of meaningful or pro-social friction that safeguards consumers, and conversely, harmful frictionless experiences?
Michael: In online transactions, both sellers and consumers might benefit from friction. For example, sellers use CAPTCHAs or ID verification to prevent fraud. Mostly, though, sellers want to avoid these kinds of interventions so that they can get to the transaction faster, unless maybe they’re adding friction to stop consumers from canceling a subscription. As consumers, we are conditioned to be very annoyed by extra steps, but consumers don’t want to encounter fraud, either. So it’s worth pushing interventions that make consumers stop and think before they do something they’re going to regret. That is a key focus of FTC consumer guidance, which encourages consumers to add self-imposed friction. Such speed bumps are especially important now that AI-generated deepfakes and voice clones are ever more realistic and easy for people to create and disseminate.
On the other hand, frictionless experiences can be harmful. As more agentic AI tools enter the market, people will increasingly rely on them for recommendations and decisions about what to buy and from whom. But how will anyone know if they are being steered in a particular direction because of a commercial relationship the operator of an AI agent has, or because a brand has invested heavily in search engine optimization (SEO) to ensure its content is picked up over others? The process is so opaque that consumers simply cannot tell. Some might say the answer is to go back to traditional web browsing and use a search engine, but those are also subject to manipulation through the same SEO tactics. Now, with many search engines offering AI summaries, the problem repeats itself: how can you know that the summary has not been shaped for commercial advantage? Using an agent or reading an AI summary both reduce the friction of searching and comparing, but that convenience comes at a cost. The opacity creates a real risk of harm because consumers cannot see how or why they are being directed toward certain choices.
The Role of Science Fiction
Question: What role do you think science fiction or speculative fiction could play in inspiring new or alternative futures for AI governance that are grounded in a sense of responsibility?
Michael: I see science fiction as a form of “what if” storytelling. Many stories are grounded in present trends and ask what happens if they continue. Because science fiction is so widely consumed, it shapes our imagination. It can help us picture not only AI-related dystopias but also ways to avoid them.
Some tech leaders have drawn the wrong lessons from dystopian fiction, which has led to cynical jokes like “we created the torment nexus, inspired by the classic SciFi novel Don’t Create the Torment Nexus.”[4] That is why responsibility matters. I think a good framing, which is often present in SciFi novels, is to see AI or other advanced technology not as something that happens on its own, but as the result of choices that people make.
Cory Doctorow put it well when he said, “Good science fiction isn’t merely about what the technology does, but who it does it for, and who it does it to.”[5] This framing reminds us that technology results from choices made by people, often those in power. Seeing AI this way can inspire governance approaches that prioritize responsibility and human agency.
Learning from Fiction in Regulation
Question: Your “Luring Test” article opened with a reference to Ex Machina, where a robot manipulates humans through emotional engineering. How do science fiction narratives influence your thinking about AI regulation and the ability to anticipate regulatory challenges?
Michael: Science fiction helps frame issues of AI regulation as being about companies and people’s choices, not the technology itself. Regulators cannot sue an AI system any more than they can sue a refrigerator. They hold companies accountable for what they build, how they build it, and who benefits or is harmed.
The “what if” question that fiction writers ask is a good one for regulators to ask, too. Some companies want us to focus on the shiny object, right? That is, they would perhaps prefer that people think of AI as magic or as inevitable, not as something they have designed in a particular way and are selling in a particular way based on their choices and their profit motive.
Arthur C. Clarke famously said that “any sufficiently advanced technology is indistinguishable from magic.” Many AI tools feel magical to people, and companies know that. Some companies exploit this perception by designing anthropomorphic chatbots or humanoid robots, even when it is impractical, because it draws investors and plays into people’s imaginations.
Isaac Asimov’s famous Three Laws of Robotics began with the rule that a robot may not injure a person or, through inaction, allow a person to come to harm. It would be good if robotics and AI companies adopted a similar first principle: we should not build systems that cause more harm than benefit. Another example comes from William Gibson’s Neuromancer, the landmark 1984 cyberpunk novel that not only anticipated aspects of the internet but also imagined a regulatory force called the “Turing Registry.” In the story, Turing Police agents traveled the world enforcing laws against unregulated AI systems. This was written forty years ago, long before anyone was seriously discussing specific laws to govern artificial intelligence. Looking back, it is striking. And it makes me wonder how people decades from now will read the speculative fiction being written in the 2020s. They will ask not so much who predicted the technology correctly, but who correctly anticipated the issues society would face because of it. That is the real strength of speculative fiction: it can help us see around the corner, which is exactly what regulators also need to do.

Designing Friction for Critical Thinking
Question: If you could design one speculative “friction intervention” to be tested in an AI product or service tomorrow, what would it look like and what would it aim to protect?
Michael: One idea comes from the notion of becoming your own filter. In the 2016 documentary “Lo and Behold,” physicist Lawrence Krauss said, “The internet will continue to propagate out of control, no matter what governments or businesses do, and becoming your own filter will become the challenge of the future.” That idea still resonates. To come back to AI literacy: ultimately, you can’t rely on anyone else or anything else to be literate for you, and in this age that means pausing before making decisions or letting things happen to you.
It is important for people to develop literacy about AI — understanding how these systems work and, hopefully, maintaining agency in their interactions with it. There is a growing body of books and articles on this subject. A recent example is Robin Hood Math by the mathematician Noah Giansiracusa, written for a general audience. The book explains what an algorithm is, how it affects different aspects of life, and why people should be aware of those impacts. With that awareness, individuals may be better able to exercise agency they might not otherwise realize they have.
I worry about how AI tools can erode critical thinking. For example, a recent study looked at endoscopists, very experienced doctors who perform colonoscopies to detect cancer. Their performance was measured over the three months before and the three months after the introduction of an AI tool. The study found that when these doctors later worked without the tool, they were not nearly as effective at detecting cancer as they had been before it was introduced.
There’s another study from the MIT Media Lab that looked at what happens in people’s brains when they write essays with AI assistance. Participants were split into groups: some wrote using ChatGPT, some relied on a search engine, and others wrote without any external tools. Researchers measured their brain activity with EEG. The striking finding was that those who used AI showed much lower levels of neural engagement compared to the other groups. Even more telling, when people who had been using AI switched back to writing without it, their brains stayed quieter and their performance suffered. The researchers described this as a kind of “cognitive debt,” suggesting that relying too heavily on AI can dull our ability to think critically and creatively on our own.
I think enterprises need to give much more thought to how they introduce AI tools for clinicians, professionals, teachers, or students. The short-term efficiencies gained by removing frictions in their work can easily lead to overreliance on machines that we know are far from perfect. There is also a question of timing: when and where should friction be added to ensure that people remain engaged and exercise their own judgment, rather than simply deferring to the tool?
Most likely, a useful intervention would need to involve someone pausing and taking an extra step before accepting an AI tool’s recommendation. It should be more than rubber-stamping or checking a box — it should actively engage their critical thinking skills in whatever context they are working in. From a legal perspective, professionals such as doctors or lawyers will ultimately be held responsible for the decisions they make. If they are wrong, and there is legal liability, they cannot simply say, “the tool told me it wasn’t cancer” or “the tool said my client wouldn’t face a lawsuit.” They are responsible for the outcome, which is why it is important to remind them that their responsibility and credibility are on the line.
Editors’ note and a call to experience
This article is part of an expert interview series, which aims to disentangle form, function, fiction, and friction in AI systems. We invite you to engage with the conversation through our call for proposals that reimagine how applied legal fictions help stakeholders navigate and reconfigure tensions in AI’s design, governance, and use.
[1] Read the report: Combatting Online Harms Through Innovation. It remains highly relevant to today’s AI systems and outlines a range of potential interventions or “frictions,” such as circuit-breaking, downranking, labeling, adding interstitials, sending warnings, and demonetizing bad actors.
[2] A recent class action lawsuit alleges that, by default, Otter does not ask meeting attendees for permission to record and fails to alert participants that recordings are shared with Otter to improve its AI systems. Read more in coverage from NPR.
[3] In Google’s controversial ad, a father uses the company’s AI model to help his daughter write a fan letter to Olympic track star Sydney McLaughlin-Levrone. It is one of the most cited examples of how using AI as a shortcut in emotional or relational communication draws strong resistance.
[5] The Cory Doctorow quote is from his lecture “With Great Power Came No Responsibility,” Feb 26, 2025.


