The Case for Cognitive Speed Bumps: How Friction-in-Design Can Humanize AI Interaction
A conversation with legal expert Brett M. Frischmann about deliberate design choices and human agency in sociotechnical AI systems.
How many times have you fallen into a doom-scrolling trap, only to discover that time has vanished? As AI systems become deeply embedded in our daily lives, the push for speed, efficiency, and seamlessness often overshadows our ability to pause, reflect, ask deeper questions, and even remember why we began interacting with an AI system in the first place.
In this conversation, we speak with Brett M. Frischmann, a Professor in Law, Business and Economics at Villanova University. Brett is one of the organizers of a recent workshop, “Stimulating Cognitive Engagement in Hybrid Decision-Making: Friction, Reliance and Biases,” which delves into the role of frictional protocols in AI. Together with Susan Benesch at Harvard University, he explores the concept of friction-in-design regulation, proposing that deliberately introducing delays or obstacles into digital interfaces can be a form of content-neutral regulation, similar to traditional time, place, and manner restrictions on speech. In another collaboration, with Paul Ohm at Georgetown University, his work explores governance seams, illustrating how socially constructed boundaries, borders, and interfaces can leverage friction-in-design to enable effective governance.
We discuss reflections from his most recent work at the intersection of AI design and policy. Brett emphasizes the importance of introducing pro-social friction—deliberate design interventions that encourage reflection, slow down decision-making, and enable people, especially groups, to act with greater awareness and intention. He critiques the prevailing mindset in online and AI system design that treats friction as inherently bad, instead proposing that thoughtful resistance (like cognitive speed bumps) can support meaningful human interaction and decision-making.

Editors’ note: This article is part of an expert interview series, which aims to disentangle form, function, fiction, and friction in AI systems. We invite you to inhabit adjacent possible worldviews, engage with the conversation that follows, and respond to our call to experience at the end.
Enjoy!
The Problem with Frictionless Design: Values, Visions, and the Need for Speed Bumps
Question: What motivates your current work? What are the values, visions, and metaphors that are driving what you're doing today?
Brett: My book Re-Engineering Humanity was centered on diagnosing problems such as frictionless design, seamless interconnections, and what happens when we’re optimizing system design for the wrong values.
What motivates my recent work is to think about pro-social friction as a response to these problems. It is a way to better enable people working together to flourish as human beings, to accomplish their ends in a more meaningful way. It empowers people to break out of the scripts of a conventional mindset that centers convenience, speed, immediacy, and satiation. The vision is to design better digital infrastructures that incorporate the right kinds of speed bumps to foster reflection, agency, and meaningful engagement.
Defining Friction-in-Design
Question: What does friction-in-design mean in the context of how we interact with and govern AI Systems?
Brett: For me, thinking about friction is a way of thinking about resistance: costs that can serve any of many different pro-social instrumental functions.
Friction is not about slowing things down just for the sake of it. You slow things down to give people time and opportunity to think for themselves, to identify alternative courses of action, or to consider the consequences of their actions. Slowing things down serves an instrumental function.
In the offline world, there are decades of research in fields such as civil engineering and urban planning that study the value of different kinds of frictions in the design of systems in the built environment. Different types of friction serve different purposes, and we analyze and deploy them with insight and deliberation.
Online, we’ve pursued a logic of optimization—efficiency, productivity, speed, increased scale and scope—as fast as possible, with very little time to think. We’ve grown accustomed to treating friction as always costly and never beneficial. A friction-in-design approach rejects that mindset and that set of design criteria.
It reinvigorates, both conceptually and normatively, the idea of slowing down: why it matters, how it could be achieved, and what the implications are for having meaningful interactions. Offline, we take it for granted. Online, we don't even pursue it. Friction-in-design is about changing the underlying mindset by demonstrating how things could work differently.
AI systems are socially constructed. Friction in AI is not going to work the same way as it does in the offline world. It requires a different kind of design. For example, consider speed bumps on residential streets. Offline, we get speed bumps selectively deployed in certain kinds of neighborhoods to serve certain kinds of purposes. On highways, we don't use speed bumps but other traffic calming measures. Similarly, no one argues for speed bumps every five yards, and no one in the online world is saying that we need friction-in-design in every single interaction, all the time, constantly. The most interesting challenge is understanding and navigating the difficulties of figuring out how, when, and in what form we can build meaningful pro-social friction interventions.
From Cognitive Bias to Cognitive Engagement
Question: What is the role of friction in cognitive engagement in hybrid human-AI decision-making?
Brett: All kinds of decision-making systems, such as creative processes, knowledge production, and learning itself, require different types of friction. Sometimes the friction is simply raising the salience of considerations that would otherwise fall beneath the cognitive radar.
Consider hybrid human-AI decision-making systems deployed in a medical context (for example, AI tools that assist clinicians in taking medical notes during a patient’s visit). We might want to fight automation bias and ensure the human is leveraging their actual expertise. We don't want to de-skill experts when their expertise contributes to the final decision.
There are protocols designed to surface different kinds of data so that people can see what kinds of decisions they're making and what kind of expertise to lend to the AI. Federico Cabitza, Chiara Natali, and their collaborators have published some incredible work on this. In these situations, friction can align human decision-making with the AI system people are engaging with, often resulting in a productive interaction. For instance, in the context of informed consent, a chatbot could translate legalese in Terms-of-Service agreements to help people understand the terms of their interaction with an AI system.
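To make the idea of such a protocol concrete, here is a minimal Python sketch (our illustration, not from the interview or from the cited research): a hypothetical wrapper that withholds an AI suggestion until the human records an independent judgment first, a simple gate against automation bias. All class and method names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class FrictionedSuggestion:
    """Hypothetical speed bump: the AI's suggestion stays hidden
    until the human commits to an independent assessment."""
    ai_suggestion: str
    human_judgments: list[str] = field(default_factory=list)

    def record_human_judgment(self, judgment: str) -> None:
        # The human's own expertise enters the record before
        # the AI's output can influence it.
        self.human_judgments.append(judgment)

    def reveal(self) -> str:
        # Refuse to show the AI output until at least one
        # independent judgment has been recorded.
        if not self.human_judgments:
            raise RuntimeError(
                "Record your own assessment before viewing the AI suggestion."
            )
        return self.ai_suggestion
```

The friction here is deliberately small: it does not block the AI's contribution, it only sequences the interaction so the human thinks first.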
Our Frictional AI workshop is about stimulating cognitive engagement in hybrid human-AI systems. It assumes that we’re thinking of AI systems through the lens of extended cognition and cognitive systems. We explore AI alongside other physical and built systems, i.e., AI systems as tools that are part of our cognitive systems: the way we think, how we think, and who we think with. By “hybrid,” we didn’t intend to mean symbiosis that turns someone into a cyborg. It’s about different AI systems as different kinds of tools that are more or less appropriate when humans, collectively or individually, are making decisions or engaging in different kinds of thinking.
Question: Do you see AI systems as thought partners?
Brett: I think it's a mistake to conceptualize AI systems as partners, as if they have a will of their own. I don’t think of my pet, my phone, my calculator, or the temperature knob on my stove as partners. I don’t think of any AI system as a partner. The moment we anthropomorphize technology—as if it were a person—we attribute will where none exists. In hybrid human‑AI systems, I constantly remind myself that AI components are built tools performing certain functions, often incredibly useful, but they are not and should not be treated as persons.
Speculative Fiction and Path Switching
Question: What’s the role of speculative fiction? Do you think frictional protocols could create space for imagining alternate pathways?
Brett: There’s tremendous value in imagining things that don't currently exist—not because they shouldn't or couldn't exist, but because current market incentives and political environments steer us in a different direction. This is exactly where I think speculative design fiction can contribute.
As I explore in my book, there’s nothing inevitable besides entropy. We don’t have to go down a certain path, but monumental forces are pushing us in a certain direction: collectively, at a planet-wide level, but also in our individual lives.
The first step is to wake up, think about why we’re on a given path, what it looks like, and where it’s going. The hard part is imagining another path. How do you step off? And how do we collectively move in a different direction?
That’s where the speculative fiction–friction connection is fascinating, because path switching requires overcoming a huge amount of friction.
Economics talks about sunk costs—costs already incurred and unrecoverable. There are sunk costs stacked against change. Still, opportunity opens when people imagine something is possible that they never thought about before. The challenge is showing a viable path to get there. That’s the empirical work we need.
Question: How do you hope this conversation on frictional AI design could influence AI use and governance?
Brett: I hope that people realize that nothing is inevitable. They should create friction in their own lives to give themselves opportunities to stop and think about what they're doing, especially when interacting with AI systems.
AI is not your friend. AI doesn't have a will. If it's mimicking empathy, that should scare you, not reassure you, because it's been designed to mimic something to achieve an outcome that is not necessarily one that you care about.
We can design friction‑full systems that work. On the internet, we democratized speech—it’s cheap, fast, and easy. Everyone has a platform. And yet, look at the world we live in. No one on this planet should have instantaneous ability to speak to millions of people. There needs to be friction.
In AI, we’re seeing students turn to ChatGPT and other LLM‑based tools to help write college essays and assignments. Is anyone really surprised this could cause atrophy of existing skills or prevent certain skills from developing? I hope friction‑in‑design and speculative friction interventions can help people realize that you can use AI—but more deliberately, and at a pace that makes sense, instead of always, everywhere, and as soon as possible.
Editors’ note and a call to experience
We’d love to hear from you! Please share your reflections in the comments here. We also invite you to respond to our call to experience: reimagine AI interaction through near-future speculative fiction short stories. What speed bumps might you want to see in your own digital life?
…

