Introducing a new Blog Series: Speculative F(r)iction in AI Use and Governance
A space for sharing stories, articulating, disentangling, negotiating, and transforming friction in our interactions with AI systems and their context.

A paradigm doesn’t shift on its own
If you are reading this, chances are you are thinking about AI systems a lot. Maybe you are a researcher, engineer, designer, product manager, or policy maker. Maybe you are working on open-ended socio-technical challenges, or working for a company that builds generative AI solutions, or leading a product team that seeks to understand and mitigate risks for users.
No matter who you are, you’re probably noticing that AI is moving fast. There is an astounding wave of hype and uncertainty, and a sinking realization that there is no safety net. The social structures we rely on can’t evolve at the speed of AI innovation.
And yet, you see a crossroads, a mandala, a pluriverse of choices ... or maybe you don’t, but you know that other pathways are possible.
We are seeing new capabilities and innovations made possible with AI every day. AI has become the economic engine of the present, and yet there is a need for new visions and thought experiments that could evolve our understanding of AI systems as not just tools for thought but partners in thought.1
I am paying attention to AI research and innovation that is making possible what might have looked like science fiction less than a decade ago, while resisting the drive toward frictionless technology that can limit critical thinking and human agency.
First, the field of speculative design is my entry point to creating and curating thought experiments that help articulate, negotiate, and transform existing frictions in AI adoption and governance - for example, design fictions that broaden our understanding of what’s possible, such as the Terms-we-Serve-with2 and the AI agent social license.3
Second, my goal for this newsletter is to write about a cognitive science-inspired understanding of the value of friction in human decision-making.4 Friction forces us to come together as a team - to move forward together.5 Thus, a more granular and nuanced understanding of “the frictions we want” in building, interacting with, and regulating AI could help us understand diverse viewpoints, hold conflicting views together, and find synergies that amount to more than the sum of their parts.
Form, function, fiction, and friction in AI
Form follows function is a foundational concept in design that describes the relationship between how something looks (form) and what it is meant to do (function). Affordance is another foundational concept, referring to the inherent properties of an object or interface that suggest how it should be used.6 With the growing proliferation of generative AI, you have probably seen and experienced the limitations of the affordances of a chatbot-like conversational interface.

Another challenge is the lack of explainability or justification for chatbot answers. Rarely do we get additional supporting information about an AI outcome, and we are left with the responsibility to verify it on our own. While harmless in many scenarios, over-reliance on AI outcomes can have critical implications in high-stakes ones. Social psychologists coined the term automation bias to describe this tendency to over-rely on automated systems. Yet generative AI is often presented as if it can do everything. How could we know its limits, or what it can actually do? And when we unintentionally discover those limits, we inevitably experience some kind of friction - things break, there is confusion and frustration, and we often assume it is our fault and not the AI system's. There’s not much we can do in those situations.
By studying the dynamic spectrum of form, function, fiction, and friction in our interactions with AI systems, we gain a better understanding of how they impact our human experience. Only once we have identified those impacts do we get an opportunity to shift them in ways that are better aligned with our goals. Further, to explore what “alignment” means, we take inspiration from the field of speculative design and create space for questioning underlying social norms and assumptions.
Zones of friction are interesting to me. Friction does not happen only when things break; it can also be introduced intentionally to prevent things from breaking. What if we could make the safe things easier (less friction) and the high-risk things harder (more friction)? Characterizing and measuring aspects of possible frictions thus becomes a powerful tool for improving human agency and positive outcomes.
Think of friction in the context of interactions with generative AI as a deliberate design element: an extra step, a nudge, a microboundary that adds effort, what’s often called cognitive load or cognitive forcing. Why would you want that in a world where we do everything we can to avoid friction? Because intentional interventions create the space for identifying and mitigating errors and potentially negative outcomes, such as sociotechnical and security risks or unintended consequences.
The goal is not to create friction for its own sake but to create space for critical thinking. Thus, what is initially perceived as friction can serve other design goals, such as:
Aiding the evaluation and mitigation of AI risks
Fostering appropriate reliance on AI-generated outputs
Encouraging reflection and finding common ground
To be effective and helpful, friction interventions need to be tailored to the specific context, taking into account the mental models of users and the scale and material implications of AI risks or errors. Explainability and interpretability are growing research areas at the forefront of AI innovation, engaging with the question of how to make AI system outputs easier for people to understand and verify at different levels. Friction interventions could help users build a better intuition for an AI system’s computational “mental models” of operation and know when to trust it. Furthermore, by understanding how an AI system fits within operational workflows, developers can begin to understand the mental models of end users. Our human mental models of how we expect an AI system to work or fail and the computational models of how the AI explains its decision-making will inevitably interact and depend on one another.
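As a thought experiment, here is a minimal sketch in Python of what such a risk-tiered friction intervention might look like inside a chat flow. Everything in it is a placeholder made up for illustration - the risk tiers, the keyword-based classifier, and the function names do not come from any existing library - but it shows the shape of the idea: low-risk requests pass straight through, while high-stakes ones trigger a cognitive forcing step that surfaces uncertainty and asks the user to pause and verify.

```python
# A minimal, hypothetical sketch of a risk-tiered friction intervention
# for a generative AI chat flow. The risk tiers, the keyword classifier,
# and all names are illustrative placeholders, not an existing API.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"    # e.g., casual drafting help
    HIGH = "high"  # e.g., medical, legal, or hiring decisions


@dataclass
class AIAnswer:
    text: str
    confidence: float  # assumed to come from the model or an evaluator


def classify_risk(prompt: str) -> RiskTier:
    """Toy stand-in for a context-aware risk classifier."""
    high_stakes_markers = ("diagnos", "prescri", "hire", "fire", "contract")
    if any(marker in prompt.lower() for marker in high_stakes_markers):
        return RiskTier.HIGH
    return RiskTier.LOW


def deliver(prompt: str, answer: AIAnswer) -> str:
    """Make the safe things easier and the high-risk things harder."""
    if classify_risk(prompt) is RiskTier.LOW:
        # Low friction: return the answer directly.
        return answer.text

    # High friction: a microboundary that surfaces uncertainty and asks
    # the user to pause and verify before relying on the output.
    return (
        f"{answer.text}\n\n"
        f"[Friction checkpoint] Model confidence: {answer.confidence:.0%}. "
        "This looks like a high-stakes request. Please verify the key claims "
        "against a trusted source and confirm before acting on this answer."
    )


if __name__ == "__main__":
    print(deliver("Draft a birthday message", AIAnswer("Happy birthday!", 0.92)))
    print(deliver("Should we hire this candidate?", AIAnswer("Yes.", 0.55)))
```

The point of the sketch is the shape of the intervention, not the toy classifier: in practice, the same extra step could be a justification request, a second human reviewer, or a short delay, tuned to users’ mental models and to the material stakes of an error.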
What to expect
Friction doesn’t resolve on its own. My intention for this newsletter is that it becomes a living archive and a space for weaving paradox into synergy, grounded in a deep sense of collective response-ability in AI. A space to share -
Expert interviews with people working in the trenches of transforming friction.
Artistic interventions and design fiction prototypes that open new opportunities.
Stories from my personal experience of friction fixing in AI use and governance.
Learnings from books and research papers that explore these topics.
Year in Review
It has been a year since the community call that followed a public workshop where almost a hundred people came together for the launch of the Speculative F(r)iction in AI Use and Governance living archive.7 The goal was to co-create constructive friction and design fictions that could improve transparency, evaluation, and human agency in the context of generative AI systems. Through a speculative design exercise, we explored how the world would be different if AI agents were required to have a license to operate, conceptually similar to a driver's license. I got to evolve the concept further by presenting it at the Raw Seminar at the Data & Society Research Institute.8 It was a fertile discussion focused on the application of operational licenses in healthcare and what a prototype could look like. I also got to organize and co-facilitate a few more workshops at places like the Internet Archive DWeb Camp, where the focus was on design friction in the form of AI safety guardrails. My work on consent mechanisms and licensing brought me closer to collaborators in the legal space, and I took a leap, joining the Responsible AI Testing team at DLA Piper.9 Our team works on evaluation, validation, and assurance for AI systems in domains like healthcare and hiring. The last year has been transformative and has allowed me to collaborate with legal experts at the cutting edge of AI, helping businesses innovate while meaningfully considering the growing regulatory and socio-technical implications and risks.
The Bigger Picture: Why It Matters
Paradigm shifts are about being able to sit in discomfort. My theory of change is that expanding the creative capacity for constructive frictions empowers people through improved AI literacy, understanding, and choice over how AI systems impact nearly every aspect of our experience.
The possibilities are infinite. Programmed inefficiencies that wake us up from doom scrolling and trigger reflections about what it is that we really want. Speed bumps that keep our digital neighborhood safe, improving safety and accessibility to public infrastructure for all. Signposts to inspire and enable learning, critical thinking, cognitive engagement, and mindful interactions with AI products and services. A zone of friction, expanding our capacity to hold conflicting views together and ask deeper questions.
Do you want to be involved? Who would you want to hear from? If you would like to suggest a possible topic or otherwise connect with me, please get in touch or schedule a time to connect with me here.
Acknowledgements: Thank you to Brandon Jackson and Zachary Schlosser for your helpful comments and discussion. Thank you to the Mozilla Foundation for supporting this work during my Trustworthy AI fellowship. Thank you to the close to 100 participants of the online workshop I organized a year ago. If you enjoyed the article, please share it around; I’d appreciate it a lot. Thank you for reading! I’m beyond excited to see what we can build together!
Collins, Katherine M., et al. "Building machines that learn and think with people." Nature Human Behaviour 8.10 (2024): 1851-1863.
The Terms-we-Serve-with is a speculative alternative to Terms-of-Service agreements, which we got to prototype and implement in community. See our research paper and case study here: Rakova, Bogdana, Renee Shelby, and Megan Ma. "Terms-we-serve-with: Five dimensions for anticipating and repairing algorithmic harm." Big Data & Society 10.2 (2023): 20539517231211553.
The AI agent social license is a design fiction we co-designed through a public speculative design workshop I facilitated during my Mozilla fellowship. Read about it here: Building Positive Futures for Generative AI Adoption in Healthcare
Cognitive conflict refers to the mental discomfort we experience when new information challenges our existing beliefs, attitudes, experiences, or habits. It plays a vital role in decision-making, often prompting us to move from automatic, habitual responses to more deliberate and conscious choices. See: Houdé, Olivier. 3-system theory of the cognitive brain: A post-Piagetian approach to cognitive development. Routledge, 2019.
Robert Sutton and Huggy Rao write about the art and science of friction in the context of organizational structure and culture in their book “The Friction Project.” In 2020, I got to explore this through a research project that led to our article - Rakova, B., Yang, J., Cramer, H., & Chowdhury, R. (2021). Where responsible AI meets reality: Practitioner perspectives on enablers for shifting organizational practices. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-23.
In her book “How Artifacts Afford: The Power and Politics of Everyday Things,” Jenny L. Davis shifts the question from what technology interventions afford to how they do that, for whom, and under what circumstances.
Read about it and watch a recording here - https://speculativefriction.org/events
The Data & Society Research Institute advances public understanding of the social implications of data-centric technologies and automation. Learn more about their work here.
Learn about our work on Red-teaming AI Systems in Healthcare. In collaboration with the Health AI Partnership, we are exploring the critical role of red-teaming as an AI evaluation strategy to contribute to the effective, safe, and equitable integration of AI in healthcare.