Legal Fiction as Institutional Imagination
A call to experience (extended)
What if we could reimagine legal innovation as institutional imagination that enables and safeguards the safe adoption of AI in society? How could we engage with fictional narratives to explore that question in a way that leads to action, strategy, and leadership in the present moment?
I use fiction to explore the possibility of friction interventions: prosocial design choices, nudges, and intentional interventions that create space for reflection, understanding, human oversight, and alignment between human and algorithmic reasoning. The goal is not to create friction for its own sake but to create space for critical thinking. Thus, what is initially perceived as friction can serve other design goals, such as aiding the evaluation and mitigation of AI risks, fostering appropriate reliance on AI-generated outputs, and encouraging reflection and common ground. You are invited to contribute.
What is legal fiction?
Legal fiction is a construct in law where something is treated as true or real for legal purposes, even though it may not be factually so. It functions as a pragmatic device that allows the law to adapt to complex realities without rewriting existing doctrines. For example, the legal notion of a corporation as a person enables it to own property, enter into contracts, and be held accountable in court, even though it is not a human being. Similarly, the concept of environmental personhood extends the idea of legal personhood (traditionally applied to humans and corporations) to rivers, forests, or other ecosystems, granting them certain legal rights and standing in court. In this way, environmental personhood operates as a juridical device that enables new forms of environmental governance and accountability.
In the speculative realm, the Terms-we-Serve-with and the AI agent social license to operate contracts are examples of legal fiction prototypes I’ve contributed to.
In her new book “Artificial Humanities: A Fictional Perspective on Language in AI,” Nina Beguš argues that “fiction is the new laboratory for testing, evaluating, and developing AI.” In what follows, I share some examples from research, science fiction, and real-world projects that have inspired me, and I hope they might open up new space for curiosity.
Institutional Imagination
Institutions such as governance, regulatory frameworks, and rights infrastructures are integral to the technological ecosystem, and not an afterthought. In Governing the Commons (1990), Elinor Ostrom argued that: “No single, idealized set of rules is universally good for all commons. A core goal should be to encourage institutional diversity.” Her body of work centers on institutional diversity, polycentric governance, and “the need to expand the repertoire of institutional arrangements to cope with varied collective-action problems,” explicitly challenging the idea of “only one correct institutional form.”
Furthermore, I draw from Roberto Mangabeira Unger’s work on broadening the illusory boundaries of legal analysis (1996), which I discovered in conversation with Prof. Andreu Belsunces, whose most recent work explores the performative agencies of fiction in technological development. Andreu and I had a great conversation, which I will publish soon. Institutional imagination refers to a capacity within legal and political thinking to envision, design, and argue for alternative institutional arrangements, rather than treating existing institutions as fixed, natural, or inevitable.
More recently, Alondra Nelson wrote about this year as the season when the unheeded risks and dangers of AI became undeniably clear. She draws on lessons from genetics to govern algorithms, proposing a more integrated approach. Specifically, she centers the Ethical, Legal, and Social Implications (ELSI) program established by the leadership of the Human Genome Project in 1990, a kind of institutional reimagination that could lead to radically different outcomes in AI governance.
Fictional Perspectives
Perhaps one of the most influential fictional legal frameworks is The Three Laws of Robotics by Isaac Asimov (1942). They state that a robot may not harm a human being; it must obey human orders, unless those orders conflict with the first law; and it must protect its own existence, unless doing so conflicts with the first two laws. The framework has inspired modern AI safety principles and human-in-the-loop design ethics, and has been cited in EU AI Act debates and early IEEE AI Ethics standards.
The Ministry of the Future concept by Kim Stanley Robinson (2020) reimagines climate governance through a UN-chartered institution empowered to act as the advocate and proxy for future generations and the Earth’s ecosystems. The Ministry operates in a liminal space between law, diplomacy, and moral authority, wielding tools ranging from international carbon-backed currencies to refugee protections and geoengineering oversight. By granting formal representation to those who cannot yet speak—future humans, nonhuman species, and destabilized climate systems—the novel envisions a radically expanded conception of legal standing. It suggests that effective climate action requires institutions designed not for political convenience but for planetary stewardship, creating new forms of accountability, legitimacy, and intergenerational responsibility.
Many Black Mirror episodes explore fictional legal frameworks. For example, in the “USS Callister” episode, DNA-derived digital consciousnesses can be copied and used as corporate property. In “The Entire History of You,” memory becomes a legal file, not a subjective experience, and in the “Be Right Back” episode, dead people’s digital traces can be used to reconstruct them as chatbots or synthetic bodies. In fact, a US court recently allowed an AI recreation to voice a victim’s impact statement.
Countless artists have explored legal fictions in their own works. I give only a couple of examples below, and would love to learn more from you!
The Treaty of Finsbury Park is a speculative, participatory artwork by Furtherfield and The Design Informatics Studio that imagines a near-future London in which humans, plants, animals, and AI systems co-govern public space as equal political actors. The treaty emerges after a fictional “interspecies uprising,” formalizing a multispecies democracy where pigeons, foxes, trees, mycelial networks, and algorithmic agents all gain recognized rights and representation. Through rituals, ceremonies, and co-designed governance protocols, the project playfully yet critically explores what legal and civic life might look like when ecological and technological beings are granted standing in urban decision-making. As both narrative and prototype, the Treaty invites participants to reimagine citizenship beyond the human, challenging extractive legal norms and gesturing toward more entangled, caring, and ecologically grounded forms of governance.
The Ecological Intelligence Agency, a project by Superflux, is an autonomous inter-departmental government agency that encompasses an assemblage of localised AI models, all of which advocate for ecological flourishing. Co-developed with and informed by the wisdom of local stewards, the agency has been intentionally developed for inter-relational use, bridging sectors, departments, and communities to help build a (partial) picture of, and communicate, our entangled interdependence with the world around us.
A Call To Experience
What are the speculative fiction narratives, artworks, experiments, or other provocations that inspire you in thinking about practical legal fictions that illuminate, exaggerate, or transform tensions in the law, policy, and governance of AI systems? How could they contribute to resolving unconstructive friction or creating mindful frictions that empower positive outcomes? Learn more and share your perspectives here!