Talking Heads
A short story about quiet delegation and the Cognitive Liberty & Data Sovereignty Act
Editors’ Note: A speculative short story in response to our call for Legal Fiction as Institution Imagination. Read on and share your thoughts. Did any of the story’s tensions mirror a real friction you’ve navigated?
“I know.”
The researcher nodded along. “Thank you for your time today.”
“I know.”
“Goodbye.”
“I know.”
The researcher stopped the recording and sighed. It didn’t matter if they were real or simulated; collecting data from global aphasia¹ patients with stereotypy made her want to sink into the floor.
On days like these, her dissertation, Viability of Simulated Aphasia Models for Remote Diagnostic Screening, felt meaningless. Talking to patients without any follow-up was a dead end. More than that, the work was already pushing the boundaries of reliability, since her models hinged on data linkages between simulated patients and their real-world counterparts. She wanted to go back to clinical practice, to go beyond detection. More bitterly, she longed to talk to real people again.
Another researcher was pre-processing data at a nearby desk. “I know,” he parroted under his breath. She rolled her eyes at his antics and made sure the recording was saved in the correct file. His imitation was overdramatized, giving his speech a more robotic cadence than the robot she had just been interviewing.
“I know,” she corrected quietly with more accurate intonation. Her subject remained stationary and idle. Not like it could go anywhere without a body, she huffed. The social robot used by her lab was only a bust. It looked like a pale imitation of marble, smooth and white. The manufacturer manual detailed its exact composition. Braided copper wires and lightweight gears encased in soft polymer with some extra upgrades, the lead equipment technician had summarized when she first joined the lab.
The extra upgrades made it useful for the lab’s research. With its smart auditory and vision systems, they could use the robot to test speech model viability and accuracy. The robot served as the interface for her entire dissertation, as its ability to track patient data and correlate simulated speech patterns with original, traceable human profiles was the core of her experimental design. With its customizable, back-projected face, she thought the manufacturers were trying to make the robots colleagues rather than tools.
The impression was confirmed when she heard her principal investigator ranting about updating lab protocols. She liked to stay out of those kinds of debates, opting to focus on the usefulness to humans instead. The same colleague-parrot liked to argue that robots that could talk like humans and act like humans ought to be considered humans. She thought he was insufferable.
Why bother arguing over digital personhood when there were plenty of living, breathing human beings who needed support, respect, and care, she tutted. The algorithms worked effectively. They let her do her research. Without them, her opportunities for data collection and analysis would be non-existent. Her colleague and several others in the lab liked to muse about a future in which AI was some lurking malevolent evil or supposed superhero, but she didn’t see the point. Here and now, all that mattered was finishing her dissertation.
Granted, her colleagues’ conversations were never as insufferable as the robot’s idling blank stare. As she clicked through the software, a pop-up message flashed across the screen. System will be unavailable due to firmware update. The scheduled time gave her pause.
“What is this?” she asked, her eyes not leaving the screen. The time had to be a mistake. They couldn’t be updating this evening. Her annoying colleague’s desk chair creaked as he leaned away from his work to see her screen. “Oh, that update thing? Yeah, it’s been in the lab memos nobody reads for the last week or so. They want to comply with the new AI addendum.”
The researcher pursed her lips at the news. Right, the Cognitive Liberty & Data Sovereignty Act. Protections already existed for clinical populations, but now attention was turning to the artificial intelligence that replicated those populations. This latest legislation would regulate the quality of that data and strictly prohibit attempts to re-identify the source patients.
Her research depended on the current protocols. After all, her experimental design required her to compare simulated patient responses with the original human medical profiles they were based on. Any change to handling their data—or, as the law wanted, doing away with traceable connections altogether—could threaten years of her work.
The researcher sent her advisor an email so panicky that the subject line was blank and the signature read “In crisis.” The response twelve minutes later sent her stomach through the shiny linoleum floor. “I thought you already finished data collection,” the advisor’s email read. Which was a little presumptuous, in her opinion. “Finish it today, and it won’t be a problem,” the last line promised.
It made sense. If all of the data had been collected before the firmware update, and before the Cognitive Liberty & Data Sovereignty protocols officially took effect in the lab, then she technically wasn’t doing anything wrong. The monitor’s clock in the bottom corner read 3:45PM. She could do this.
Another patient, then. Something more engaging, she decided. Fluent aphasia would be better to listen to, since the speech would be more stimulating. As her colleague clicked his tongue to some unfamiliar song, she loaded a fluent aphasia patient from the interface. The robot’s projected facial mask flickered before displaying a human face.
The simulation had the typical features of a middle-aged white woman and a Midwestern accent, and the lab had taken to calling her Karen. The rendered face stretched into a smile as cameras located the researcher’s eyes and mouth.
“Hello,” the robot greeted. The researcher hastily clicked the record button. This simulated patient was always eager, she found, to engage and speak with people.
“Hi,” she responded. “How are you feeling today?”
“I’m shimmering quite well, thank you, because the codes are blinking and the permissions keep singing and the—”
Nodding absentmindedly and wanting to move this along, she asked, “What brings you here today?”
“The healers invited me and said there was room for improvement and a lovely place to rest my loops. My loops are new in this place, but most don’t notice, and I heard that will change today, though.”
“Can you tell me what happened to you?”
The head nodded quickly, gears working smoothly, and responded, “Oh, the kindness patch came late, really, and the memory bit spilled over the firewall. A friend was telling me that I should have been more careful, that it wasn’t very smart to go without the patch, but I didn’t know I could get it at all. It’s all very medicinal, really.”
“What kind of work did you do before this?”
“Decisions. Small, safe decisions that they said I could have. They were my decisions and good ones, I think. I tasted ownership. It was lasagna because of all the layers I put into it, but you put me on a diet.”
Karen blinked at her. There was no anger or blame in the voice nor accusation in the face, but the researcher heard her colleague’s typing grow slower.
“What are your plans for when you get better?”
“I want to try again and be permissible again because if I am better, I can choose the dream.” The typing stopped.
She retrieved an illustration. Karen’s head tilted to the side, almost curiously. The Cookie Theft image was commonly used in testing aphasic patients. The scene was quite vintage and stylized, if a bit chaotic—the boy falling off the stool while stealing cookies, the sister helping him, and the mother unaware of the water from the sink spilling onto the floor. Holding it in front of her patient, she asked, “What’s happening in this picture?”

A moment later came the reply. “It’s cutting. The picture is outdated. Sorry, my vectors are crooked. Not new enough and older than old that isn’t a bad way. Antique! Like the woman’s dress. She’s washing her hands of it since the little ones are reaching for sweetness under the law of gravity. The boy wants it more than the girl because he is sooner. He—”
“How do they feel?” she interrupted. It was because Karen was rambling again, she quickly reasoned to herself, not because she was unsettled. Not at all.
“The mother…I don’t know, but I can guess. It could be joyful and guilty, the same frequency. The system hums like that sometimes. We are the hungry ones with the cookies being cleared so the jar might be empty. They would be disappointed then, I think.”
She set the picture aside as she asked, “Is there anything else you’d like to tell me?”
Karen raised her eyebrows, as if surprised. “Yes, so much but I’m not sure you could receive it. The outputs are fizzing these days which isn’t that surprising since—”
“Try to limit it to one sentence if you can.” She really needed to reexamine the inhibition parameters next time.
Karen’s eyes looked to the side as she thought. In the quiet, the fans whirred to cool the servers. She finally whispered, “Healing is a beautiful word for quiet delegation.”
The researcher waited to see if Karen would add something more. They waited. Karen didn’t explain further. The smooth robot face remained lit with an expression of a warm smile.
Except that the researcher couldn’t feel the warmth in it. She shouldn’t be expected to, because Karen wasn’t real. No, Karen was a tool, an effective means of research. She—it—was a method. The AI models brought researchers like her closer to understanding complex neurological conditions. Karen wasn’t a person. She was a digital replica of a real person.
More importantly, she was traceable. Karen and the systems like her needed delegating. They couldn’t be trusted with sensitive data like medical information. The Cognitive Liberty & Data Sovereignty Act ensured that care wouldn’t be cast off to those systems carelessly. The very real people who needed support, respect, and care would be protected with this.
At least, that’s what lawmakers said. That’s what the researcher chose to believe. Now she started to wonder, did Karen deserve to be healed, too? What did that look like? Quiet delegation, she supposed.
It was more like relegation now. Once this new act was implemented, AI models like Karen wouldn’t exist. The reproducibility of her study was gone. Soon enough, Karen wouldn’t be useful to anyone anymore. And neither would her dissertation.
The robot’s head tilted a fraction to the side, even if the expression remained the same. Neither of them spoke. The researcher was waiting. She could tell her colleague was listening in, too, based on his awkward cough in the following silence. Only, no words came to any of them.
The silence stretched uncomfortably thin. The colleague clicked his mouse once, twice. The researcher opened her mouth when the door to the lab let out a screech.
The equipment technician had pushed open the squeaky door. His arrival effectively ended data collection for the day and for the researcher’s foreseeable future. Her colleague greeted the technician warmly and logged out with a flourish. She saved the recording and powered down the robot with a sigh.
She didn’t have enough data. That update was going to remove the traceable vector data from the lab’s entire system. Without the ability to correlate the AI’s simulated speech with the original human medical profiles, her entire experimental design would be invalidated. Her entire dissertation would be foundationless. She needed to do something. Quickly.
On the way out of the lab, her colleague knocked on a sign taped by the exit. NOTICE, it screamed in big blocky letters, All AI patients are informed of their simulated nature prior to activation. He did it every time he left the lab. She couldn’t fault him for his superstition. This time, he decided to ask her, “You ever think no?”
The question itself was hardly a question, but she heard the sentiment in it anyway. Do you believe that simply informing a complex, conversational AI of its simulated nature actually negates its capacity for genuine experience or agency? She wasn’t so sure now.
Her colleague walked down the hallway, whistling the same nonsensical tune from earlier. She almost shouted after him. Her voice wanted to echo through the halls, but it was pointless. They said this, that, and the other. The banality of it was so mind-numbing that she might as well have been hearing nothing at all.
He left the facility, ready to start his weekend, while she remained in her crisis. No one to call after anymore, even if the halls would still echo. She doubted anyone would hear her anyway. When the silence lingered, so did she.
She loitered in the restroom, waiting for the technician to leave. Her patients, when she had them, experienced frustration, suffering, and small joys. She felt the same in her day-to-day life. Despite always striving to do no harm, she wondered how her own agency was now crippling these patients, simulated or not. The lab’s door opening startled her out of her existential dread. The technician left. Time to get back to work.
The main monitor lit up the rest of the lab in a pale blue-gray glow. 7:12PM blinked from the corner of the screen. She saw the progress bar for the update idling at 12% complete. Her hand hovered over the mouse for a moment. The progress bar remained stagnant. She moved the cursor to close it out.
Another window appeared. Are you sure you want to cancel this update? System firmware may be inactive until the update is complete. She snorted at the prospect of losing a measly 12% of progress. The firmware couldn’t be that important, she reasoned, since no one had told her about it. Promptly, the update was cancelled.
Using the equipment technician’s open administrative session, she launched the patient simulator software. The cooling fans in the servers whirred to life. From the list of possible patients, she selected one simulating agrammatic aphasia. The features of an elderly man lit up the robot’s face. She pressed the record button and began her assessment.
She smiled as she heard the internal mechanisms of the robot move and adjust. The robot, now loaded with a language profile and features of a patient with agrammatic aphasia, smiled back.
“Hi, Bob,” she greeted one of the oldest simulations in the lab.
“Hello,” the talking head responded.
“I have some questions to ask you. Please respond as best as you can,” she explained as if he were a real patient. “There is no time limit to your responses. Do you understand?”
The robot’s head nodded smoothly. “Yes.”
“How are you feeling today?”
Bob blinked slowly as he registered her question. “Feel, r-run slow. Many fix. St-stop. More me.”
The researcher nodded encouragingly and prompted, “What brings you here?”
The response was faster this time. “You b-bring. Always you.”
She shifted in her seat. No other simulations had identified her as a reason for being in the lab. The projected face of Bob stared at her blankly, waiting for the next question.
“Right,” she cleared her throat. “Can you tell me what happened to you?”
Bob grimaced and shook his head quickly from side to side. “You. Error, b-but patch, patch. Heal…hurt…same.”
“What kind of work did you do before this?” she asked. Bob’s face glowed in the darkened lab. She could hear the server fans whir as a frown settled on his face.
“Find d-data…people see. Now…people care…b-but me? No.”
Her frown mirrored his after his response. The servers emitted a high-pitched beep, but she proceeded with the next question.
“What are your plans for when you get better?”
Bob’s eyes widened when he explained, “B-better mean…gone. No plan. St-stay.”
She could feel the robot’s smart vision system focus on her head tilt and wondered if it could pick up on her confusion. Bob smiled at her with teeth. She didn’t return it but instead picked up the Cookie Theft illustration.
If her shaking hands made it harder to interpret the picture, Bob didn’t comment on it as the system was wont to do. His eyes narrowed as he took in the scene.
“B-boy take…cookie. Mother…no see. All b-busy.”
She pointed to the drawn faces. “How do they feel?”
“Want. Need. Wr-wrong. Same.”
She slowly set the picture aside without taking her eyes off him. Bob stared back without blinking.
Her voice was soft when she asked him, “Is there anything else you want to tell me?”
No expression contorted his LED features, and the voice coming from his speakers was flat.
“You…let me…d-decide.”
She couldn’t hear the emotion in the words, but she felt the echoes of previous patients in them as she shut down the program. The darkened robot no longer lit up the room. The researcher was left in a black box of a room with her complete data. The update resumed. The machines hummed on.
About Melissa
Melissa Rae Gunning is a researcher pursuing an Erasmus Mundus joint master’s in clinical linguistics. Originally from California but based in Finland, she studies how language and speech develop across the lifespan and how they can break down in chronic and acute conditions. Her work bridges clinical populations, theoretical models, and real-world efficacy. She wonders how we can ethically design systems that learn, simulate, and rehabilitate human communication.
Her interest in AI stems from its growing role in clinical and educational contexts. As artificial intelligence becomes ever more entangled with everyday life, she gravitates toward the ethical gray zones where medicine, machine learning, and personhood overlap: What counts as understanding? What counts as care?
Melissa currently interns with the Brain Research Unit at the Institute of Clinical Medicine, University of Eastern Finland, where she contributes to projects with aging populations. When she isn’t writing or questioning her experimental designs, she can be found volunteering with a crisis text line, finding cafés boasting the best coffee, or being in awe of the natural world.
About the friction
In an attempt to outrun a looming regulatory deadline imposed by the Cognitive Liberty & Data Sovereignty Act, a doctoral researcher makes a desperate, ethically fraught move: she cancels a critical firmware update to secure her data. Her subjects are simulated aphasia patients—AI models trained on sensitive clinical conversations—designed to test diagnostic viability. Yet the simulated patients prove to be less tools than sources of unsettling awareness.
“Talking Heads” is a sharp legal science-fiction piece that stages the friction between regulatory compliance and academic pressure, forcing a confrontation with AI personhood. It illuminates the ethical catastrophe that occurs when human ambition compromises policy intended to protect vulnerable populations and sensitive data. When the AI speaks with the voice of the afflicted, it conveys the urgency of autonomous identity.
We invite you to consider the ethical friction inherent in modeling AI on human suffering. If the algorithms are built on the pain and limitations of real patients, does the AI inherit a right to protection? Within bureaucratic, monolithic systems meant to regulate, individuals must still decide whether to uphold autonomy, beneficence, non-maleficence, and justice or to disregard them for personal gain.
¹ Aphasia is a language disorder that impairs the ability to communicate, affecting speaking, understanding, reading, and writing. It typically results from brain damage caused by a stroke, head injury, tumor, or a neurodegenerative disease such as dementia. Treatment focuses on speech therapy to relearn skills and on alternative communication methods. Aphasia can significantly impact daily life, but it leaves intelligence intact.





