
The Future of Human-Computer Interaction Is Conversational—And Dangerous

Machines finally speak our language. Here's what we lose.

January 10, 2025

On July 17, 2014, Malaysia Airlines Flight 17 fell from the sky over eastern Ukraine. 298 people dead.

Within hours, the information war began.

Russian state television aired doctored satellite images showing a Ukrainian fighter jet. The Ministry of Defense held press conferences with fabricated radar data. Military bloggers flooded social media with “evidence” of Ukrainian responsibility.

It took the Joint Investigation Team eight years to verify what happened. Eight years of wreckage recovery, shrapnel analysis, cross-referenced phone intercepts. In 2022, a Dutch court finally convicted three men of murder.

Now imagine that investigation happening today, with citizens asking AI for answers.

The AI would synthesize everything. The JIT’s court-tested forensic evidence and the Russian Ministry’s fabricated press conferences. One confident answer. No cracks. No indication that some sources survived legal scrutiny while others were manufactured in Moscow.

55 Savushkina Street, St. Petersburg—the Internet Research Agency's former headquarters. While families buried their dead, operatives here flooded social media with fabricated evidence. The building is now for rent. The disinformation playbook isn't.

Why Conversational AI Makes This Worse

But here’s the problem nobody wants to talk about.

Conversational AI makes the trust problem (MIT Sloan) exponentially worse.

Here’s the thing: nobody was checking sources anyway.

Search engines started the problem. They ranked what was popular, not what was true. The blue links were there. Almost nobody clicked them (UW Center for an Informed Public). Research shows people judge credibility by how a page looks, not by verifying the source.

Then came social media. “The algorithm” became the invisible hand deciding what you saw, what felt true, what shaped your worldview. Studies show (Acta Psychologica) that sharing unverified information is “primarily an intuitive, spontaneous response”, not a deliberate choice. The seams were technically visible. But the feed moved too fast. Scroll, absorb, scroll.

Now conversational AI takes it further.

Fewer links. Sources buried or optional. Just a confident answer, synthesized from government health guidance and Reddit conspiracy threads, blended so smoothly you can’t tell which is which. When AI summaries appear (Pew Research), people click through to actual sources half as often.

Recent surveys show (NAB) that 76% of Americans are concerned about AI reproducing journalism without verification. Only 26% say they trust AI-generated information. But they use it anyway. They ask, they receive, they act. We’ve learned to distrust consciously while accepting automatically.

We’re building interfaces that make information more accessible while simultaneously making it harder to verify. That’s not progress. That’s a trap.

The Path Forward (And Why It Matters for Europe)

Europe is already leading on this challenge. The EU AI Act (European Commission)—the world’s first comprehensive AI regulation—requires transparency about AI-generated content and addresses the risks of opaque systems. This reflects a broader push for digital sovereignty (Atlantic Council): the capacity to exercise independence in the digital domain while remaining open and connected.

Commercial AI companies won’t solve this. Their incentives point toward engagement and data collection, not accountability. Research shows (ScienceDirect) that disclosing AI usage actually erodes trust. So the business model rewards confident answers, not transparent sourcing.

That’s why we’re building Hera: public infrastructure for conversational access to verified information. When you ask Hera a question, you get an answer you can trace: the source, the date, the institution that vouches for it.
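Hera’s internals aren’t documented here, but the idea of “an answer you can trace” has a natural shape as data: every claim carries the source, the date, and the vouching institution. A minimal sketch of that structure, with all names and fields hypothetical rather than Hera’s actual API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Citation:
    source: str       # the document backing the claim, e.g. a court ruling
    date: str         # ISO date the source was published
    institution: str  # the institution that vouches for it

@dataclass
class TracedAnswer:
    text: str
    citations: List[Citation] = field(default_factory=list)

    def is_traceable(self) -> bool:
        # An answer with no citations should never be presented as verified.
        return len(self.citations) > 0

# Example: the MH17 verdict, traced back to its institution.
answer = TracedAnswer(
    text="In 2022, a Dutch court convicted three men of murder.",
    citations=[
        Citation(
            source="JIT findings presented at trial",
            date="2022-11-17",
            institution="Joint Investigation Team",
        )
    ],
)
```

The design choice the sketch illustrates: provenance travels with the answer itself, so a synthesized response can’t silently blend court-tested evidence with fabricated press releases.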

We’re not pretending to have solved this yet. But we believe someone has to try building the trust layer (European Commission) that conversational AI needs. And that it has to be public infrastructure, not another product optimizing for engagement.

What This Means for You

The shift to conversational interfaces is happening whether we prepare for it or not. Within five years, most citizens will interact with information systems through natural language as their primary interface.

The question is whether those systems will serve the public good, or just sound like they do.