E-Pluribus | January 22, 2026
Does AI have free speech rights? The UK's 'Minority Report' policing proposal. Tweet about stocks, go to jail?
A round-up of the latest and best insight on the rise of illiberalism in the public discourse:
Elizabeth Nolan Brown: This 1996 Law Protects Free Speech Online. Does It Apply to AI Too?
The rise of AI-powered chatbots like Grok has provoked an intriguing question: do these applications have the same online free-speech protections as Americans? It’s a surprisingly difficult issue to resolve, but Elizabeth Nolan Brown takes a stab at it for Reason:
We can thank Section 230 of the 1996 Communications Decency Act for much of our freedom to communicate online. It enabled the rise of search engines, social media, and countless platforms that make our modern internet a thriving marketplace of all sorts of speech.
Its first 26 words have been vital, if controversial, for protecting online platforms from liability for users’ posts: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” If I defame someone on Facebook, I’m responsible—not Meta. If a neo-Nazi group posts threats on its website, it’s the Nazis, not the domain registrar or hosting service, who could wind up in court.
How Section 230 should apply to generative AI, however, remains a hotly debated issue.
With AI chatbots such as ChatGPT, the “information content provider” is the chatbot. It’s the speaker. So the AI—and the company behind it—would not be protected by Section 230, right?
Section 230 co-author and former Rep. Chris Cox (R–Calif.) agrees. “To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue,” Cox told The Washington Post in 2023. “So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense.”
But even if AI apps create their own content, does that make their developers responsible for that content? Alphabet trained its AI assistant Gemini and put certain boundaries in place, but it can’t predict Gemini’s every response to individual user prompts. Could a chatbot itself count as a separate “information content provider”—its own speaker under the law?
That could leave a liability void. Granting Section 230 immunity to AI for libelous output would “completely cut off any recourse for the libeled person, against anyone,” noted law professor Eugene Volokh in the paper “Large Libel Models? Liability for AI Output,” published in 2023 in the Journal of Free Speech Law.
Treating chatbots as independent “thinkers” is wrong too, argues University of Akron law professor Jess Miers. Chatbots “aren’t autonomous actors—they’re tightly controlled, expressive systems reflecting the intentions of their developers,” she says. “These systems don’t merely ‘remix’ third-party content; they generate speech that expresses the developers’ own editorial framing. In that sense, providers are at least partial ‘creators’ of the resulting content—placing them outside 230’s protection.”
Charles Hymas: ‘Minority Report policing’ to catch criminals before they strike
Speaking of AI, the UK sees another potential (and thoroughly horrifying) use for the technology: targeting criminals before they strike, which is a euphemistic way of saying "innocent people":
Criminals could be stopped before they strike under Minority Report-style policing plans.
Police chiefs are evaluating around 100 projects in which officers are trialling the use of AI to help combat crime.
The expanded use of AI and technology by police – with the aim of putting the “eyes of the state” on criminals “at all times” – is expected to be part of police reforms by Shabana Mahmood, the Home Secretary, in a white paper next week.
In an interview with The Telegraph, Sir Andy Marsh, the head of the College of Policing, said one of three key uses was “predictive analytics” to target criminals before they strike.
He is proposing to use the technology to identify and target the 1,000 most dangerous predatory men who pose the highest risk to women and girls in England and Wales.
“We know the data and case histories tell us that, unfortunately, it’s far from uncommon for these individuals to move from one female victim to another, and we understand all of the difficulties of bringing successful cases to bear in court,” he said.
“So what we want to do is use these predictive tools to take the battle to those individuals, so that they are the ones who are frightened because the police are coming after them and we’re going to lock them up.”
Andrew Left: I’m Being Prosecuted for the Opposite of Insider Trading
At the Wall Street Journal, Andrew Left summarizes the curious charges he faces in federal court. After he tweeted to his massive audience about stocks he was trading, the Justice Department charged Left with securities fraud. The case has implications for online speech that go well beyond the legal technicalities of investing, he warns:
Think about what this means in practice. During the GameStop frenzy in 2021, I shared a negative thesis and held a short position. The stock surged against me. What was I supposed to do? Never cover? Absorb infinite losses? Hold until bankruptcy because I’d expressed an opinion on Twitter?
I have since asked the Justice Department a simple question: When can I trade after tweeting? Their response: “We won’t provide you with legal advice.”
Their framework locks any speaker into any position. Stock moves your way? Can’t take profits without risking a felony. Moves against you? Can’t cut losses without risking a felony. There’s no compliant path, no form to file, no safe harbor. Only prosecutorial discretion about which speakers to target. That should worry anyone who posts opinions online.
Every day, people on X say Bitcoin is going to $100,000. Most of them own Bitcoin. Under the government’s theory, selling before it hits that target is fraud. The same logic reaches the amateur stock picker on Reddit, the crypto analyst on YouTube, the financial adviser with a newsletter. All potential defendants.
The First Amendment doesn’t specify a follower count at which its protections expire. We live in an influencer economy. Millions of Americans build audiences and share honest opinions about stocks, crypto, products, politics—while trading their own portfolios. The Justice Department’s position is that a large following strips you of rights everyone else enjoys.
That isn’t the law, and it shouldn’t be. It punishes speech for being effective. It inverts the First Amendment’s core purpose: protecting speech that persuades, speech that matters, speech people actually hear.
I’m not asking for the freedom to lie. People who fabricate research or deceive the public deserve scrutiny. I’m asking for the freedom to express honestly held beliefs—or at least for clarity about where the line is between the First Amendment and a criminal indictment.
Around X
Colin Rugg has no patience for Don Lemon’s antics: interrupting a worship service and then complaining that nobody will talk to him—as he interviews the pastor.
A perplexing video, to say the least, amplified by The Foundation for Individual Rights and Expression (FIRE). Lawful political speech, by definition, is not illegal.
All we can say is, “No, thanks, Mrs. Home Secretary.”