E-Pluribus | February 2, 2024
To moderate or not to moderate; censorship is a problem in any language; and segregation repackaged.
A round-up of the latest and best musings on the rise of illiberalism in the public discourse:
Sam Kahn: The Case Against Content Moderation
Content moderation by private companies is clearly within their rights, but is it always the best, or even a desirable, route for them to take? Writing at Quillette, Sam Kahn argues that less (or no) moderation is best, using Substack (Pluribus’s platform) as an example of how to do it right.
In their early days, most of the social media platforms were proudly, avowedly laissez-faire about content, in line with the American free speech tradition. Law professor Jonathan Zittrain calls this “the Rights Era” of online governance.
That era ended in the mid to late 2010s. The conventional wisdom is that it collapsed once the internet reached a sort of critical mass and revealed its true nature as a social destabilizer and disseminator of hate speech and misinformation.
In a 6,000-word 2017 post dubbed “the Mark Manifesto,” Facebook’s CEO Mark Zuckerberg wrote, “As we build a global community, this is a moment of truth … It’s about whether we’re building a community that helps keep us safe—that prevents harm, helps during crises, and rebuilds afterwards.” Later that year, Twitter’s CEO Jack Dorsey issued a similar mea culpa for the over-permissiveness of the Rights Era: “We prioritized [safety] in 2016. We updated our policies and increased the size of our teams. It wasn’t enough. … In 2017 we made it our priority and made a lot of progress … We decided to take a more aggressive stance in our rules and how we enforce them.”
[. . .]
People began to use new analogies to describe the web. Zittrain describes, at some point in the late 2010s, a shift from a focus on rights to a “public health” model, in which certain content—“misinformation” and “disinformation” in particular—was perceived as a type of contagion. This, too, soon became mainstream opinion.
In a 2019 New York Times op-ed, Brittan Heller, an attorney for the Anti-Defamation League, went so far as to write: “The idea that platforms like Twitter, Facebook and Instagram should remove hate speech is relatively uncontroversial.” For legal scholar Evelyn Douek, the new moderation regime represented “a more mature approach” to speech governance than the First Amendment-infused philosophy that had prevailed in the earlier part of the decade.
The shift to the public health model was widely presented as a necessary response to the volume of hate and misinformation online. But, reading through the press reports and academic literature, it’s possible to see it in different terms: as a panic.
De Keulenaar et al. write that, “The loss of tolerance for hateful and abusive content seems to respond to a number of events on the ground … suggest[ing] a certain causal connection between online content and offline events.” The events to which they allude were, above all, the 2016 election of Donald Trump and the 2017 Charlottesville “Unite the Right” rally. As De Keulenaar et al. explain, the goal of content moderation became “not necessarily adjudicating content as more or less acceptable, but moderating it on the basis of evolving and ever contingent public conceptions of accountability.”
In other words, the more stringent rules weren’t primarily a response to the changing nature of internet traffic itself—it wasn’t that the internet had suddenly shown itself to be less civilized than expected. Instead, the platforms were responding to pressure by “journalists, activists, and politicians” to address the rise of the far-right. As De Keulenaar et al. conclude: “Moderation is more a political than a moral art.”
As the “public health” model took hold, the moderation regimes found themselves making rules that seem absurd in their Byzantine complexity and arbitrariness. Facebook’s 27-page Community Standards document, made public in 2018, prohibits, for example, images of “fully nude close-ups of buttocks unless photoshopped on a public figure,” and permits the statement “migrants are so filthy,” while forbidding the comparison “Irish are the best, but … French suck.”
Even Douek, who considers the new moderation regime to be “salutary,” concedes that the platforms are “largely just ‘making rules up’” and that “it is not just hard to get content right at this scale; it is impossible.”
[. . .]
Substack represents the internet at its best. While the social media platforms have amped up their content moderation and attempted to exert tighter control over speech, Substack has employed a simpler approach: giving users the tools to have a web presence and then assuming that they are grown-up enough to make their own decisions on what content they wish to post or see. It would be a shame if, in the panic over a handful of extremist newsletters, we lost sight of that underlying principle.
Read the whole thing.
Oscar Buynevich: The Censorship Industry’s Plan to Censor Latino Communities
The Foundation for Freedom Online, founded by Mike Benz, a former Trump State Department official, is a self-styled free speech watchdog. Writing at the FFO website, Oscar Buynevich investigates efforts to censor certain voices and ideas in online Spanish-language communities.
The National Conference on Citizenship (NCoC) was chartered by Congress in 1953 “to harness the patriotic energy and civic involvement surrounding World War II,” but in recent years the organization – one of the fewer than 100 nonprofits nationwide established directly by Congress – has redirected its mission towards suppressing online speech. The Algorithmic Transparency Institute (ATI) was launched by NCoC in 2020 and quickly became a pivotal player in the censorship industry. In particular, it promoted the concept of “civic listening” — encouraging American citizens to report each other’s online activities to the ATI’s censorship database.
Cameron Hickey, who previously led Harvard’s Shorenstein Center Information Disorder Lab, was selected to be the Director of ATI in 2020. Hickey seemingly was rewarded for his role in the censorship operation — in addition to directing the ATI, he now serves as the CEO of the entire NCoC.
Hickey’s contributions to the censorship industry include three initiatives incubated within ATI in 2020, all of which have the targeting of Spanish-language online speech as a key component:
Junkipedia: This “digital public infrastructure for civic listening” is a Frankenstein-like censorship tech tool. It gives censors the ability to flag and collect insights on speech and narratives across any platform online, even in closed-messaging communities popular amongst the Spanish-language community such as WhatsApp. This database gave analysts from the Election Integrity Partnership, the nexus of the government’s censorship laundering operations, the ability to analyze speech across different platforms and foreign-language communities that had been flagged for censorship.
Ethnic Media Fellowship: The Ethnic Media Fellowship pilot program recruited foreign-language journalists to flag content for censorship. Nine fellows representing different foreign-language communities were selected for the effort to report “problematic content” within their respective community into Junkipedia. One of the reasons for their recruitment was their access to private messaging groups, because disinformation “spreads in closed and encrypted communications platforms that are nearly impossible to explore by anyone but the members of their respective communities.” Their reporting allowed censors visibility into popular narratives amongst different ethnic communities and the ability to skulk around in their private messaging groups.
Civic Listening Corps: This is the grassroots movement of the censorship industry. The Civic Listening Corps is a program that trains volunteers to monitor content and encourages them to police the speech of fellow citizens, flagging any “problematic content” they find for censorship. They sign up for speech monitoring shifts and report content directly into Junkipedia.
The Ethnic Media Fellowship and Junkipedia became instrumental in the censorship industry’s efforts to target Spanish-language speech online. ATI recruited foreign-language journalists to carry out the work of the project, which was ostensibly started with the sole focus of “track[ing] problematic content related to the 2020 U.S. census.”
[. . .]
There is no clear guidance on the boundaries of the speech that fellows chose to flag. Their own report reveals that the “problematic content” they reported was not merely speech that is false or purposefully put out to cause harm. They admit to targeting factually accurate, truthful speech for censorship:
We use the umbrella term “problematic content” as it encompasses several kinds of content: mis- and disinformation, hate speech, conspiracies, and other content that may not be factually incorrect, but can have ill effects nonetheless.
Even private speech, in closed text messaging platforms such as WhatsApp, a highly popular platform for Spanish-speakers, was being reported to the censors. As seen above, 81 total instances of “problematic content” were collected from WhatsApp by the Ethnic Media fellows.
Read it all.
Ethan Blevins: Segregation By Any Other Name
Critics of some voting reform laws have used the term “Jim Crow” to allege that renewed racial segregation is just around the corner. Actually, those days are already here, but under the guise of “affinity groups.” At Discourse Magazine, Ethan Blevins argues that dividing students by race for certain activities is being justified as providing “comfortable” and “safe” learning environments.
An increasing number of school districts are offering “affinity classes” that cater to specific racial groups. Schools have long offered racially segregated options for electives such as African American history or mentorship programs. But the idea has begun to expand to the wider K-12 curriculum: One school district in Evanston, Illinois, has drawn the media’s eyes recently for expanding affinity course options, now offering segregated courses in the core curriculum, like math and English. Technically, anyone can join, but each class is expressly designed for—and targeted at—a particular racial group.
In reality, “affinity” is just a newfangled term for “segregation.” Schools that support such racial sorting insist these classes are opt-in, benign programs that don’t violate anti-discrimination laws or the Constitution’s equal protection guarantee. They’re wrong.
The supporters of affinity groups and classes claim that they give students a comfortable, safe and inviting environment that improves learning outcomes. One Evanston school official who backs affinity classes told The Wall Street Journal that too often “Black students are expected to conform to a white standard” and that in affinity classes, “you don’t have to shed one ounce of yourself because everything about our space is rooted in Blackness.” The notion that culture and character are pinned to skin pigment undergirds the philosophy behind affinity programs—that races are so different from one another that they should not even learn together.
This idea parrots the very racists who sought to block integration in the late 1950s. Former Alabama Gov. George Wallace, who once thrust himself into a school doorway to block Black students, argued that racial groups were better off teaching and learning within their own “separate racial stations,” otherwise society would merge into one “mongrel unit” that would impede the development of each separate race.
But the frenzied agitprop of the 1957 bigot has now become the callow mantra of the 2024 “anti-racist.” The big difference between now and then, argue the pro-segregationists of our age, is that affinity classes are voluntary. Students of color can remain integrated if they wish to do so, although the schools relentlessly laud the benefits of segregating. Granted, there is a difference between compelling someone to drink poison and simply encouraging them to drink it. But this ignores the obvious choice—tossing out the poison. As Chief Justice John Roberts said in the Supreme Court’s recent affirmative action decision in Students for Fair Admissions v. Harvard, “[e]liminating racial discrimination means eliminating all of it.”
[. . .]
Racial sorting, even when voluntary, is especially harmful to children. Consider an example from religion: In Lee v. Weisman, the Supreme Court said a public school had violated the First Amendment’s establishment clause by allowing a clergyman to offer a nonsectarian prayer, favoring no particular religion, at a school graduation ceremony. The school argued that the prayer did not compel anyone to do anything—objectors could decline to attend or stay seated when invited to stand for the prayer.
The Supreme Court held that it nonetheless violated the Constitution: The prayer “bore the imprint of the State and thus put school-age children who objected in an untenable position.” The situation would place “subtle coercive pressure” on impressionable children, which “can be as real as any overt compulsion.”
Read it all here.
Around Twitter (X)
Harvard’s Steven Pinker on the lack of ideological self-awareness at The New York Times:
On a related note, the New York Times might next report that Angel Eduardo has revealed his secret identity:
Gratuitous “book banning” accusations aside, it’s interesting to note Ibram X. Kendi’s generous use of “they” versus “we” in this tweet. If in 2024 you disagree with Kendi’s views, you are part of the “they [who] don’t want people to learn how they constructed racism.”
And finally, via Wesley Yang, presented without comment: