Contact Me By Email

Saturday, May 03, 2025

The Secret AI Experiment That Sent Reddit Into a Frenzy - The Atlantic

‘The Worst Internet-Research Ethics Violation I Have Ever Seen’

"The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment.

When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.

So earlier this week, when members of a popular subreddit learned that their community had been infiltrated by undercover researchers posting AI-written comments and passing them off as human thoughts, the Redditors were predictably incensed. They called the experiment “violating,” “shameful,” “infuriating,” and “very disturbing.” As the backlash intensified, the researchers went silent, refusing to reveal their identity or answer questions about their methodology. The university that employs them has announced that it’s investigating. Meanwhile, Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”

Joining the chorus of disapproval were fellow internet researchers, who condemned what they saw as a plainly unethical experiment. Amy Bruckman, a professor at the Georgia Institute of Technology who has studied online communities for more than two decades, told me the Reddit fiasco is “the worst internet-research ethics violation I have ever seen, no contest.” What’s more, she and others worry that the uproar could undermine the work of scholars who are using more conventional methods to study a crucial problem: how AI influences the way humans think and relate to one another.

The researchers, based at the University of Zurich, wanted to find out whether AI-generated responses could change people’s views. So they headed to the aptly named subreddit r/changemyview, in which users debate important societal issues, along with plenty of trivial topics, and award points to posts that talk them out of their original position. Over the course of four months, the researchers posted more than 1,000 AI-generated comments on pit bulls (is aggression the fault of the breed or the owner?), the housing crisis (is living with your parents the solution?), and DEI programs (were they destined to fail?). The AI commenters argued that browsing Reddit is a waste of time and that the “controlled demolition” 9/11 conspiracy theory has some merit. And as they offered their computer-generated opinions, they also shared their backstories. One claimed to be a trauma counselor; another described himself as a victim of statutory rape.

In one sense, the AI comments appear to have been rather effective. When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters, according to preliminary findings that the researchers shared with Reddit moderators and later made private. (This analysis, of course, assumes that no one else in the subreddit was using AI to hone their arguments.)

The researchers had a tougher time convincing Redditors that their covert study was justified. After they had finished the experiment, they contacted the subreddit’s moderators, revealed their identity, and requested to “debrief” the subreddit—that is, to announce to members that for months, they had been unwitting subjects in a scientific experiment. “They were rather surprised that we had such a negative reaction to the experiment,” says one moderator, who asked to be identified by his username, LucidLeviathan, to protect his privacy. According to LucidLeviathan, the moderators requested that the researchers not publish such tainted work, and that they issue an apology. The researchers refused. After more than a month of back-and-forth, the moderators revealed what they had learned about the experiment (minus the researchers’ names) to the rest of the subreddit, making clear their disapproval.

When the moderators sent a complaint to the University of Zurich, the university noted in its response that the “project yields important insights, and the risks (e.g. trauma etc.) are minimal,” according to an excerpt posted by moderators. In a statement to me, a university spokesperson said that the ethics board had received notice of the study last month, advised the researchers to comply with the subreddit’s rules, and “intends to adopt a stricter review process in the future.” Meanwhile, the researchers defended their approach in a Reddit comment, arguing that “none of the comments advocate for harmful positions” and that each AI-generated comment was reviewed by a human team member before being posted. (I sent an email to an anonymized address for the researchers, posted by Reddit moderators, and received a reply that directed my inquiries to the university.)

Perhaps the most telling aspect of the Zurich researchers’ defense was that, as they saw it, deception was integral to the study. The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

How humans are likely to respond in such a scenario is an urgent issue and a worthy subject of academic research. In their preliminary results, the researchers concluded that AI arguments can be “highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.” (Because the researchers finally agreed this week not to publish a paper about the experiment, the accuracy of that verdict will probably never be fully assessed, which is its own sort of shame.) The prospect of having your mind changed by something that doesn’t have one is deeply unsettling. That persuasive superpower could also be employed for nefarious ends.

Still, scientists don’t have to flout the norms of experimenting on human subjects in order to evaluate the threat. “The general finding that AI can be on the upper end of human persuasiveness—more persuasive than most humans—jibes with what laboratory experiments have found,” Christian Tarsney, a senior research fellow at the University of Texas at Austin, told me. In one recent laboratory experiment, participants who believed in conspiracy theories voluntarily chatted with an AI; after three exchanges, about a quarter of them lost faith in their previous beliefs. Another found that ChatGPT produced more persuasive disinformation than humans, and that participants who were asked to distinguish between real posts and those written by AI could not effectively do so.

Giovanni Spitale, the lead author of that study, also happens to be a scholar at the University of Zurich, and has been in touch with one of the researchers behind the Reddit AI experiment, who asked him not to reveal their identity. “We are receiving dozens of death threats,” the researcher wrote to him, in a message Spitale shared with me. “Please keep the secret for the safety of my family.”

One likely reason the backlash has been so strong is that, on a platform as close-knit as Reddit, betrayal cuts deep. “One of the pillars of that community is mutual trust,” Spitale told me; it’s part of the reason he opposes experimenting on Redditors without their knowledge. Several scholars I spoke with about this latest ethical quandary compared it—unfavorably—to Facebook’s infamous emotional-contagion study. For one week in 2012, Facebook altered users’ News Feeds to see if viewing more or less positive content changed their posting habits. (It did, a little bit.) Casey Fiesler, an associate professor at the University of Colorado at Boulder who studies ethics and online communities, told me that the emotional-contagion study pales in comparison with what the Zurich researchers did. “People were upset about that but not in the way that this Reddit community is upset,” she told me. “This felt a lot more personal.”

The reaction probably also has to do with the unnerving notion that ChatGPT knows what buttons to push in our minds. It’s one thing to be fooled by some human Facebook researchers with dubious ethical standards, and another entirely to be duped by a cosplaying chatbot. I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already."

Friday, May 02, 2025

NASA Urges Public To Look At Night Sky Now For ‘Nova’ Location

Topline

“In the wake of 2024’s total solar eclipse and rare displays of the Northern Lights, a third once-in-a-lifetime sight could be possible in 2025 as a star explodes as a nova for the first time since 1946. With T Coronae Borealis (also called T CrB and the “Blaze Star”) due to become 1,000 times brighter than normal and become visible to the naked eye for the first time since 1946, NASA is advising sky-watchers to get to know the patch of sky it’s going to appear in.

Key Facts

T Coronae Borealis is a dim star that will briefly become a nova (new star) sometime during 2025, increasing from +10 magnitude, which is invisible to the naked eye, to +2 magnitude, which is about as bright as Polaris, the North Star (a rough conversion of that jump into a brightness ratio follows these key facts).

It's a “cataclysmic variable star” and a “recurrent nova” — a star that brightens dramatically on a known timescale, in this case about 80 years. That last happened in 1946, so it's due any day now.

Astronomers first predicted T CrB would explode between April and September 2024 after it suddenly dimmed in 2023 — a telltale sign that an explosion is imminent. However, that didn't happen. It was then predicted by scientists to “go nova” on Thursday, March 27, 2025, but that also failed to happen.

The “Blaze Star” is about 3,000 light-years away from the solar system. When it does finally “go nova,” it will become visible to the naked eye for a few nights.
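
For a rough sense of how those magnitude figures square with the “1,000 times brighter” claim above: the astronomical magnitude scale is logarithmic, with each difference of 5 magnitudes corresponding to a factor of 100 in brightness. The following back-of-the-envelope conversion is my own illustrative arithmetic using the +10 and +2 figures quoted here, not a number from NASA or the article:

\[
\frac{F_{\text{nova}}}{F_{\text{quiescent}}} = 10^{0.4\,\Delta m} = 10^{0.4 \times (10 - 2)} = 10^{3.2} \approx 1600
\]

So an eight-magnitude jump works out to an increase of very roughly a thousand to sixteen hundred times in brightness, consistent with the order of magnitude of the figure quoted above.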

How To Find T Coronae Borealis (T CrB & ‘the Blaze Star’)

Unless you know where that star sits in the night sky, the nova won’t be much of an event to witness. NASA’s Preston Dyches makes that point in a new blog post published this week, which includes a valuable sky chart showing everyone where to look.

T Coronae Borealis is a dim star in a constellation called Corona Borealis, the “Northern Crown,” a crescent of seven stars easily visible after dark from the Northern Hemisphere. “You’ll find Corona Borealis right in between the two bright stars Arcturus and Vega, and you can use the Big Dipper’s handle to point you to the right part of the sky,” writes Dyches. “Try having a look for it on clear, dark nights before the nova, so you’ll have a comparison when a new star suddenly becomes visible there.”

He advises practicing finding Corona Borealis in the eastern part of the sky during the first half of the night in May, “so you have a point of comparison when the T CrB nova appears there.”

The Science Behind The Nova

T Coronae Borealis is a binary star system made up of two stars at the end of their lives: a white dwarf star that has exhausted its fuel and is cooling down, and a red giant star that is cooling and expanding as it ages, shedding hydrogen as it does.

That hydrogen gathers on the surface of the white dwarf. When it reaches a critical point, it triggers a thermonuclear explosion that causes a sudden and dramatic increase in brightness. The explosion affects only the white dwarf’s surface, leaving the star itself intact, so the whole process can occur again and again, according to NASA.”