Armwood Technology Blog
A technology blog focusing on portable devices. I have a news blog @ News. I have a Culture, Politics and Religion blog @ Opinion, and my domain is @ Armwood.Com. I have a Jazz blog @ Jazz. I have a Human Rights blog @ Law.
Thursday, May 29, 2025
Southwest Joins Other Airlines in Tightening Battery Rules: What to Know - The New York Times
Rules for Portable Batteries on Planes Are Changing. Here’s What to Know.
"You might have to repack or turn off batteries before boarding flights with certain carriers. Southwest is becoming the first major U.S. airlines to tighten restrictions.

The rules around flying with portable batteries are becoming more confusing as some airlines and governments change their policies, citing the risk of fires.
Southwest Airlines is the first of the four biggest U.S. carriers to tighten its rules, citing incidents involving batteries on flights across airlines. Starting Wednesday, it will require passengers to keep portable chargers visible while using them.
Airlines in South Korea, Taiwan, Thailand, Singapore, Malaysia and Hong Kong have also either changed their rules in a similar way or banned the use of portable chargers in-flight since a fire destroyed an Air Busan plane on the tarmac in South Korea in January. It was one of several recent aviation episodes that made travelers anxious.
There is no definitive link between portable batteries and the Air Busan fire, and an investigation is underway.
Because rules vary across airlines, you might find yourself having to repack or turn off batteries when boarding a plane. Here’s what you need to know.
Which airlines have changed their rules and why?
Southwest passengers will not be allowed to charge devices while they are stowed in overhead bins. The rule will help flight attendants act more quickly if a battery overheats or catches fire, Southwest said in a statement.
The Federal Aviation Administration requires only that devices containing lithium-ion batteries be kept in carry-on baggage, and the European Union’s aviation regulator has similar rules.
Rules vary among Europe’s biggest carriers. Ryanair, the low-cost Irish airline, tells passengers to remove lithium batteries before storing bags overhead. Britain’s EasyJet and Germany’s Lufthansa do not.
The South Korean government now requires that passengers keep portable chargers within arm’s reach and out of overhead bins, saying that the rule was implemented to ease anxiety about the risk of battery fires.
Major Taiwanese airlines implemented similar changes after the Air Busan episode. EVA Air and China Airlines announced a ban on using or charging power banks on their planes, although the batteries can still be stored in overhead compartments.
Thai Airways, Thailand’s flagship airline, said it would implement a similar ban on using and charging power banks, citing “incidents of in-flight fires on international airlines, suspected to be linked to power bank usage.” Singapore Airlines and its budget subsidiary, Scoot, also announced new rules.
Malaysia Airlines, the country’s flag carrier, has banned using and charging power banks, and storing them in overhead bins. Hong Kong’s aviation regulator has put a similar regulation into effect for all of the territory’s airlines, including Cathay Pacific.
Since 2016, the International Civil Aviation Organization, the United Nations agency that coordinates global aviation regulations, has banned lithium-ion batteries, the kind commonly found in power banks, from the cargo holds of passenger planes.
But there is no industry standard on how airlines regulate power banks, said Mitchell Fox, the director of the Asia Pacific Center for Aviation Safety.
They have become a part of everyday life only in recent years, and some consumers may be unaware of the risks, he said.
What risks do these batteries pose?
Lithium-ion batteries have been used for decades to power smartphones and laptops, and are commonly used in portable power banks.
Each battery has a cell that can heat up quickly in a chain reaction that causes it to catch fire or explode. The F.A.A. warns that this reaction can happen if the battery is damaged, overcharged, overheated or exposed to water. Manufacturing defects are another potential cause.
Some products that use lithium-ion batteries, including smartphones, laptops and electric vehicles, have strict regulations and quality control standards, said Neeraj Sharma, a professor of chemistry at the University of New South Wales in Sydney, Australia, who studies batteries. Others, like power banks, e-cigarettes, e-bikes and scooters, are less regulated, he said, raising the risk of malfunction.
“Make sure you get your devices from reputable manufacturers,” Professor Sharma said.
How often do batteries catch fire on planes?
The frequency of incidents involving lithium-ion batteries on U.S. airlines has been increasing. There were 84 last year, up from 32 in 2016. These included cases — in the cabins of both passenger and cargo planes — where batteries caught fire, emitted smoke or overheated. Portable chargers were the biggest culprit, followed by e-cigarettes, according to the F.A.A.
Airlines around the world have for years required passengers to pack spare lithium-ion batteries in their carry-on luggage instead of in their checked bags so that any smoke or fire from the batteries can be noticed quickly. In the cargo hold, a fire might not be detected by a plane’s automatic fire-extinguishing system until it has become a critical problem.
What do flight crews do when there is a fire?
Fires in plane cabins that are caused by lithium-ion batteries are rarely deadly, and flight crews are generally well prepared to deal with them, said Keith Tonkin, the managing director of Aviation Projects, an aviation consulting company in Brisbane, Australia.
In many cases, passengers will notice their electronics overheating and inform crew members, who put the device in a thermal containment bag or water, with little disruption to the flight, according to the F.A.A.
Yan Zhuang is a Times reporter in Seoul who covers breaking news.
Francesca Regalado is a reporter on the Express desk, based in Seoul."
Tuesday, May 27, 2025
Tech’s Trump Whisperer, Tim Cook, Goes Quiet as His Influence Fades - The New York Times
Tech’s Trump Whisperer, Tim Cook, Goes Quiet as His Influence Fades
"Apple’s chief executive has gone from winning President Trump’s praise to drawing his ire, deepening the company’s woes in a very bad year.

In the run-up to President Trump’s recent trip to the Middle East, the White House encouraged chief executives and representatives of many U.S. companies to join him. Tim Cook, Apple’s chief executive, declined, said two people familiar with the decision.
The choice appeared to irritate Mr. Trump. As he hopscotched from Saudi Arabia to the United Arab Emirates, Mr. Trump took a number of shots at Mr. Cook.
During his speech in Riyadh, Mr. Trump paused to praise Jensen Huang, the chief executive of Nvidia, for traveling to the Middle East along with the White House delegation. Then he knocked Mr. Cook.
“I mean, Tim Cook isn’t here but you are,” Mr. Trump said to Mr. Huang at an event attended by chief executives like Larry Fink of the asset manager BlackRock, Sam Altman of OpenAI, Jane Fraser of Citigroup and Lisa Su of the semiconductor company AMD.
Later in Qatar, Mr. Trump said he “had a little problem with Tim Cook.” The president praised Apple’s investment in the United States, then said he had told Mr. Cook, “But now I hear you’re building all over India. I don’t want you building in India.”
On Friday morning, Mr. Trump caught much of his own administration and Apple’s leadership off guard with a social media post threatening tariffs of 25 percent on iPhones made anywhere except the United States. The post thrust Apple back into the administration’s cross hairs a little over a month after Mr. Cook had lobbied and won an exemption from a 145 percent tariff on iPhones assembled in China and sold in the United States.
The new tariff threat is a reversal of fortune for Mr. Cook. In eight years, he’s gone from one of Mr. Trump’s most beloved chief executives — whom the president mistakenly and humorously called Tim Apple in 2019 — to one of the White House’s biggest corporate targets. The breakdown has been enough to make insiders across Washington and Silicon Valley wonder: Has tech’s leading Trump whisperer lost his voice?
Nu Wexler, principal at Four Corners Public Affairs and a former Washington policy communications executive at Google and Facebook, said Mr. Cook’s “very public relationship” with Mr. Trump has backfired.
“It has put Apple at a disadvantage because every move, including a potential concession from Trump, is scrutinized,” Mr. Wexler said. Because Mr. Trump didn’t “have much incentive to either go easy on Apple or cut a deal on tariffs,” he said, “the incentive to crack down is much stronger.”
Apple did not provide comment. The White House declined to comment on the Middle East trip.
Mr. Trump’s new tariffs followed a report by The Financial Times that Apple’s supplier Foxconn would spend $1.5 billion on a plant in India for iPhones. The president said the tariffs would begin at the end of June and affect all smartphones made abroad, including Samsung’s devices.
Earlier in the week, Mr. Cook had visited Washington for a meeting with Treasury Secretary Scott Bessent. During an appearance on Fox News on Friday, Mr. Bessent said the administration considered overseas production of semiconductors and electronics components “one of our greatest vulnerabilities,” which Apple could help address.
“President Trump has been consistently clear about the need to reshore manufacturing that is critical to our national and economic security, including for semiconductors and semiconductor products,” said Kush Desai, a White House spokesman. He added that the administration “continues to have a productive relationship with Apple.”
The timing of the White House’s new tariff plan couldn’t be worse for Mr. Cook, who has led Apple for nearly 14 years.
Last month, the company suffered a stinging defeat in an App Store trial. The judge in the trial rebuked Apple executives, saying they had “outright lied under oath” and that “Cook chose poorly,” and ruled that Apple had to change how it operates the App Store. Jony Ive, Apple’s former chief designer who became estranged from Mr. Cook and left the company in 2019, joined OpenAI last week to build a potential iPhone competitor. Apple’s Vision Pro mixed reality headset, released in January 2024 to fanfare, has been a disappointment. And in March, Apple postponed its promised release of a new Siri, raising fresh doubts about its ability to compete in the industry’s race to adopt artificial intelligence.
Still, Apple’s market value has increased by more than $2.5 trillion under his leadership, or about $500 million a day since 2011. And Apple remains a moneymaking machine, generating an annual profit of nearly $100 billion.
With Mr. Trump’s re-election, Mr. Cook appeared to be in a strong position to help Apple navigate the new administration. In 2019, Mr. Trump said Mr. Cook was a “great executive because he calls me and others don’t.”
Mr. Cook still occasionally pushed back on the president’s agenda. During an appearance at a conference for Fortune magazine in late 2017, Mr. Cook explained that the company would love to make things in the United States but that China had more engineers and better skills. He appeared before a live audience on MSNBC a few months later and criticized the president’s policy on immigration.
This year, their warm relations have run cold. Mr. Trump is more determined to quickly move manufacturing to the United States, which has made Apple a primary target.
On other administration priorities like dismantling diversity initiatives, Mr. Cook has tried to take a diplomatic position. At its annual general shareholder meeting in February, he said that Apple remained committed to its “North Star of dignity and respect for everyone” and would continue to “create a culture of belonging,” but that it might need to make changes to comply with a changing legal landscape.
The bigger problem has been trade. Apple has stopped short of committing to making the iPhone, iPad or Mac laptops in the United States. Instead, the company has moved to assemble more iPhones in India.
Apple has tried to head off Mr. Trump’s criticisms of its overseas manufacturing by promising to spend $500 billion in the United States over the next four years. Mr. Cook also has emphasized that the company will source 19 billion chips from the United States this year, and will start making A.I. servers in Houston.
Servers haven’t satisfied Mr. Trump. He wants iPhones made in the United States badly enough to create what amounts to an iPhone tariff. It would increase the cost of shipping an iPhone from India or China to the United States by 25 percent. The costs aren’t so staggering that they would damage Apple’s business, but Mr. Trump could always ratchet up the levies until he gets his wish.
“If they’re going to sell it in America, I want it to be built in the United States,” Mr. Trump said on Friday. “They’re able to do that.”
Mr. Cook hasn’t responded publicly.
Tripp Mickle reports on Apple and Silicon Valley for The Times and is based in San Francisco. His focus on Apple includes product launches, manufacturing issues and political challenges. He also writes about trends across the tech industry, including layoffs, generative A.I. and robot taxis."
Sunday, May 25, 2025
Apple Used China to Make a Profit. What China Got in Return Is Scarier.
"In “Apple in China,” Patrick McGee argues that by training an army of manufacturers in a “ruthless authoritarian state,” the company has created an existential vulnerability for the entire world.

APPLE IN CHINA: The Capture of the World’s Greatest Company, by Patrick McGee
A little more than a decade ago, foreign journalists living in Beijing, including myself, met for a long chat with a top Chinese diplomat. Those were different days, when high-ranking Chinese officials were still meeting with members of the Western press corps. The diplomat whom we met was charming, funny, fluent in English. She also had the latest iPhone in front of her on the table.
I noticed the Apple gadget because at the time, Chinese state news media were unleashing invectives on the Cupertino, Calif.-based company for supposedly cheating Chinese consumers. (It wasn’t true.) There were rumors circulating that Chinese government officials were being told not to flaunt American status symbols. The diplomat’s accouterment proved that wrong.
At the time, one could make the argument that China’s economic modernization was being accompanied by a parallel, if somewhat more laggardly, political reform. But the advent in 2012 of Xi Jinping, the Chinese leader who has consolidated power and re-established the primacy of the Chinese Communist Party, has shattered those hopes. And, as Patrick McGee makes devastatingly clear in his smart and comprehensive “Apple in China,” the American company’s decision under Tim Cook, the current C.E.O., to manufacture about 90 percent of its products in China has created an existential vulnerability not just for Apple, but for the United States — nurturing the conditions for Chinese technology to outpace American innovation.
McGee, who was the lead Apple reporter for The Financial Times and previously covered Asian markets from Hong Kong, takes what we instinctively know — “how Apple used China as a base from which to become the world’s most valuable company, and in doing so, bound its future inextricably to a ruthless authoritarian state” — and comes up with a startling conclusion, backed by meticulous reporting: “that China wouldn’t be China today without Apple.”
Apple says that it has trained more than 28 million workers in China since 2008, which McGee notes is larger than the entire labor force of California. The company’s annual investment in China — not even counting the value of hardware, “which would more than double the figure,” McGee writes — exceeds the total amount the Biden administration dedicated for a “once-in-a-generation” initiative to boost American computer chip production.
“This rapid consolidation reflects a transfer of technology and know-how so consequential,” McGee writes, “as to constitute a geopolitical event, like the fall of the Berlin Wall.”
McGee has a journalist’s knack for developing scenes with a few curated details, and he organizes his narrative chronologically, starting with Apple’s origins as a renegade upstart under Steve Jobs in the 1970s and ’80s. After Jobs’s firing and rehiring comes a corporate mind shift in which a vertically integrated firm falls for the allure of contract manufacturing, sending its engineers abroad to train low-paid workers in how to churn out ever more complicated electronics.
We only really get to Apple in China about 90 pages into the book, and that China, in the mid- to late 1990s, was mainly attractive because of what one China scholar called “low wages, low welfare and low human rights.” McGee relates how one Apple engineer, visiting suppliers in the southern Chinese manufacturing center of Shenzhen, was horrified that there were no elevators in the “slapdash” facility, and that the stairs were built with troubling irregularity: with, say, 12 steps (of varying heights) between the first and second floors, then 18 to the next, then 16, then 24.
But China at the turn of the millennium was in the process of joining the World Trade Organization, and its leaders were banking on an export-led economy that would learn from foreign investors. Starting in the 2000s the Taiwanese mega-supplier Foxconn constructed entire settlements for Chinese workers building Apple electronics. First up on the new assembly lines were iMacs that were produced by what became known as “China speed.”
Less than 15 years after Chinese workers began making Apple products en masse, Chinese consumers were buying them en masse, too. Covering China at the time, I chafed at the popular narrative that reduced Apple’s presence in China to a tale of downtrodden workers at Foxconn and other suppliers. Yes, there were nets outside factory dorms to prevent suicides; and wages remained low. Even Apple admitted to alarming labor abuses in its Chinese supply chain.
But that was only half the story. The iPhone in China signified success, an individualistic, American-accented flavor that seemed to delight both veteran diplomats and Foxconn workers I got to know in southwest China. Those of us who had lived in China for years could see that life was getting freer and richer for most Chinese. By the mid-2010s, it was the United States that seemed behind in terms of integrating apps into daily life. In China, at least in the big cities, we were already living in the tech future.
Yet there were episodes of unease. After Xi came to power, state media campaigns targeted Apple’s Western “arrogance.” Apple acquiesced to Beijing’s demands that it remove the New York Times app from its online store in China and keep Chinese user data in China rather than the United States, prompting worries about government intrusion. As Xi cracked down on labor rights activism, more independent audits of the Apple supply chain ceased.
In 2015, Apple was the largest corporate investor in China, to the tune of about $55 billion a year, according to internal documents McGee obtained for this book. (Cook himself told the Chinese media that the company had created nearly five million jobs there: “I’m not sure there are too many companies, domestic or foreign, who can say that.”) At the same time, Xi laid out “Made in China 2025,” his blueprint for achieving technological self-sufficiency in the next decade, dependent on Apple being what McGee calls “a mass enabler of ‘Indigenous innovation.’”
“As Apple taught the supply chain how to perfect multi-touch glass and make the thousand components within the iPhone,” he writes, “Apple’s suppliers took what they knew and offered it to homegrown companies led by Huawei, Xiaomi, Vivo and Oppo.” Today, some of these premium products come with specs that are increasingly ahead of American design, and have outsold Apple in many major markets.
Sometimes, McGee is too comprehensive. He draws interesting portraits of characters who disappear after a few paragraphs. We do not need to know the full name of the law firm that Apple hired in preparation for a possible bankruptcy in the mid-1990s or even the minutiae of pre-China personnel wrangles, especially when centuries of Chinese history are compressed to less than a page. There are a few Chinese misspellings and miscues — the surname Wang is not, in fact, pronounced quite as “Wong.” And it would have been nice to have gotten more perspectives of Chinese people.
But these are quibbles with an otherwise persuasive exposé of the trillion-dollar company’s uncomfortably close relationship with the global power. China may have enabled Apple to become one of the most profitable companies in the world, but the exploitation goes both ways: This is not just a story of China making Apple, but of Apple making China. Given Xi’s authoritarian hold on power, what began as a feat of manufacturing has troubling consequences for the entire world.
APPLE IN CHINA: The Capture of the World’s Greatest Company | By Patrick McGee | Scribner | 437 pp. | $32
Hannah Beech is a Times reporter based in Bangkok who has been covering Asia for more than 25 years. She focuses on in-depth and investigative stories."
Monday, May 05, 2025
A.I. Hallucinations Are Getting Worse, Even as New Systems Become More Powerful - The New York Times
A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse
"A new wave of “reasoning” systems from companies like OpenAI is producing incorrect information more often. Even the companies don’t know why.

Last month, an A.I. bot that handles tech support for Cursor, an up-and-coming tool for computer programmers, alerted several customers about a change in company policy. It said they were no longer allowed to use Cursor on more than one computer.
In angry posts to internet message boards, the customers complained. Some canceled their Cursor accounts. And some got even angrier when they realized what had happened: The A.I. bot had announced a policy change that did not exist.
“We have no such policy. You’re of course free to use Cursor on multiple machines,” the company’s chief executive and co-founder, Michael Truell, wrote in a Reddit post. “Unfortunately, this is an incorrect response from a front-line A.I. support bot.”
More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information.
The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.
Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.
These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. “That will never go away.”
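What “guessing by probability” means in practice can be shown with a toy sketch. The short Python snippet below samples from an invented next-token distribution; the words and numbers are purely illustrative and come from no real model, but they show why a system that always samples from probabilities will sometimes assert a falsehood even when the right answer is the single most likely one.

```python
import random

# Toy next-token distribution for a prompt like "The capital of Australia is".
# The numbers are invented for illustration; they come from no real model.
next_token_probs = {
    "Canberra": 0.62,   # correct
    "Sydney": 0.30,     # plausible but wrong
    "Melbourne": 0.08,  # plausible but wrong
}

def sample_token(probs):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Even with the correct answer as the most likely token, a probabilistic
# sampler still asserts something false a sizable fraction of the time.
trials = 10_000
wrong = sum(sample_token(next_token_probs) != "Canberra" for _ in range(trials))
print(f"Wrong answers in this toy model: {wrong / trials:.1%}")  # roughly 38%
```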

For several years, this phenomenon has raised concerns about the reliability of these systems. Though they are useful in some situations — like writing term papers, summarizing office documents and generating computer code — their mistakes can cause problems.
The A.I. bots tied to search engines like Google and Bing sometimes generate search results that are laughably wrong. If you ask them for a good marathon on the West Coast, they might suggest a race in Philadelphia. If they tell you the number of households in Illinois, they might cite a source that does not include that information.
Those hallucinations may not be a big problem for many people, but they are a serious issue for anyone using the technology with court documents, medical information or sensitive business data.
“You spend a lot of time trying to figure out which responses are factual and which aren’t,” said Pratik Verma, co-founder and chief executive of Okahu, a company that helps businesses navigate the hallucination problem. “Not dealing with these errors properly basically eliminates the value of A.I. systems, which are supposed to automate tasks for you.”
Cursor and Mr. Truell did not respond to requests for comment.
For more than two years, companies like OpenAI and Google steadily improved their A.I. systems and reduced the frequency of these errors. But with the use of new reasoning systems, errors are rising. The latest OpenAI systems hallucinate at a higher rate than the company’s previous system, according to the company’s own tests.
The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent.
When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51 percent and 79 percent. The previous system, o1, hallucinated 44 percent of the time.
In a paper detailing the tests, OpenAI said more research was needed to understand the cause of these results. Because A.I. systems learn from more data than people can wrap their heads around, technologists struggle to determine why they behave in the ways they do.
“Hallucinations are not inherently more prevalent in reasoning models, though we are actively working to reduce the higher rates of hallucination we saw in o3 and o4-mini,” a company spokeswoman, Gaby Raila, said. “We’ll continue our research on hallucinations across all models to improve accuracy and reliability.”
Hannaneh Hajishirzi, a professor at the University of Washington and a researcher with the Allen Institute for Artificial Intelligence, is part of a team that recently devised a way of tracing a system’s behavior back to the individual pieces of data it was trained on. But because systems learn from so much data — and because they can generate almost anything — this new tool can’t explain everything. “We still don’t know how these models work exactly,” she said.
Tests by independent companies and researchers indicate that hallucination rates are also rising for reasoning models from companies such as Google and DeepSeek.
Since late 2023, Mr. Awadallah’s company, Vectara, has tracked how often chatbots veer from the truth. The company asks these systems to perform a straightforward task that is readily verified: Summarize specific news articles. Even then, chatbots persistently invent information.
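To give a rough sense of what checking a summary for invented information involves, here is a deliberately naive Python heuristic: it flags capitalized names that appear in a summary but nowhere in the source text. This is a sketch under a toy definition of “invented,” not Vectara’s actual method, and the example strings are made up for illustration.

```python
import re

def invented_names(source: str, summary: str) -> set:
    """Crude hallucination check: capitalized words in the summary
    that never appear in the source text (toy heuristic only)."""
    names = lambda text: set(re.findall(r"\b[A-Z][a-z]+\b", text))
    return names(summary) - names(source)

source = ("Southwest Airlines said passengers must keep portable "
          "chargers visible while using them.")
summary = ("Southwest and Delta now require chargers to stay visible "
           "during use.")
print(invented_names(source, summary))  # {'Delta'} -- never in the source
```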
Vectara’s original research estimated that in this situation chatbots made up information at least 3 percent of the time and sometimes as much as 27 percent.
In the year and a half since, companies such as OpenAI and Google pushed those numbers down into the 1 or 2 percent range. Others, such as the San Francisco start-up Anthropic, hovered around 4 percent. But hallucination rates on this test have risen with reasoning systems. DeepSeek’s reasoning system, R1, hallucinated 14.3 percent of the time. OpenAI’s o3 climbed to 6.8 percent.
(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)
For years, companies like OpenAI relied on a simple concept: The more internet data they fed into their A.I. systems, the better those systems would perform. But they used up just about all the English text on the internet, which meant they needed a new way of improving their chatbots.
So these companies are leaning more heavily on a technique that scientists call reinforcement learning. With this process, a system can learn behavior through trial and error. It is working well in certain areas, like math and computer programming. But it is falling short in other areas.
“The way these systems are trained, they will start focusing on one task — and start forgetting about others,” said Laura Perez-Beltrachini, a researcher at the University of Edinburgh who is among a team closely examining the hallucination problem.
Another issue is that reasoning models are designed to spend time “thinking” through complex problems before settling on an answer. As they try to tackle a problem step by step, they run the risk of hallucinating at each step. The errors can compound as they spend more time thinking.
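A back-of-the-envelope calculation shows how quickly step-by-step errors can compound. Assuming, purely for illustration, that each step is independently correct with probability p, the chance that an n-step chain stays error-free is p to the power n, so the chance of at least one error grows rapidly with the number of steps:

```python
# Back-of-the-envelope model of compounding errors in a reasoning chain.
# Assumption (for illustration only): each step is independently correct
# with probability p. Real reasoning steps are neither independent nor
# uniformly reliable, so treat these numbers as a sketch, not a measurement.
for p in (0.99, 0.95, 0.90):
    for n in (1, 5, 20, 50):
        chance_of_error = 1 - p ** n
        print(f"per-step accuracy {p:.0%}, {n:2d} steps -> "
              f"chance of at least one error: {chance_of_error:.1%}")
```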
The latest bots reveal each step to users, which means the users may see each error, too. Researchers have also found that in many cases, the steps displayed by a bot are unrelated to the answer it eventually delivers.
“What the system says it is thinking is not necessarily what it is thinking,” said Aryo Pradipta Gema, an A.I. researcher at the University of Edinburgh and a fellow at Anthropic.
Cade Metz writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.
Karen Weise writes about technology for The Times and is based in Seattle. Her coverage focuses on Amazon and Microsoft, two of the most powerful companies in America."
Saturday, May 03, 2025
The Secret AI Experiment That Sent Reddit Into a Frenzy - The Atlantic
‘The Worst Internet-Research Ethics Violation I Have Ever Seen’
"The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment.

When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.
So earlier this week, when members of a popular subreddit learned that their community had been infiltrated by undercover researchers posting AI-written comments and passing them off as human thoughts, the Redditors were predictably incensed. They called the experiment “violating,” “shameful,” “infuriating,” and “very disturbing.” As the backlash intensified, the researchers went silent, refusing to reveal their identity or answer questions about their methodology. The university that employs them has announced that it’s investigating. Meanwhile, Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”
Joining the chorus of disapproval were fellow internet researchers, who condemned what they saw as a plainly unethical experiment. Amy Bruckman, a professor at the Georgia Institute of Technology who has studied online communities for more than two decades, told me the Reddit fiasco is “the worst internet-research ethics violation I have ever seen, no contest.” What’s more, she and others worry that the uproar could undermine the work of scholars who are using more conventional methods to study a crucial problem: how AI influences the way humans think and relate to one another.
The researchers, based at the University of Zurich, wanted to find out whether AI-generated responses could change people’s views. So they headed to the aptly named subreddit r/changemyview, in which users debate important societal issues, along with plenty of trivial topics, and award points to posts that talk them out of their original position. Over the course of four months, the researchers posted more than 1,000 AI-generated comments on pitbulls (is aggression the fault of the breed or the owner?), the housing crisis (is living with your parents the solution?) and DEI programs (were they destined to fail?). The AI commenters argued that browsing Reddit is a waste of time and that the “controlled demolition” 9/11 conspiracy theory has some merit. And as they offered their computer-generated opinions, they also shared their backstories. One claimed to be a trauma counselor; another described himself as a victim of statutory rape.
In one sense, the AI comments appear to have been rather effective. When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters, according to preliminary findings that the researchers shared with Reddit moderators and later made private. (This analysis, of course, assumes that no one else in the subreddit was using AI to hone their arguments.)
The researchers had a tougher time convincing Redditors that their covert study was justified. After they had finished the experiment, they contacted the subreddit’s moderators, revealed their identity, and requested to “debrief” the subreddit—that is, to announce to members that for months, they had been unwitting subjects in a scientific experiment. “They were rather surprised that we had such a negative reaction to the experiment,” says one moderator, who asked to be identified by his username, LucidLeviathan, to protect his privacy. According to LucidLeviathan, the moderators requested that the researchers not publish such tainted work, and that they issue an apology. The researchers refused. After more than a month of back-and-forth, the moderators revealed what they had learned about the experiment (minus the researchers’ names) to the rest of the subreddit, making clear their disapproval.
When the moderators sent a complaint to the University of Zurich, the university noted in its response that the “project yields important insights, and the risks (e.g. trauma etc.) are minimal,” according to an excerpt posted by moderators. In a statement to me, a university spokesperson said that the ethics board had received notice of the study last month, advised the researchers to comply with the subreddit’s rules, and “intends to adopt a stricter review process in the future.” Meanwhile, the researchers defended their approach in a Reddit comment, arguing that “none of the comments advocate for harmful positions” and that each AI-generated comment was reviewed by a human team member before being posted. (I sent an email to an anonymized address for the researchers, posted by Reddit moderators, and received a reply that directed my inquiries to the university.)
Perhaps the most telling aspect of the Zurich researchers’ defense was that, as they saw it, deception was integral to the study. The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.
How humans are likely to respond in such a scenario is an urgent issue and a worthy subject of academic research. In their preliminary results, the researchers concluded that AI arguments can be “highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.” (Because the researchers finally agreed this week not to publish a paper about the experiment, the accuracy of that verdict will probably never be fully assessed, which is its own sort of shame.) The prospect of having your mind changed by something that doesn’t have one is deeply unsettling. That persuasive superpower could also be employed for nefarious ends.
Still, scientists don’t have to flout the norms of experimenting on human subjects in order to evaluate the threat. “The general finding that AI can be on the upper end of human persuasiveness—more persuasive than most humans—jibes with what laboratory experiments have found,” Christian Tarsney, a senior research fellow at the University of Texas at Austin, told me. In one recent laboratory experiment, participants who believed in conspiracy theories voluntarily chatted with an AI; after three exchanges, about a quarter of them lost faith in their previous beliefs. Another found that ChatGPT produced more persuasive disinformation than humans, and that participants who were asked to distinguish between real posts and those written by AI could not effectively do so.
Giovanni Spitale, the lead author of that study, also happens to be a scholar at the University of Zurich, and has been in touch with one of the researchers behind the Reddit AI experiment, who asked him not to reveal their identity. “We are receiving dozens of death threats,” the researcher wrote to him, in a message Spitale shared with me. “Please keep the secret for the safety of my family.”
One likely reason the backlash has been so strong is that, on a platform as close-knit as Reddit, betrayal cuts deep. “One of the pillars of that community is mutual trust,” Spitale told me; it’s part of the reason he opposes experimenting on Redditors without their knowledge. Several scholars I spoke with about this latest ethical quandary compared it—unfavorably—to Facebook’s infamous emotional-contagion study. For one week in 2012, Facebook altered users’ News Feed to see if viewing more or less positive content changed their posting habits. (It did, a little bit.) Casey Fiesler, an associate professor at the University of Colorado at Boulder who studies ethics and online communities, told me that the emotional-contagion study pales in comparison with what the Zurich researchers did. “People were upset about that but not in the way that this Reddit community is upset,” she told me. “This felt a lot more personal.”
The reaction probably also has to do with the unnerving notion that ChatGPT knows what buttons to push in our minds. It’s one thing to be fooled by some human Facebook researchers with dubious ethical standards, and another entirely to be duped by a cosplaying chatbot. I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already."