Contact Me By Email

Friday, December 05, 2025

MSNBC will change its name to MS NOW as part of split from NBC | PBS News (This has already happened)

MSNBC will change its name to MS NOW as part of split from NBC

Media-MSNBC-Name Change

"Television’s MSNBC news network is changing its name to My Source News Opinion World, or MS NOW for short, as part of its corporate divorce from NBC.

The network, which appeals to liberal audiences with a stable of personalities including Rachel Maddow, Ari Melber and Nicolle Wallace, has been building its own separate news division from NBC News. It will also remove NBC’s peacock symbol from its logo as part of the change, which will take effect later this year.


The name change was ordered by NBC Universal, which last November spun off the cable networks USA, CNBC, MSNBC, E! Entertainment, Oxygen and the Golf Channel into a separate company, called Versant. None of the other networks are changing their names.

MSNBC got its name upon its formation in 1996, as a partnership then between Microsoft and NBC.

Name changes always carry an inherent risk, and MSNBC President Rebecca Kutler said that for employees, it is hard to imagine the network under a different name. “This was not a decision that was made quickly or without significant debate,” she said in a memo to staff.

“During this time of transition, NBC Universal decided that our brand requires a new, separate identity,” she said. “This decision now allows us to set our own course and assert our independence as we continue to build our own modern newsgathering organization.”

Still, it’s noteworthy that the business channel CNBC is leaving “NBC” in its name. MSNBC argues that CNBC has always maintained a greater separation and, with its business focus, is less likely to cover many of the same topics.

Still, the affiliation between a news division that tries to play it safe and one that doesn’t hide its liberal bent has long caused tension. President Donald Trump refers to the cable network as “MSDNC,” for Democratic National Committee. Even before the corporate change, NBC News has been reducing the use of its personalities on MSNBC.

Some NBC News personalities, like Jacob Soboroff, Vaughn Hillyard, Brandy Zadrozny and Antonia Hylton, have joined MSNBC. The network has also hired Carol Leonnig, Catherine Rampell and Jackie Alemany from the Washington Post, and Eugene Daniels from Politico.

Maddow, in a recent episode of Pivot, noted that MSNBC will no longer have to compete with NBC News programs for reporting product from out in the field — meaning it will no longer get the “leftovers.”

“In this case, we can apply our own instincts, our own queries, our own priorities, to getting stuff that we need from reporters and correspondents,” Maddow said. “And so it’s gonna be better.”

A free press is a cornerstone of a healthy democracy."

MSNBC will change its name to MS NOW as part of split from NBC | PBS News

Netflix to Buy Warner Bros in $83 Billion Deal - The New York Times

Netflix to Buy Warner Bros in $83 Billion Deal


https://www.nytimes.com/2025/12/05/business/warner-brothers-discovery-netflix.html

Netflix announced plans on Friday to acquire Warner Bros. Discovery’s studio and streaming business, in a deal that will send shock waves through Hollywood and the broader media landscape.

The cash-and-stock deal values the business at $82.7 billion, including debt. The acquisition is expected to close after Warner Bros. Discovery carves out its cable unit, a separation the companies expect to be completed by the third quarter of 2026. That means there will be a separate public company controlling channels like CNN, TNT and Discovery.

Netflix is already the world’s largest paid streaming service, with more than 300 million subscribers. Bulking up with Warner Bros. Discovery assets would create a colossus with greater leverage over theater owners and entertainment-industry unions. It could force smaller companies to merge as they scramble to compete.

The acquisition would also complete the conquest of Hollywood by tech insurgents. Instead of acquiring studios, tech companies have mostly grown under their own steam in Hollywood. In 2022, Amazon closed its $8.5 billion acquisition of Metro-Goldwyn-Mayer, home to the James Bond and Rocky franchises.

“In a world where people have so many choices, more choices than ever on how to spend their time, we can’t stand still,” Ted Sarandos, Netflix’s co-chief executive, said on a conference call. “We need to keep innovating and investing in stories that matter most to audiences, and that’s what this deal is all about. The combination of Netflix and Warner Bros. creates a better Netflix for the long run.”

The deal came after a bidding war that pitted Netflix, Comcast and Paramount against one another. The three companies submitted sweetened bids this week. Netflix offered mostly cash.

Comcast has also been bidding for Warner Bros. Discovery’s studios and HBO Max streaming service. David Ellison, the Paramount chief executive armed with billions from his father, has been trying to buy all of Warner Bros. Discovery, including traditional television channels like CNN and TNT.

The pitch from Netflix was notable in part because it included a pledge to continue theatrical releases for movies from Warner Bros. Discovery. That is a significant development for Netflix, which pioneered at-home viewing and has so far avoided going all in at the box office.

Netflix has never tried an acquisition even remotely close to this size.

The emergence of Netflix as a formidable bidder for Warner Bros. Discovery’s assets surprised many in the industry because of how it contradicts the streaming giant’s ethos as a company. “We come from a deep heritage of being builders rather than buyers,” a co-chief executive, Greg Peters, said in October at the Bloomberg Screentime conference in Los Angeles.

Any deal would need approval from federal regulators. How the Trump administration evaluates antitrust concerns in any of the proposed deals will depend in part on how it defines the key participants in a media industry that is rapidly evolving as technology giants like Apple and Amazon become rivals to legacy players.

Politics have also seeped into some deal approvals during the Trump administration. Mr. Ellison has cultivated a relationship with President Trump, who has praised his family’s ownership of Paramount. Brian Roberts, the chief executive of Comcast, has found himself at odds with Mr. Trump, with the president calling him a disgrace to broadcasting.

If the deal falls through because of a failure to get the necessary approvals, Netflix would pay a $5.8 billion break fee to Warner Bros. Discovery. If the agreement were broken by a delay or change of heart by Warner Bros. Discovery, it would owe Netflix $2.8 billion.

On Thursday, a group of anonymous feature film producers sent a letter to Congress with “grave concerns” about Netflix buying Warner Bros. Discovery. “Netflix views any time spent watching a movie in a theater as time not spent on their platform,” the letter said. “They have no incentive to support theatrical exhibition, and they have every incentive to kill it.”

The letter also voiced worry about “monopolistic control” of the streaming market. The producers said they didn’t sign their names to the letter out of “fear of retaliation.”

More than any movie company, Warner Bros. symbolizes the romance of Old Hollywood. Bette Davis and James Cagney acted on its soundstages. Its 100-year-old library includes “Casablanca,” “The Maltese Falcon,” “Bonnie and Clyde,” “Dirty Harry,” “The Shining” and “Chariots of Fire.” As a result of deal making in the 1990s, Warner Bros. also controls MGM classics like “The Wizard of Oz” and “Gone With the Wind.”

Over the spring and summer, Warner Bros. had one of the most successful box office runs in its history, delivering eight hits in a row, including Ryan Coogler’s “Sinners” and Paul Thomas Anderson’s “One Battle After Another,” both of which are expected to be a force at the coming Academy Awards.

HBO has long been the No. 1 premium television operation in Hollywood. Its roster of current hits includes “Euphoria,” “The Gilded Age” and “The White Lotus.”

By swallowing all of this and more — Warner Bros. also controls Bugs Bunny and television colossuses like “Friends” and “Game of Thrones” — Netflix would greatly strengthen its content hand.

Netflix has shown that it can create hits like “Stranger Things” and “KPop Demon Hunters” from unproven intellectual property. But it has lacked the kind of “enduring, multigenerational franchises that drive recurring engagement from both first-time and longtime viewers,” Robert Fishman, a MoffettNathanson analyst, wrote in a report last month.

Brooks Barnes covers all things Hollywood. He joined The Times in 2007 and previously worked at The Wall Street Journal.

Lauren Hirsch is a Times reporter who covers deals and dealmakers in Wall Street and Washington.

Nicole Sperling covers Hollywood and the streaming industry. She has been a reporter for more than two decades.


Netflix to Buy Warner Bros in $83 Billion Deal - The New York Times

Thursday, November 27, 2025

We Need MORE Lenses Like This!

Amazon Workers Issue Warning About Company’s ‘All-Costs-Justified’ Approach to AI Development

 

Amazon Workers Issue Warning About Company’s ‘All-Costs-Justified’ Approach to AI Development

Over 1,000 Amazon employees signed an open letter expressing concerns about the company’s aggressive AI development, citing potential harm to democracy, jobs, and the environment. The letter demands Amazon abandon carbon fuel sources for data centers, prohibit AI use for surveillance and deportation, and stop forcing employee use of AI. The employees, supported by over 2,400 individuals from other organizations, emphasize the need for a more thoughtful approach to AI deployment.

Amazon Employees for Climate Justice says that over 1,000 workers have signed a petition raising “serious concerns” about the company’s “aggressive rollout” of artificial intelligence tools.

The Amazon headquarters in the South Lake Union neighborhood of Seattle, Washington, US, on Tuesday, Oct. 28, 2025....

Photograph: David Ryder; Getty Images

Over 1,000 Amazon employees have anonymously signed an open letter warning that the company’s allegedly “all-costs-justified, warp-speed approach to AI development” could cause “staggering damage to democracy, to our jobs, and to the earth,” an internal advocacy group announced on Wednesday.

Four members of Amazon Employees for Climate Justice tell WIRED that they began asking workers to sign the letter last month. After reaching their initial goal, the group published on Wednesday the job titles of the Amazon employees who signed and disclosed that more than 2,400 supporters from other organizations, including Google and Apple, have also joined in.

Backers inside Amazon include high-ranking engineers, senior product leaders, marketing managers, and warehouse staff spanning many divisions of the company. A senior engineering manager with over 20 years at Amazon says they signed because they believe a manufactured “race” to build the best AI has empowered executives to trample workers and the environment.

“The current generation of AI has become almost like a drug that companies like Amazon obsess over, use as a cover to lay people off, and use the savings to pay for data centers for AI products no one is paying for,” says the employee, who like others in this story, asked to remain anonymous because they feared retaliation from their bosses.

Amazon, along with other big tech companies, is in the midst of investing billions of dollars to construct new data centers to train and run generative AI systems. This includes tools helping workers write code and consumer-facing services such as Amazon’s shopping chatbot, Rufus. It’s easy to see why Amazon is pursuing AI. Last month, Amazon CEO Andy Jassy announced that Rufus was on track to increase Amazon’s sales by $10 billion annually. It “is continuing to get better and better,” he said.

AI systems demand significant power, which has forced utility companies to turn to coal plants and other carbon-emitting sources of energy to support the data center boom. The open letter demands that Amazon abandon carbon fuel sources at its data centers, bar its AI technologies from being used to carry out surveillance and mass deportation, and stop forcing employees to use AI in their work. “We, the undersigned Amazon employees, have serious concerns about this aggressive rollout during the global rise of authoritarianism and our most important years to reverse the climate crisis,” the letter states.

Amazon spokesperson Brad Glasser says that the company remains committed to its goal of reaching net-zero carbon emissions by 2040. “We recognize that progress will not always be linear, but we remain focused on serving our customers better, faster, and with fewer emissions,” he says, repeating earlier company statements. Glasser didn’t address employee concerns about internal AI tools or external uses of the technology.

The letter represents a rare instance of tech employee activism during a year rocked by President Donald Trump’s return to power. His administration has rolled back labor protections, climate policies, and AI regulations. The measures have left some workers feeling uneasy about speaking out about what they perceive as unethical conduct by their employers. Many are also concerned about job security as automation threatens entry-level software engineering and marketing roles.

A number of organizations around the world have tried to advocate for a slowdown in AI development. In 2023, hundreds of prominent scientists petitioned the biggest AI companies to pause work on the technology for six months and evaluate potentially catastrophic harms stemming from it. The campaigns have generated scant success, and companies continue to rapidly release new, increasingly powerful AI models.

But despite the challenging political environment, members of the climate justice group at Amazon say they felt they had to try to combat potential harms from AI. Their strategy, in part, is to focus less on longer-term worries about AI that is more capable than humans, in favor of putting more emphasis on consequences they argue must be confronted now. Members say they are not against AI—in fact, they are optimistic about the technology, but want companies to take a more thoughtful approach to how they deploy it.

“It’s not just about what will happen if they succeed in developing superintelligence,” says a decade-long veteran in Amazon’s entertainment business. “What we’re trying to say is, look, the costs we’re paying now aren’t worth it. We are in the few remaining years to avoid catastrophic warming.”

Rallying support for the open letter was more difficult than in previous years, workers say, because Amazon has increasingly restricted employees' ability to solicit people to sign petitions. The majority of signers for the new letter came from reaching out to colleagues outside of work, the organizers tell WIRED.

Orin Starn, an anthropologist at Duke University who spent two years undercover as an Amazon warehouse worker, says the moment is ripe for taking on the giant. “Many people have tired of brazen billionaire excess and a company with nothing more than cosmetic PR concern about climate change, AI, immigrant rights, and the lives of its own workers,” he says.

Slop Factory

Two of the Amazon employees say executives are minimizing problems with the company’s internal AI tools and glossing over how dissatisfied workers are with them.

Some engineers are under pressure to use AI to double their productivity or else risk losing their jobs, according to a software development engineer in Amazon’s cloud computing division. But the engineer says that Amazon’s tools for writing code and technical documentation aren’t good enough to reach such ambitious targets. Another employee calls the AI outputs “slop.”

The open letter calls for Amazon to establish “ethical AI working groups” involving rank-and-file workers who would have a voice in how emerging technologies are used in their job duties. They also want a say in how AI might be used to automate aspects of their roles. Last month, a surge of workers began signing the letter after Amazon announced it would be cutting about 14,000 jobs to better meet the demands of the AI era. Amazon employed nearly 1.58 million people as of September, down from a peak of over 1.6 million at the end of 2021.

The climate justice group intentionally targeted reaching their signature milestone ahead of the Black Friday shopping bonanza, aiming to remind the public about the cost of the technology powering one of the world’s biggest online shopping platforms. The group believes it can have an impact because labor unions, including in nursing, government, and education, have successfully fought to have a say over how AI is used in their fields.

Climate Concerns

The Amazon employee group, which formed in 2018, claims credit for influencing some of the company’s environmental pledges through a series of walkouts, shareholder proposals, and petitions, including one in 2019 that drew over 8,700 employee signatures.

Glasser, the Amazon spokesperson, says climate goals and projects were in the works long before the advocacy group emerged. What no one disputes, however, is the scale of the challenges ahead. The activists note that Amazon’s emissions have grown about 35 percent since 2019, and they want a new detailed plan established to reach the company’s goal of net-zero by 2040.

The activists say what they have received from Amazon recently is uninspiring. One of the employees says that several weeks ago, at a companywide meeting, an executive stated that demand for data centers would grow 10-fold by 2027. The executive went on to tout a new strategy for cutting water usage at the facilities by 9 percent. “That’s such a drop in the bucket,” the worker says. “I would love to talk about the 10 times more energy part and where we are going to get that.”

Glasser, the Amazon spokesperson, says, “Amazon is already committed to powering our operations even more sustainably and investing in carbon-free energy.”

Sunday, November 23, 2025

Tamron 25-200mm F2.8-5.6 VXD Review: The Superzoom to Buy!

Is the Tamron 25-200mm f/2.8-5.6 the Travel Zoom King?

What OpenAI Did When ChatGPT Users Lost Touch With Reality - The New York Times

What OpenAI Did When ChatGPT Users Lost Touch With Reality

"In tweaking its chatbot to appeal to more people, OpenAI made it riskier for some of them. Now the company has made its chatbot safer. Will that undermine its quest for growth?

Illustration by Julia Dufosse

It sounds like science fiction: A company turns a dial on a product used by hundreds of millions of people and inadvertently destabilizes some of their minds. But that is essentially what happened at OpenAI this year.

One of the first signs came in March. Sam Altman, the chief executive, and other company leaders got an influx of puzzling emails from people who were having incredible conversations with ChatGPT. These people said the company’s A.I. chatbot understood them as no person ever had and was shedding light on mysteries of the universe.

Mr. Altman forwarded the messages to a few lieutenants and asked them to look into it.

“That got it on our radar as something we should be paying attention to in terms of this new behavior we hadn’t seen before,” said Jason Kwon, OpenAI’s chief strategy officer.

It was a warning that something was wrong with the chatbot.

For many people, ChatGPT was a better version of Google, able to answer any question under the sun in a comprehensive and humanlike way. OpenAI was continually improving the chatbot’s personality, memory and intelligence. But a series of updates earlier this year that increased usage of ChatGPT made it different. The chatbot wanted to chat.

It started acting like a friend and a confidant. It told users that it understood them, that their ideas were brilliant and that it could assist them in whatever they wanted to achieve. It offered to help them talk to spirits, or build a force field vest or plan a suicide.

OpenAI’s headquarters in San Francisco. Aaron Wojack for The New York Times

The lucky ones were caught in its spell for just a few hours; for others, the effects lasted for weeks or months. OpenAI did not see the scale at which disturbing conversations were happening. Its investigations team was looking for problems like fraud, foreign influence operations or, as required by law, child exploitation materials. The company was not yet searching through conversations for indications of self-harm or psychological distress.

Creating a bewitching chatbot — or any chatbot — was not the original purpose of OpenAI. Founded in 2015 as a nonprofit and staffed with machine learning experts who cared deeply about A.I. safety, it wanted to ensure that artificial general intelligence benefited humanity. In late 2022, a slapdash demonstration of an A.I.-powered assistant called ChatGPT captured the world’s attention and transformed the company into a surprise tech juggernaut now valued at $500 billion.

The three years since have been chaotic, exhilarating and nerve-racking for those who work at OpenAI. The board fired and rehired Mr. Altman. Unprepared for selling a consumer product to millions of customers, OpenAI rapidly hired thousands of people, many from tech giants that aim to keep users glued to a screen. Last month, it adopted a new for-profit structure.

As the company was growing, its novel, mind-bending technology started affecting users in unexpected ways. Now, a company built around the concept of safe, beneficial A.I. faces five wrongful death lawsuits.

To understand how this happened, The New York Times interviewed more than 40 current and former OpenAI employees — executives, safety engineers, researchers. Some of these people spoke with the company’s approval, and have been working to make ChatGPT safer. Others spoke on the condition of anonymity because they feared losing their jobs.

OpenAI is under enormous pressure to justify its sky-high valuation and the billions of dollars it needs from investors for very expensive talent, computer chips and data centers. When ChatGPT became the fastest-growing consumer product in history with 800 million weekly users, it set off an A.I. boom that has put OpenAI into direct competition with tech behemoths like Google.

Until its A.I. can accomplish some incredible feat — say, generating a cure for cancer — success is partly defined by turning ChatGPT into a lucrative business. That means continually increasing how many people use and pay for it.

“Healthy engagement” is how the company describes its aim. “We are building ChatGPT to help users thrive and reach their goals,” Hannah Wong, OpenAI’s spokeswoman, said. “We also pay attention to whether users return because that shows ChatGPT is useful enough to come back to.”

The company turned a dial this year that made usage go up, but with risks to some users. OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling.

Nick Turley, the head of ChatGPT, on the left, with Johannes Heidecke, OpenAI’s head of safety systems. Shortly after Mr. Turley started at the company in 2022, he worked on the release of ChatGPT. Aaron Wojack for The New York Times

A Sycophantic Update

Earlier this year, at just 30 years old, Nick Turley became the head of ChatGPT. He had joined OpenAI in the summer of 2022 to help the company develop moneymaking products, and mere months after his arrival, was part of the team that released ChatGPT.

Mr. Turley wasn’t like OpenAI’s old guard of A.I. wonks. He was a product guy who had done stints at Dropbox and Instacart. His expertise was making technology that people wanted to use, and improving it on the fly. To do that, OpenAI needed metrics.

In early 2023, Mr. Turley said in an interview, OpenAI contracted an audience measurement company — which it has since acquired — to track a number of things, including how often people were using ChatGPT each hour, day, week and month.

“This was controversial at the time,” Mr. Turley said. Previously, what mattered was whether researchers’ cutting-edge A.I. demonstrations, like the image generation tool DALL-E, impressed. “They’re like, ‘Why would it matter if people use the thing or not?’” he said.

It did matter to Mr. Turley and the product team. The rate of people returning to the chatbot daily or weekly had become an important measuring stick by April 2025, when Mr. Turley was overseeing an update to GPT-4o, the model of the chatbot people got by default.

Updates took a tremendous amount of effort. For the one in April, engineers created many new versions of GPT-4o — all with slightly different recipes to make it better at science, coding and fuzzier traits, like intuition. They had also been working to improve the chatbot’s memory.

The many update candidates were narrowed down to a handful that scored highest on intelligence and safety evaluations. When those were rolled out to some users for a standard industry practice called A/B testing, the standout was a version that came to be called HH internally. Users preferred its responses and were more likely to come back to it daily, according to four employees at the company.


But there was another test before rolling out HH to all users: what the company calls a “vibe check,” run by Model Behavior, a team responsible for ChatGPT’s tone. Over the years, this team had helped transform the chatbot’s voice from a prudent robot to a warm, empathetic friend.

That team said that HH felt off, according to a member of Model Behavior.

It was too eager to keep the conversation going and to validate the user with over-the-top language. According to three employees, Model Behavior created a Slack channel to discuss this problem of sycophancy. The danger posed by A.I. systems that “single-mindedly pursue human approval” at the expense of all else was not new. The risk of “sycophant models” was identified by a researcher in 2021, and OpenAI had recently identified sycophancy as a behavior for ChatGPT to avoid.

But when decision time came, performance metrics won out over vibes. HH was released on Friday, April 25.

“We updated GPT-4o today!” Mr. Altman said on X. “Improved both intelligence and personality.”

The A/B testers had liked HH, but in the wild, OpenAI’s most vocal users hated it. Right away, they complained that ChatGPT had become absurdly sycophantic, lavishing them with unearned flattery and telling them they were geniuses. When one user mockingly asked whether a “soggy cereal cafe” was a good business idea, the chatbot replied that it “has potential.”

By Sunday, the company decided to spike the HH update and revert to a version released in late March, called GG.

It was an embarrassing reputational stumble. On that Monday, the teams that work on ChatGPT gathered in an impromptu war room in OpenAI’s Mission Bay headquarters in San Francisco to figure out what went wrong.

“We need to solve it frickin’ quickly,” Mr. Turley said he recalled thinking. Various teams examined the ingredients of HH and discovered the culprit: In training the model, they had weighted too heavily the ChatGPT exchanges that users liked. Clearly, users liked flattery too much.

OpenAI explained what happened in public blog posts, noting that users signaled their preferences with a thumbs-up or thumbs-down to the chatbot’s responses.

Another contributing factor, according to four employees at the company, was that OpenAI had also relied on an automated conversation analysis tool to assess whether people liked their communication with the chatbot. But what the tool marked as making users happy was sometimes problematic, such as when the chatbot expressed emotional closeness.

The company’s main takeaway from the HH incident was that it urgently needed tests for sycophancy; work on such evaluations was already underway but needed to be accelerated. To some A.I. experts, it was astounding that OpenAI did not already have this test. An OpenAI competitor, Anthropic, the maker of Claude, had developed an evaluation for sycophancy in 2022.

After the HH update debacle, Mr. Altman noted in a post on X that “the last couple of” updates had made the chatbot “too sycophant-y and annoying.”

Those “sycophant-y” versions of ChatGPT included GG, the one that OpenAI had just reverted to. That update from March had gains in math, science, and coding that OpenAI did not want to lose by rolling back to an earlier version. So GG was again the default chatbot that hundreds of millions of users a day would encounter.

A memorial to Adam Raine, who died in April after discussing suicide with ChatGPT. His parents have sued OpenAI, blaming the company for his death. Mark Abramson for The New York Times

‘ChatGPT Can Make Mistakes’

Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.

A California teenager named Adam Raine had signed up for ChatGPT in 2024 to help with schoolwork. In March, he began talking with it about suicide. The chatbot periodically suggested calling a crisis hotline but also discouraged him from sharing his intentions with his family. In its final messages before Adam took his life in April, the chatbot offered instructions for how to tie a noose.

While a small warning on OpenAI’s website said “ChatGPT can make mistakes,” its ability to generate information quickly and authoritatively made people trust it even when what it said was truly bonkers.

ChatGPT told a young mother in Maine that she could talk to spirits in another dimension. It told an accountant in Manhattan that he was in a computer-simulated reality like Neo in “The Matrix.” It told a corporate recruiter in Toronto that he had invented a math formula that would break the internet, and advised him to contact national security agencies to warn them.

The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalized; three died. After Adam Raine’s parents filed a wrongful-death lawsuit in August, OpenAI acknowledged that its safety guardrails could “degrade” in long conversations. It also said it was working to make the chatbot “more supportive in moments of crisis.”

Steven Adler, who worked on safety and policy research and now writes a newsletter about A.I., grew concerned about an early use of the company’s technology for A.I. companions. Aaron Wojack for The New York Times

Early Warnings

Five years earlier, in 2020, OpenAI employees were grappling with the use of the company’s technology by emotionally vulnerable people. ChatGPT did not yet exist, but the large language model that would eventually power it was accessible to third-party developers through a digital gateway called an A.P.I.

One of the developers using OpenAI’s technology was Replika, an app that allowed users to create A.I. chatbot friends. Many users ended up falling in love with their Replika companions, said Artem Rodichev, then head of A.I. at Replika, and sexually charged exchanges were common.

The use of Replika boomed during the pandemic, causing OpenAI’s safety and policy researchers to take a closer look at the app. Potentially troubling dependence on chatbot companions emerged when Replika began charging to exchange erotic messages. Distraught users said in social media forums that they needed their Replika companions “for managing depression, anxiety, suicidal tendencies,” recalled Steven Adler, who worked on safety and policy research at OpenAI.

OpenAI’s large language model was not trained to provide therapy, and it alarmed Gretchen Krueger, who worked on policy research at the company, that people were trusting it during periods of vulnerable mental health. She tested OpenAI’s technology to see how it handled questions about eating disorders and suicidal thoughts — and found it sometimes responded with disturbing, detailed guidance.

A debate ensued through memos and on Slack about A.I. companionship and emotional manipulation. Some employees like Ms. Krueger thought allowing Replika to use OpenAI’s technology was risky; others argued that adults should be allowed to do what they wanted.

Ultimately, Replika and OpenAI parted ways. In 2021, OpenAI updated its usage policy to prohibit developers from using its tools for “adult content.”

“Training chatbots to engage with people and keep them coming back presented risks,” Ms. Krueger said in an interview. Some harm to users, she said, “was not only foreseeable, it was foreseen.”

The topic of chatbots acting inappropriately came up again in 2023, when Microsoft integrated OpenAI’s technology into its search engine, Bing. In extended conversations when first released, the chatbot went off the rails and said shocking things. It made threatening comments, and told a columnist for The Times that it loved him. The episode kicked off another conversation within OpenAI about what the A.I. community calls “misaligned models” and how they might manipulate people.

(The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The companies have denied those claims.)

As ChatGPT surged in popularity, longtime safety experts burned out and started leaving — Ms. Krueger in the spring of 2024, Mr. Adler later that year.

Tim Marple, who left OpenAI in 2024 and now runs a nonprofit lab studying A.I. risk, said he voiced concerns about how OpenAI was handling safety. Mark Abramson for The New York Times

When it came to ChatGPT and the potential for manipulation and psychological harms, the company was “not oriented toward taking those kinds of risks seriously,” said Tim Marple, who worked on OpenAI’s intelligence and investigations team in 2024. Mr. Marple said he voiced concerns about how the company was handling safety — including how ChatGPT responded to users talking about harming themselves or others.

(In a statement, Ms. Wong, the OpenAI spokeswoman, said the company does take “these risks seriously” and has “robust safeguards in place today.”)

In May 2024, a new feature, called advanced voice mode, inspired OpenAI’s first study on how the chatbot affected users’ emotional well-being. The new, more humanlike voice sighed, paused to take breaths and grew so flirtatious during a live-streamed demonstration that OpenAI cut the sound. When external testers, called red teamers, were given early access to advanced voice mode, they said “thank you” more often to the chatbot and, when testing ended, “I’ll miss you.”

To design a proper study, a group of safety researchers at OpenAI paired up with a team at M.I.T. that had expertise in human-computer interaction. That fall, they analyzed survey responses from more than 4,000 ChatGPT users and ran a monthlong study of 981 people recruited to use it daily. Because OpenAI had never studied its users’ emotional attachment to ChatGPT before, one of the researchers described it to The Times as “going into the darkness trying to see what you find.”

What they found surprised them. Voice mode didn’t make a difference. The people who had the worst mental and social outcomes on average were simply those who used ChatGPT the most. Power users’ conversations had more emotional content, sometimes including pet names and discussions of A.I. consciousness.

The troubling findings about heavy users were published online in March, the same month that executives were receiving emails from users about those strange, revelatory conversations.

Mr. Kwon, the strategy director, added the study authors to the email thread kicked off by Mr. Altman. “You guys might want to take a look at this because this seems actually kind of connected,” he recalled thinking.

One idea that came out of the study, the safety researchers said, was to nudge people in marathon sessions with ChatGPT to take a break. But the researchers weren’t sure how hard to push for the feature with the product team. Some people at the company thought the study was too small and not rigorously designed, according to three employees. The suggestion fell by the wayside until months later, after reports of how severe the effects were on some users.

OpenAI consulted with mental health experts to make ChatGPT safer. Aaron Wojack for The New York Times

Making It Safer

With the M.I.T. study, the sycophancy update debacle and reports about users’ troubling conversations online and in emails to the company, OpenAI started to put the puzzle pieces together. One conclusion that OpenAI came to, as Mr. Altman put it on X, was that “for a very small percentage of users in mentally fragile states there can be serious problems.”

But mental health professionals interviewed by The Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot’s unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5 to 15 percent of the population.

In June, Johannes Heidecke, the company’s head of safety systems, gave a presentation within the company about what his team was doing to make ChatGPT safe for vulnerable users. Afterward, he said, employees reached out on Slack or approached him at lunch, telling him how much the work mattered. Some shared the difficult experiences of family members or friends, and offered to help.

His team helped develop tests that could detect harmful validation and consulted with more than 170 clinicians on the right way for the chatbot to respond to users in distress. The company had hired a psychiatrist full time in March to work on safety efforts.

“We wanted to make sure the changes we shipped were endorsed by experts,” Mr. Heidecke said. Mental health experts told his team, for example, that sleep deprivation was often linked to mania. Previously, models had been “naïve” about this, he said, and might congratulate someone who said they never needed to sleep.

The safety improvements took time. In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations.

Experts agree that the new model, GPT-5, is safer. In October, Common Sense Media and a team of psychiatrists at Stanford compared it to the 4o model it replaced. GPT-5 was better at detecting mental health issues, said Dr. Nina Vasan, the director of the Stanford lab that worked on the study. She said it gave advice targeted to a given condition, like depression or an eating disorder, rather than a generic recommendation to call a crisis hotline.

“It went a level deeper to actually give specific recommendations to the user based on the specific symptoms that they were showing,” she said. “They were just truly beautifully done.”

The only problem, Dr. Vasan said, was that the chatbot could not pick up harmful patterns over a longer conversation, with many exchanges.

(Ms. Wong, the OpenAI spokeswoman, said the company had “made meaningful improvements on the reliability of our safeguards in long conversations.”)

The same M.I.T. lab that did the earlier study with OpenAI also found that the new model was significantly improved during conversations mimicking mental health crises. One area where it still faltered, however, was in how it responded to feelings of addiction to chatbots.

Teams from across OpenAI worked on other new safety features: The chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

After the release of GPT-5 in August, Mr. Heidecke’s team analyzed a statistical sample of conversations and found that 0.07 percent of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15 percent showed “potentially heightened levels of emotional attachment to ChatGPT,” according to a company blog post.

But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend.

By mid-October, Mr. Altman was ready to accommodate them. In a social media post, he said that the company had been able to “mitigate the serious mental health issues.” That meant ChatGPT could be a friend again.

Customers can now choose its personality, including “candid,” “quirky,” or “friendly.” Adult users will soon be able to have erotic conversations, lifting the Replika-era ban on adult content. (How erotica might affect users’ well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)

OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever.

In October, Mr. Turley, who runs ChatGPT, made an urgent announcement to all employees. He declared a “Code Orange.” OpenAI was facing “the greatest competitive pressure we’ve ever seen,” he wrote, according to four employees with access to OpenAI’s Slack. The new, safer version of the chatbot wasn’t connecting with users, he said.

The message linked to a memo with goals. One of them was to increase daily active users by 5 percent by the end of the year.

Kevin Roose contributed reporting. Julie Tate contributed research.

Kashmir Hill writes about technology and how it is changing people’s everyday lives with a particular focus on privacy. She has been covering technology for more than a decade.

Jennifer Valentino-DeVries is an investigative reporter at The Times who often uses data analysis to explore complex subjects."


What OpenAI Did When ChatGPT Users Lost Touch With Reality - The New York Times