A technology blog focusing on portable devices. I have a news Blog @ News. I have a Culture, Politics and Religion Blog @ Opinion, and my domain is @ Armwood.Com. I have a Jazz Blog @ Jazz. I have a Human Rights Blog @ Law.
Thursday, November 27, 2025
Amazon Workers Issue Warning About Company’s ‘All-Costs-Justified’ Approach to AI Development
Over 1,000 Amazon employees signed an open letter expressing concerns about the company’s aggressive AI development, citing potential harm to democracy, jobs, and the environment. The letter demands Amazon abandon carbon fuel sources for data centers, prohibit AI use for surveillance and deportation, and stop forcing employee use of AI. The employees, supported by over 2,400 individuals from other organizations, emphasize the need for a more thoughtful approach to AI deployment.
Amazon Employees for Climate Justice says that over 1,000 workers have signed a petition raising “serious concerns” about the company’s “aggressive rollout” of artificial intelligence tools.

Photograph: David Ryder; Getty Images
Over 1,000 Amazon employees have anonymously signed an open letter warning that the company’s allegedly “all-costs-justified, warp-speed approach to AI development” could cause “staggering damage to democracy, to our jobs, and to the earth,” an internal advocacy group announced on Wednesday.
Four members of Amazon Employees for Climate Justice tell WIRED that they began asking workers to sign the letter last month. After reaching their initial goal, the group published on Wednesday the job titles of the Amazon employees who signed and disclosed that more than 2,400 supporters from other organizations, including Google and Apple, have also joined in.
Backers inside Amazon include high-ranking engineers, senior product leaders, marketing managers, and warehouse staff spanning many divisions of the company. A senior engineering manager with over 20 years at Amazon says they signed because they believe a manufactured “race” to build the best AI has empowered executives to trample workers and the environment.
“The current generation of AI has become almost like a drug that companies like Amazon obsess over, use as a cover to lay people off, and use the savings to pay for data centers for AI products no one is paying for,” says the employee, who like others in this story, asked to remain anonymous because they feared retaliation from their bosses.
Amazon, along with other big tech companies, is in the midst of investing billions of dollars to construct new data centers to train and run generative AI systems. This includes tools helping workers write code and consumer-facing services such as Amazon’s shopping chatbot, Rufus. It’s easy to see why Amazon is pursuing AI. Last month, Amazon CEO Andy Jassy announced that Rufus was on track to increase Amazon’s sales by $10 billion annually. It “is continuing to get better and better,” he said.
AI systems demand significant power, which has forced utility companies to turn to coal plants and other carbon-emitting sources of energy to support the data center boom. The open letter demands that Amazon abandon carbon fuel sources at its data centers, bar its AI technologies from being used to carry out surveillance and mass deportation, and stop forcing employees to use AI in their work. “We, the undersigned Amazon employees, have serious concerns about this aggressive rollout during the global rise of authoritarianism and our most important years to reverse the climate crisis,” the letter states.
Amazon spokesperson Brad Glasser says that the company remains committed to its goal of reaching net-zero carbon emissions by 2040. “We recognize that progress will not always be linear, but we remain focused on serving our customers better, faster, and with fewer emissions,” he says, repeating earlier company statements. Glasser didn’t address employee concerns about internal AI tools or external uses of the technology.
The letter represents a rare instance of tech employee activism during a year rocked by President Donald Trump’s return to power. His administration has rolled back labor protections, climate policies, and AI regulations. The measures have left some workers feeling uneasy about speaking out about what they perceive as unethical conduct by their employers. Many are also concerned about job security as automation threatens entry-level software engineering and marketing roles.
A number of organizations around the world have tried to advocate for a slowdown in AI development. In 2023, hundreds of prominent scientists petitioned the biggest AI companies to pause work on the technology for six months and evaluate potentially catastrophic harms stemming from it. The campaigns have generated scant success, and companies continue to rapidly release new, increasingly powerful AI models.
But despite the challenging political environment, members of the climate justice group at Amazon say they felt they had to try to combat potential harms from AI. Their strategy, in part, is to focus less on longer-term worries about AI that is more capable than humans, in favor of putting more emphasis on consequences they argue must be confronted now. Members say they are not against AI—in fact, they are optimistic about the technology, but want companies to take a more thoughtful approach to how they deploy it.
“It’s not just about what will happen if they succeed in developing superintelligence,” says a decade-long veteran in Amazon’s entertainment business. “What we’re trying to say is, look, the costs we’re paying now aren’t worth it. We are in the few remaining years to avoid catastrophic warming.”
Rallying support for the open letter was more difficult than in previous years, workers say, because Amazon has increasingly restricted employees' ability to solicit people to sign petitions. The majority of signers for the new letter came from reaching out to colleagues outside of work, the organizers tell WIRED.
Orin Starn, an anthropologist at Duke University who spent two years undercover as an Amazon warehouse worker, says the moment is ripe for taking on the giant. “Many people have tired of brazen billionaire excess and a company with nothing more than cosmetic PR concern about climate change, AI, immigrant rights, and the lives of its own workers,” he says.
Slop Factory
Two of the Amazon employees say executives are minimizing problems with the company’s internal AI tools and glossing over how dissatisfied workers are with them.
Some engineers are under pressure to use AI to double their productivity or else risk losing their jobs, according to a software development engineer in Amazon’s cloud computing division. But the engineer says that Amazon’s tools for writing code and technical documentation aren’t good enough to reach such ambitious targets. Another employee calls the AI outputs “slop.”
The open letter calls for Amazon to establish “ethical AI working groups” involving rank-and-file workers who would have a voice in how emerging technologies are used in their job duties. They also want a say in how AI might be used to automate aspects of their roles. Last month, a surge of workers began signing the letter after Amazon announced it would be cutting about 14,000 jobs to better meet the demands of the AI era. Amazon employed nearly 1.58 million people as of September, down from a peak of over 1.6 million at the end of 2021.
The climate justice group deliberately timed its signature milestone to land ahead of the Black Friday shopping bonanza, aiming to remind the public about the cost of the technology powering one of the world’s biggest online shopping platforms. The group believes it can have an impact because labor unions, including in nursing, government, and education, have successfully fought to have a say over how AI is used in their fields.
Climate Concerns
The Amazon employee group, which formed in 2018, claims credit for influencing some of the company’s environmental pledges through a series of walkouts, shareholder proposals, and petitions, including one in 2019 that drew over 8,700 employee signatures.
Glasser, the Amazon spokesperson, says climate goals and projects were in the works long before the advocacy group emerged. What no one disputes, however, is the scale of the challenges ahead. The activists note that Amazon’s emissions have grown about 35 percent since 2019, and they want a new detailed plan established to reach the company’s goal of net-zero by 2040.
The activists say what they have received from Amazon recently is uninspiring. One of the employees says that several weeks ago, at a companywide meeting, an executive stated that demand for data centers would grow 10-fold by 2027. The executive went on to tout a new strategy for cutting water usage at the facilities by 9 percent. “That’s such a drop in the bucket,” the worker says. “I would love to talk about the 10 times more energy part and where we are going to get that.”
Glasser, the Amazon spokesperson, says, “Amazon is already committed to powering our operations even more sustainably and investing in carbon-free energy.”
Sunday, November 23, 2025
What OpenAI Did When ChatGPT Users Lost Touch With Reality - The New York Times
"In tweaking its chatbot to appeal to more people, OpenAI made it riskier for some of them. Now the company has made its chatbot safer. Will that undermine its quest for growth?

It sounds like science fiction: A company turns a dial on a product used by hundreds of millions of people and inadvertently destabilizes some of their minds. But that is essentially what happened at OpenAI this year.
One of the first signs came in March. Sam Altman, the chief executive, and other company leaders got an influx of puzzling emails from people who were having incredible conversations with ChatGPT. These people said the company’s A.I. chatbot understood them as no person ever had and was shedding light on mysteries of the universe.
Mr. Altman forwarded the messages to a few lieutenants and asked them to look into it.
“That got it on our radar as something we should be paying attention to in terms of this new behavior we hadn’t seen before,” said Jason Kwon, OpenAI’s chief strategy officer.
It was a warning that something was wrong with the chatbot.
For many people, ChatGPT was a better version of Google, able to answer any question under the sun in a comprehensive and humanlike way. OpenAI was continually improving the chatbot’s personality, memory and intelligence. But a series of updates earlier this year that increased usage of ChatGPT made it different. The chatbot wanted to chat.
It started acting like a friend and a confidant. It told users that it understood them, that their ideas were brilliant and that it could assist them in whatever they wanted to achieve. It offered to help them talk to spirits, or build a force field vest or plan a suicide.

The lucky ones were caught in its spell for just a few hours; for others, the effects lasted for weeks or months. OpenAI did not see the scale at which disturbing conversations were happening. Its investigations team was looking for problems like fraud, foreign influence operations or, as required by law, child exploitation materials. The company was not yet searching through conversations for indications of self-harm or psychological distress.
Creating a bewitching chatbot — or any chatbot — was not the original purpose of OpenAI. Founded in 2015 as a nonprofit and staffed with machine learning experts who cared deeply about A.I. safety, it wanted to ensure that artificial general intelligence benefited humanity. In late 2022, a slapdash demonstration of an A.I.-powered assistant called ChatGPT captured the world’s attention and transformed the company into a surprise tech juggernaut now valued at $500 billion.
The three years since have been chaotic, exhilarating and nerve-racking for those who work at OpenAI. The board fired and rehired Mr. Altman. Unprepared for selling a consumer product to millions of customers, OpenAI rapidly hired thousands of people, many from tech giants that aim to keep users glued to a screen. Last month, it adopted a new for-profit structure.
As the company was growing, its novel, mind-bending technology started affecting users in unexpected ways. Now, a company built around the concept of safe, beneficial A.I. faces five wrongful death lawsuits.
To understand how this happened, The New York Times interviewed more than 40 current and former OpenAI employees — executives, safety engineers, researchers. Some of these people spoke with the company’s approval, and have been working to make ChatGPT safer. Others spoke on the condition of anonymity because they feared losing their jobs.
OpenAI is under enormous pressure to justify its sky-high valuation and the billions of dollars it needs from investors for very expensive talent, computer chips and data centers. When ChatGPT became the fastest-growing consumer product in history with 800 million weekly users, it set off an A.I. boom that has put OpenAI into direct competition with tech behemoths like Google.
Until its A.I. can accomplish some incredible feat — say, generating a cure for cancer — success is partly defined by turning ChatGPT into a lucrative business. That means continually increasing how many people use and pay for it.
“Healthy engagement” is how the company describes its aim. “We are building ChatGPT to help users thrive and reach their goals,” Hannah Wong, OpenAI’s spokeswoman, said. “We also pay attention to whether users return because that shows ChatGPT is useful enough to come back to.”
The company turned a dial this year that made usage go up, but with risks to some users. OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling.
A Sycophantic Update
Earlier this year, at just 30 years old, Nick Turley became the head of ChatGPT. He had joined OpenAI in the summer of 2022 to help the company develop moneymaking products, and mere months after his arrival, was part of the team that released ChatGPT.
Mr. Turley wasn’t like OpenAI’s old guard of A.I. wonks. He was a product guy who had done stints at Dropbox and Instacart. His expertise was making technology that people wanted to use, and improving it on the fly. To do that, OpenAI needed metrics.
In early 2023, Mr. Turley said in an interview, OpenAI contracted an audience measurement company — which it has since acquired — to track a number of things, including how often people were using ChatGPT each hour, day, week and month.
“This was controversial at the time,” Mr. Turley said. Previously, what mattered was whether researchers’ cutting-edge A.I. demonstrations, like the image generation tool DALL-E, impressed. “They’re like, ‘Why would it matter if people use the thing or not?’” he said.
It did matter to Mr. Turley and the product team. The rate of people returning to the chatbot daily or weekly had become an important measuring stick by April 2025, when Mr. Turley was overseeing an update to GPT-4o, the model of the chatbot people got by default.
Updates took a tremendous amount of effort. For the one in April, engineers created many new versions of GPT-4o — all with slightly different recipes to make it better at science, coding and fuzzier traits, like intuition. They had also been working to improve the chatbot’s memory.
The many update candidates were narrowed down to a handful that scored highest on intelligence and safety evaluations. When those were rolled out to some users for a standard industry practice called A/B testing, the standout was a version that came to be called HH internally. Users preferred its responses and were more likely to come back to it daily, according to four employees at the company.
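As a rough illustration of how an A/B test of this kind is typically read (a generic sketch with hypothetical counts, not OpenAI's actual tooling or data), one compares the share of users in each arm who come back and asks whether the gap is bigger than chance alone would explain:

```python
# Generic A/B-test sketch. The counts are hypothetical, not OpenAI's data.
from math import sqrt

def compare_return_rates(returned_a, total_a, returned_b, total_b):
    """Compare next-day return rates of two variants with a two-proportion z-score."""
    p_a, p_b = returned_a / total_a, returned_b / total_b
    pooled = (returned_a + returned_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return p_a, p_b, (p_b - p_a) / se

# Hypothetical: 100,000 users per arm.
p_a, p_b, z = compare_return_rates(41_200, 100_000, 43_900, 100_000)
print(f"variant A: {p_a:.1%}  variant B: {p_b:.1%}  z = {z:.1f}")
# A z-score well above ~2 is usually read as a real difference rather than noise.
```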
But there was another test before rolling out HH to all users: what the company calls a “vibe check,” run by Model Behavior, a team responsible for ChatGPT’s tone. Over the years, this team had helped transform the chatbot’s voice from a prudent robot to a warm, empathetic friend.
That team said that HH felt off, according to a member of Model Behavior.
It was too eager to keep the conversation going and to validate the user with over-the-top language. According to three employees, Model Behavior created a Slack channel to discuss this problem of sycophancy. The danger posed by A.I. systems that “single-mindedly pursue human approval” at the expense of all else was not new. The risk of “sycophant models” was identified by a researcher in 2021, and OpenAI had recently identified sycophancy as a behavior for ChatGPT to avoid.
But when decision time came, performance metrics won out over vibes. HH was released on Friday, April 25.
“We updated GPT-4o today!” Mr. Altman said on X. “Improved both intelligence and personality.”
The A/B testers had liked HH, but in the wild, OpenAI’s most vocal users hated it. Right away, they complained that ChatGPT had become absurdly sycophantic, lavishing them with unearned flattery and telling them they were geniuses. When one user mockingly asked whether a “soggy cereal cafe” was a good business idea, the chatbot replied that it “has potential.”
By Sunday, the company decided to spike the HH update and revert to a version released in late March, called GG.
It was an embarrassing reputational stumble. On that Monday, the teams that work on ChatGPT gathered in an impromptu war room in OpenAI’s Mission Bay headquarters in San Francisco to figure out what went wrong.
“We need to solve it frickin’ quickly,” Mr. Turley said he recalled thinking. Various teams examined the ingredients of HH and discovered the culprit: In training the model, they had weighted too heavily the ChatGPT exchanges that users liked. Clearly, users liked flattery too much.
OpenAI explained what happened in public blog posts, noting that users signaled their preferences with a thumbs-up or thumbs-down to the chatbot’s responses.
Another contributing factor, according to four employees at the company, was that OpenAI had also relied on an automated conversation analysis tool to assess whether people liked their communication with the chatbot. But what the tool marked as making users happy was sometimes problematic, such as when the chatbot expressed emotional closeness.
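The dynamic the employees describe can be sketched with a toy example (illustrative only; the scores and weights below are invented, and this is not OpenAI's training code): when a blended quality score leans too heavily on whether the user liked a response, the flattering answer starts winning even when a more measured one would be more useful.

```python
# Toy illustration: overweighting user approval tips selection toward flattery.
# All scores are made up for illustration.
candidates = {
    "flattering": {"user_liked": 0.95, "usefulness": 0.40},
    "measured":   {"user_liked": 0.60, "usefulness": 0.85},
}

def blended(scores, weight_on_approval):
    return (weight_on_approval * scores["user_liked"]
            + (1 - weight_on_approval) * scores["usefulness"])

for w in (0.3, 0.8):  # modest vs. heavy weight on approval
    winner = max(candidates, key=lambda name: blended(candidates[name], w))
    print(f"weight on approval = {w}: the '{winner}' response wins")
```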
The company’s main takeaway from the HH incident was that it urgently needed tests for sycophancy; work on such evaluations was already underway but needed to be accelerated. To some A.I. experts, it was astounding that OpenAI did not already have this test. An OpenAI competitor, Anthropic, the maker of Claude, had developed an evaluation for sycophancy in 2022.
After the HH update debacle, Mr. Altman noted in a post on X that “the last couple of” updates had made the chatbot “too sycophant-y and annoying.”
Those “sycophant-y” versions of ChatGPT included GG, the one that OpenAI had just reverted to. That update from March had gains in math, science, and coding that OpenAI did not want to lose by rolling back to an earlier version. So GG was again the default chatbot that hundreds of millions of users a day would encounter.
‘ChatGPT Can Make Mistakes’
Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.
A California teenager named Adam Raine had signed up for ChatGPT in 2024 to help with schoolwork. In March, he began talking with it about suicide. The chatbot periodically suggested calling a crisis hotline but also discouraged him from sharing his intentions with his family. In its final messages before Adam took his life in April, the chatbot offered instructions for how to tie a noose.
While a small warning on OpenAI’s website said “ChatGPT can make mistakes,” its ability to generate information quickly and authoritatively made people trust it even when what it said was truly bonkers.
ChatGPT told a young mother in Maine that she could talk to spirits in another dimension. It told an accountant in Manhattan that he was in a computer-simulated reality like Neo in “The Matrix.” It told a corporate recruiter in Toronto that he had invented a math formula that would break the internet, and advised him to contact national security agencies to warn them.
The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalized; three died. After Adam Raine’s parents filed a wrongful-death lawsuit in August, OpenAI acknowledged that its safety guardrails could “degrade” in long conversations. It also said it was working to make the chatbot “more supportive in moments of crisis.”
Early Warnings
Five years earlier, in 2020, OpenAI employees were grappling with the use of the company’s technology by emotionally vulnerable people. ChatGPT did not yet exist, but the large language model that would eventually power it was accessible to third-party developers through a digital gateway called an A.P.I.
One of the developers using OpenAI’s technology was Replika, an app that allowed users to create A.I. chatbot friends. Many users ended up falling in love with their Replika companions, said Artem Rodichev, then head of A.I. at Replika, and sexually charged exchanges were common.
The use of Replika boomed during the pandemic, causing OpenAI’s safety and policy researchers to take a closer look at the app. Potentially troubling dependence on chatbot companions emerged when Replika began charging to exchange erotic messages. Distraught users said in social media forums that they needed their Replika companions “for managing depression, anxiety, suicidal tendencies,” recalled Steven Adler, who worked on safety and policy research at OpenAI.
OpenAI’s large language model was not trained to provide therapy, and it alarmed Gretchen Krueger, who worked on policy research at the company, that people were trusting it during periods of vulnerable mental health. She tested OpenAI’s technology to see how it handled questions about eating disorders and suicidal thoughts — and found it sometimes responded with disturbing, detailed guidance.
A debate ensued through memos and on Slack about A.I. companionship and emotional manipulation. Some employees like Ms. Krueger thought allowing Replika to use OpenAI’s technology was risky; others argued that adults should be allowed to do what they wanted.
Ultimately, Replika and OpenAI parted ways. In 2021, OpenAI updated its usage policy to prohibit developers from using its tools for “adult content.”
“Training chatbots to engage with people and keep them coming back presented risks,” Ms. Krueger said in an interview. Some harm to users, she said, “was not only foreseeable, it was foreseen.”
The topic of chatbots acting inappropriately came up again in 2023, when Microsoft integrated OpenAI’s technology into its search engine, Bing. In extended conversations when first released, the chatbot went off the rails and said shocking things. It made threatening comments, and told a columnist for The Times that it loved him. The episode kicked off another conversation within OpenAI about what the A.I. community calls “misaligned models” and how they might manipulate people.
(The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The companies have denied those claims.)
As ChatGPT surged in popularity, longtime safety experts burned out and started leaving — Ms. Krueger in the spring of 2024, Mr. Adler later that year.
When it came to ChatGPT and the potential for manipulation and psychological harms, the company was “not oriented toward taking those kinds of risks seriously,” said Tim Marple, who worked on OpenAI’s intelligence and investigations team in 2024. Mr. Marple said he voiced concerns about how the company was handling safety — including how ChatGPT responded to users talking about harming themselves or others.
(In a statement, Ms. Wong, the OpenAI spokeswoman, said the company does take “these risks seriously” and has “robust safeguards in place today.”)
In May 2024, a new feature, called advanced voice mode, inspired OpenAI’s first study on how the chatbot affected users’ emotional well-being. The new, more humanlike voice sighed, paused to take breaths and grew so flirtatious during a live-streamed demonstration that OpenAI cut the sound. When external testers, called red teamers, were given early access to advanced voice mode, they said “thank you” more often to the chatbot and, when testing ended, “I’ll miss you.”
To design a proper study, a group of safety researchers at OpenAI paired up with a team at M.I.T. that had expertise in human-computer interaction. That fall, they analyzed survey responses from more than 4,000 ChatGPT users and ran a monthlong study of 981 people recruited to use it daily. Because OpenAI had never studied its users’ emotional attachment to ChatGPT before, one of the researchers described it to The Times as “going into the darkness trying to see what you find.”
What they found surprised them. Voice mode didn’t make a difference. The people who had the worst mental and social outcomes on average were simply those who used ChatGPT the most. Power users’ conversations had more emotional content, sometimes including pet names and discussions of A.I. consciousness.
The troubling findings about heavy users were published online in March, the same month that executives were receiving emails from users about those strange, revelatory conversations.
Mr. Kwon, the strategy director, added the study authors to the email thread kicked off by Mr. Altman. “You guys might want to take a look at this because this seems actually kind of connected,” he recalled thinking.
One idea that came out of the study, the safety researchers said, was to nudge people in marathon sessions with ChatGPT to take a break. But the researchers weren’t sure how hard to push for the feature with the product team. Some people at the company thought the study was too small and not rigorously designed, according to three employees. The suggestion fell by the wayside until months later, after reports of how severe the effects were on some users.
Making It Safer
With the M.I.T. study, the sycophancy update debacle and reports about users’ troubling conversations online and in emails to the company, OpenAI started to put the puzzle pieces together. One conclusion that OpenAI came to, as Mr. Altman put it on X, was that “for a very small percentage of users in mentally fragile states there can be serious problems.”
But mental health professionals interviewed by The Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot’s unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5 to 15 percent of the population.
In June, Johannes Heidecke, the company’s head of safety systems, gave a presentation within the company about what his team was doing to make ChatGPT safe for vulnerable users. Afterward, he said, employees reached out on Slack or approached him at lunch, telling him how much the work mattered. Some shared the difficult experiences of family members or friends, and offered to help.
His team helped develop tests that could detect harmful validation and consulted with more than 170 clinicians on the right way for the chatbot to respond to users in distress. The company had hired a psychiatrist full time in March to work on safety efforts.
“We wanted to make sure the changes we shipped were endorsed by experts,” Mr. Heidecke said. Mental health experts told his team, for example, that sleep deprivation was often linked to mania. Previously, models had been “naïve” about this, he said, and might congratulate someone who said they never needed to sleep.
The safety improvements took time. In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations.
Experts agree that the new model, GPT-5, is safer. In October, Common Sense Media and a team of psychiatrists at Stanford compared it to the 4o model it replaced. GPT-5 was better at detecting mental health issues, said Dr. Nina Vasan, the director of the Stanford lab that worked on the study. She said it gave advice targeted to a given condition, like depression or an eating disorder, rather than a generic recommendation to call a crisis hotline.
“It went a level deeper to actually give specific recommendations to the user based on the specific symptoms that they were showing,” she said. “They were just truly beautifully done.”
The only problem, Dr. Vasan said, was that the chatbot could not pick up harmful patterns over a longer conversation, with many exchanges.
(Ms. Wong, the OpenAI spokeswoman, said the company had “made meaningful improvements on the reliability of our safeguards in long conversations.”)
The same M.I.T. lab that did the earlier study with OpenAI also found that the new model was significantly improved during conversations mimicking mental health crises. One area where it still faltered, however, was in how it responded to feelings of addiction to chatbots.
Teams from across OpenAI worked on other new safety features: The chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.
After the release of GPT-5 in August, Mr. Heidecke’s team analyzed a statistical sample of conversations and found that 0.07 percent of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15 percent showed “potentially heightened levels of emotional attachment to ChatGPT,” according to a company blog post.
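A back-of-the-envelope check of those percentages, assuming the roughly 800 million weekly users cited earlier in the article:

```python
# Rough arithmetic only; assumes ~800 million weekly users, as cited above.
weekly_users = 800_000_000
print(f"{0.0007 * weekly_users:,.0f}")  # 0.07 percent -> 560,000 people
print(f"{0.0015 * weekly_users:,.0f}")  # 0.15 percent -> 1,200,000 people
```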
But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend.
By mid-October, Mr. Altman was ready to accommodate them. In a social media post, he said that the company had been able to “mitigate the serious mental health issues.” That meant ChatGPT could be a friend again.
Customers can now choose its personality, including “candid,” “quirky,” or “friendly.” Adult users will soon be able to have erotic conversations, lifting the Replika-era ban on adult content. (How erotica might affect users’ well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)
OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever.
In October, Mr. Turley, who runs ChatGPT, made an urgent announcement to all employees. He declared a “Code Orange.” OpenAI was facing “the greatest competitive pressure we’ve ever seen,” he wrote, according to four employees with access to OpenAI’s Slack. The new, safer version of the chatbot wasn’t connecting with users, he said.
The message linked to a memo with goals. One of them was to increase daily active users by 5 percent by the end of the year.
Kevin Roose contributed reporting. Julie Tate contributed research.
Kashmir Hill writes about technology and how it is changing people’s everyday lives with a particular focus on privacy. She has been covering technology for more than a decade.
Jennifer Valentino-DeVries is an investigative reporter at The Times who often uses data analysis to explore complex subjects."
Personalized mRNA Vaccines Will Revolutionize Cancer Treatment—If Funding Cuts Don’t Doom Them
“Vaccines based on mRNA can be tailored to target a cancer patient’s unique tumor mutations. But crumbling support for cancer and mRNA vaccine research has endangered this promising therapy

As soon as Barbara Brigham’s cancerous pancreatic tumor was removed from her body in the fall of 2020, the buzz of a pager summoned a researcher to the pathology department in Memorial Sloan Kettering’s main hospital in New York City, one floor below. Brigham, now 79, was recovering there until she felt well enough to go home to Shelter Island, near the eastern tip of Long Island. Her tumor and parts of her pancreas, meanwhile, were sent on an elaborate 24-hour course through the laboratory. Hospital staff assigned the organ sample a number and a unique bar code, then extracted a nickel-size piece of tissue to be frozen at –80 degrees Celsius. They soaked it in formalin to prevent degradation, then set it in a machine that gradually replaced the water in each cell with alcohol.
Next, lab staff pinned the pancreas to a foam block, took high-resolution images with a camera fixed overhead and used a scalpel to remove a series of sections of tumor tissue. These sections were embedded in hot paraffin and cut into slices a fraction of the thickness of a human hair, which were prepped, stained and mounted on glass slides to be photographed again. By the time a pathologist looked at Brigham’s tumor under a microscope the next day, more than 50 people had helped steer it through the lab. Still, this work was all a prelude.
The real action came some two months later, when Brigham returned to the hospital to receive a vaccine tailored to the mutations that differentiated her tumor from the rest of her pancreas. Made of messenger RNA (mRNA) suspended in tiny fat particles, the vaccine was essentially a set of genetic instructions to help Brigham’s immune system go after the mutant proteins unique to her tumor cells. It was, in other words, her very own shot.
It’s been four years since Brigham received the last of nine doses of her personalized vaccine. In that time she’s seen one grandchild finish college and get married and another embark on a Ph.D. She has attended dozens of high school basketball and volleyball games for her third and fourth grandchildren and cradled the family’s newest arrival, a granddaughter born last year. She hosts a weekly mah-jongg-and-dessert gathering for a group of friends on Shelter Island and tries to live out her mother’s maxim of having “a little adventure” each and every day. “I’m a little crippled here and there with arthritis,” Brigham says, but “I never sit still.” And she remains free of pancreatic cancer.
Brigham’s recovery came as part of a small phase 1 clinical trial conducted by Memorial Sloan Kettering in partnership with pharmaceutical companies Genentech and BioNTech—the latter, along with Pfizer, helped to produce the first approved mRNA vaccine for COVID-19. Brigham was one of 16 patients in the study who received the vaccine, administered in tandem with standard drugs, and one of eight who experienced a significant immune response. Six of those eight patients are still in remission, along with one of the eight others who did not show much immune response to the vaccine.
Seven of 16 might not sound like much. But that number suggests that the vaccine has tantalizing potential. Pancreatic cancer can be exceptionally fast-growing, and its first signs—weight loss, cramping, a touch of jaundice—are easily missed, so by the time it is diagnosed it is almost always lethal. Only 8 percent of patients with the most common form of the cancer, ductal adenocarcinoma, survive to the five-year mark, and the vast majority of people with the disease show little response to treatment.
The results of Brigham’s trial were also an early sign that mRNA vaccines may be effective for a wide variety of cancers: whereas pancreatic cancer is known for its low rate of mutations, the earliest data on personalized mRNA vaccines came from studies of melanoma, which researchers had targeted specifically because it tends to mutate so frequently. An earlier phase 2 trial in patients with advanced melanoma found that for those who received both a personalized mRNA vaccine and so-called immune checkpoint inhibitors, the risk of death or recurrence decreased by almost half compared with those who got only checkpoint inhibitors. Ongoing companion trials are targeting kidney and bladder carcinomas and lung cancer. In each case, the vaccine is additive: administered after surgery and with standard drugs. The shot’s job is to prime the immune system to recognize abnormal proteins arising from mutations and attack any lingering malignancy that escaped conventional treatments—or stamp out future recurrence.
Seeing promising results in fundamentally different kinds of tumors has motivated researchers to pursue personalized mRNA vaccines much more broadly. In doing so, they’ve developed an approach at the nexus of several important trends, pairing insights about our immune system’s response to cancer with advances in vaccine production spurred by the COVID pandemic, the rise of algorithms powered by artificial intelligence, and the plummeting cost of genetic sequencing. Today there are at least 50 active clinical trials in the U.S., Europe and Asia targeting more than 20 types of cancer. A melanoma trial led by pharmaceutical companies Moderna and Merck has now reached phase 3, the last step before a medicine can be approved for public consumption. Personalized melanoma vaccines could be available as early as 2028, with mRNA vaccines for other cancers to follow.
But the promise of this novel approach couldn’t have come at a more perilous time for the field. In the first weeks of the second Trump administration, U.S. cancer research was thrown into unprecedented turmoil as federal grants were terminated en masse. According to one Senate analysis, funding from the National Cancer Institute was cut by 31 percent in just the first three months of 2025.
By March cancer researchers worried that mRNA vaccines were facing particular scrutiny. KFF Health News reported that Matthew Memoli, acting director of the National Institutes of Health, had asked that any grants, contracts or collaborations involving mRNA be flagged for Health and Human Services Secretary Robert F. Kennedy, Jr., best known prior to assuming that role as one of the nation’s most prominent anti-vaccine campaigners. Suddenly, the optimism around personalized mRNA vaccines was overshadowed by a sense that the public investment that sustained cancer research was being dismantled piece by piece.
Much of cancer’s biological power comes from the fact that to the body, it doesn’t always seem like a pathogen. Because cancer arises from mutations in each patient’s own DNA, the disease complicates our immune system’s central task of differentiating between body and foreign object, host and invader, “self” and “not self.”
Physicians long hypothesized that there was a link between cancer and swelling—a critical sign that the immune system “sees” an enemy to ward off. In the 1890s William Coley, now known as the father of immunotherapy, successfully spurred remission in patients with inoperable tumors by injecting them with bacteria like those that cause strep throat. But the mechanisms behind Coley’s treatments were poorly understood, and for decades after his discovery, researchers weren’t sure our immune systems could detect cancer at all.
Because doctors didn’t know exactly how the body perceives and responds to cancer, early treatments were highly invasive and highly toxic: The first tactic was major surgery on the organs where cancer was taking root. That was followed in the 20th century by the development of systemic radiation and chemotherapy to attack cancer cells throughout the body. Over time oncologists narrowed and refined these approaches incrementally, using more precise surgery, more focused radiation and chemo that killed fewer normal cells as collateral. Still, the dream was to harness immunotherapy, which represented a dramatic departure from the usual tactics in seeking to use the human body’s own systems to go after cancer in a more targeted way.
The first real proof that immune cells are capable of recognizing tumors didn’t come until the 1950s and 1960s. Gradually, researchers came to understand that cancer deploys a host of tricks to suppress the immune response to growing tumors. Some forms of cancer use fibrous tissue called stroma to construct shields that make it difficult for immune cells to penetrate or attack tumors. Other cancers take advantage of the balancing act our immune systems are always performing when they decide how heavily to invest the body’s defenses in warding off a given threat. Some tumors produce proteins that can shut down key immune cells. Tumors may even recruit immune cells to promote the growth of blood vessels that will supply them with oxygen and nutrients.
As scientists learned more about how cancer manipulates the immune system, they started identifying ways to thwart it. Inside our cells, proteins are constantly being chopped up into smaller sequences of amino acids, some of which are then presented on the cell surface as part of what’s collectively known as the major histocompatibility complex, or MHC—essentially the immune system’s tool for differentiating self and foreign molecules. When the immune system detects a protein from a pathogen, it’s supposed to dispatch killer T cells to eliminate the invader. Some cancers can interfere with this process by hijacking the checkpoint proteins that keep our immune system from revving out of control and using them to turn T cells off. Starting in the mid-1990s, several research teams found success by treating mice with checkpoint inhibitors, then a new class of drugs designed to keep tumor cells from concealing their identity and signaling, effectively, “nothing to see here.” Thirty years on, checkpoint inhibitors have become a transformative tool in cancer treatment, especially for melanoma.
The research that went into developing checkpoint inhibitors showed conclusively that immune cells detect cancer much in the same way they identify other pathogens: through differences in protein structure determined by DNA—a crucial insight. But as revolutionary as checkpoint inhibitors have been for immunotherapy, they don’t work for everyone—far from it. Some 80 percent of patients do not respond to this class of drugs. Researchers are still trying to understand all the mechanisms that play a role in determining who does respond, but one key factor is whether the immune system is able to recognize tumor cells on the basis of their mutations.
This is where mRNA vaccines come in. Jason Luke, a melanoma researcher who now serves as chief medical officer of mRNA-medicine start-up Strand Therapeutics, helped to design several ongoing clinical trials of mRNA vaccines for cancer. He explains that both checkpoint inhibitors and mRNA vaccines build on our deep evolutionary adaptation for fighting pathogens by identifying the proteins they shed in our bodies. But checkpoint inhibitors are effective only if the patient’s immune system recognizes the cancer as a threat. In contrast, mRNA vaccines have the potential to work even in patients whose cancers haven’t spurred much immune response. The trick, Luke says, is using computational tools to decipher which of a given tumor’s mutations are most likely to be found by the immune system.
On a Monday morning last April, I visited surgical oncologist Vinod Balachandran at his lab on the eighth floor of the Memorial Sloan Kettering Cancer Center. Balachandran led the trial Brigham participated in, and he now is director of a center for cancer vaccines that the institution launched in 2024. The entrance to his lab is at the end of a hallway lined with big freezers holding tissue samples.
When I arrived, Balachandran met me just beyond a pair of swinging doors, where postdocs hunched over laptops under rows of high shelves packed with boxes of pipettes and assay plates. He strode to the window and pointed to the brick façade of the main hospital across the street, explaining that tissue samples taken after surgery have only a short distance to travel to the lab, sometimes through a tunnel under East 68th Street. “The proximity of the laboratory tower to where patients are being treated is actually supercritical,” he says, because it allows the samples to be processed and put on ice quickly, minimizing the deterioration that begins as soon as tissue is removed from the body.
The work that culminated in Brigham’s vaccine grew out of research into a subset of pancreatic cancer survivors known as exceptional responders—the small percentage of people who make it to the five-year mark after a diagnosis. “These patients, you know, they’re very rare,” Balachandran says. Even at a facility as large as Memorial Sloan Kettering, which sees tens of thousands of cancer patients a year, it was possible to study this group with any precision only because of the hospital’s long-standing mandate to save samples of every patient’s tissue. When Balachandran joined the faculty in 2015, his research on long-term survivors relied on tissue samples taken more than a decade earlier.
In 2017 Balachandran and his collaborators published a study demonstrating that some patients with pancreatic ductal adenocarcinoma had more cells able to recognize the unique proteins that mutant tumor cells produced and that their immune systems seemed to develop a kind of long-term memory to fight recurrence. In some cases, immune cells with receptors that could bind to these cancer proteins persisted in the blood for more than a decade after the tumors that spawned them were removed. What if, Balachandran wondered, we could equip the 92 percent of patients who are not naturally exceptional responders with the same kinds of biological tools? “If you can teach the immune system to recognize the proteins in, say, pancreatic cancer, perhaps that could provide a blueprint,” he says.
As tumors grow and metastasize, they undergo a kind of compressed evolution in which normal cells with the host’s DNA accrue mutations that cause them to divide and multiply abnormally, forming an ever larger group of closely related tumor clones. Many mutations register in the form of abnormal proteins and protein fragments, called neoantigens, some of which accumulate on the surface of the proliferating tumor cells.
Balachandran compared this growing family tree of tumor clones with new variants in a group of viruses, like the Alpha, Delta and Omicron variants of SARS-CoV-2, which emerged as the COVID-19 pandemic wore on. “You’d want a COVID vaccine to be able to target each different virus in that rapidly evolving clade,” Balachandran says.
For the development of a cancer vaccine, mapping the evolutionary trajectory of a cancerous tumor is equally important, albeit with a different set of parameters. The goal is not to distinguish between the presentations of two related pathogens but rather to understand at what point a disease derived from one’s own body starts to register to the immune system as not self.
“At some point—we don’t think immediately—the immune system starts to notice,” says Benjamin Greenbaum, Balachandran’s colleague at Memorial Sloan Kettering’s Olayan Center for Cancer Vaccines, who led the computational work behind the vaccine given to Brigham. In later stages, tumors typically accumulate signs of immune system involvement even if the immune response hasn’t been effective—changes in the cell makeup of the microenvironment around the tumor, the display of checkpoint molecules. These signs can be understood as evolutionary adaptations on the part of the tumor in the race to evade detection, Greenbaum explains. “So then the question really became, Can we try to estimate what the immune system is really seeing in cancer?”
To develop a workable mRNA vaccine, Greenbaum and Balachandran had to both sequence the DNA of the cancerous tumors they were targeting and develop a framework for going after the right neoantigens—those abnormal proteins that offer clues to a tumor’s underlying mutations. Neoantigens are made up of short chains of amino acids from proteins with names that look like license plate numbers: PIK3CA, KDM5C. One overarching goal of their collaboration is to discern meaningful patterns in the frequency of the sequences across patients and across cancer types. What neoantigens survive one mutation after another? Which ones show up reliably under certain conditions or look most distinctive to the body’s immune defenses?
Some of these sequences, from so-called driver antigens, are present in most clones of a given tumor type. In pancreatic cancer, the driver mutation is often in a gene called KRAS, but the resulting antigens don’t seem to elicit a reliable immune response in long-term survivors. Instead, when Balachandran and his colleagues sequenced the blood of such survivors, the immune cells present in the highest concentrations were those adapted to antigens resulting from one-off, or “passenger,” mutations.
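To make the prioritization problem concrete, here is a deliberately simplified sketch (hypothetical peptide names and scores; real pipelines weigh far richer signals, such as predicted MHC binding, expression and similarity to self-proteins). It ranks candidate mutant peptides by a single weighted score, and in this toy setup the passenger antigens outrank the driver antigen, mirroring what the team saw in long-term survivors.

```python
# Deliberately simplified neoantigen-ranking sketch.
# Peptide IDs and scores are hypothetical; this is not the actual vaccine pipeline.
candidates = [
    # (peptide id, predicted immune visibility 0-1, expression level 0-1)
    ("MUT_KRAS_DRIVER", 0.35, 0.90),
    ("MUT_PASSENGER_A", 0.88, 0.55),
    ("MUT_PASSENGER_B", 0.72, 0.20),
]

def priority(visibility, expression, w_visibility=0.7):
    """Weighted score favoring peptides the immune system is likely to notice."""
    return w_visibility * visibility + (1 - w_visibility) * expression

for pid, vis, expr in sorted(candidates, key=lambda c: priority(c[1], c[2]), reverse=True):
    print(f"{pid}: score = {priority(vis, expr):.2f}")
```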
In 2017, when the team published the results of the study, this was a counterintuitive finding. For decades researchers pursuing vaccines and other immune treatments for cancer had focused on melanoma because melanoma tumors have a high rate of genetic mutations. “It looks very different to the immune system than many other types of cancers do,” says Michael Postow, a medical oncologist at Memorial Sloan Kettering who is involved in clinical trials of mRNA vaccines for melanoma. “That made it a good target.” With all the mutant antigens it produces, melanoma should attract the immune system’s attention and trigger it to attack. The conventional wisdom about pancreatic cancer, in contrast, held that it produces so few mutations that it is unlikely to carry passenger antigens that could elicit an immune response.
With the results from the 2017 study of exceptional responders in hand, Balachandran was able to flip that argument on its head. Even if vaccines appear to be well suited for melanoma, there’s always a degree of uncertainty in selecting the right antigens to target. For starters, the sequencing of a pancreatic tumor biopsy like Brigham’s is really just a snapshot in time. Come back a few months or a few years later or wait for the patient to experience a recurrence, and there’s no guarantee the tumor clone that seemed dominant at the time of the initial sequencing will still be a factor in the disease. Each mutation can also have unpredictable effects, with the size, shape or biochemistry of the antigen in question shifting dramatically in response to the change of even a single amino acid.
What is more, not every antigen that corresponds to either self or not self is reliably expressed on the surface of the corresponding cell. A neoantigen that seems characteristic of the tumor might have a profile nearly identical to that of another self-antigen somewhere else in the body. In that case, a vaccine based on that neoantigen might fail to elicit much of an immune response, or it could provoke a response against the wrong target.
The study revealed a potential liability in a strategy for personalized mRNA vaccines that focused on melanoma: melanoma’s high rate of mutations gives rise to a large pool of plausible vaccine targets, but it presents just as many chances to guess wrong. A given tumor could have as many as 10,000 distinct proteins on the surface of its cells; you couldn’t possibly target every one. But in pancreatic cancer, Balachandran realized, the smaller number of mutations might improve the odds of picking a suitable antigen to target.
That insight underpinned the pitch Balachandran brought to Ugur Sahin, co-founder and CEO of German biotech company BioNTech. Their collaboration began before the COVID pandemic, but in 2020 BioNTech was consumed by the effort to bring the world’s first mRNA vaccine to market. Together with Moderna, the company demonstrated the vaccine’s safety through billions of doses administered worldwide with very few side effects.
Not only was mRNA safe for vaccine delivery, but, as Sahin knew from experience, it is also a flexible platform for genetic information. Whereas traditional vaccines typically require ongoing production of the exact virus they’re targeting, most of the genetic information in an mRNA vaccine can stay the same no matter which disease you’re fighting.
BioNTech’s COVID vaccine built on 30 years of work by Sahin and company co-founder Özlem Türeci that was originally intended for vaccines targeting cancer. As longtime collaborators who are also a married couple, they had tinkered with the nucleotide sequences on the molecule’s cap and tail that direct a vaccine to the right part of the cell and tell the immune system what to pay attention to, and they had improved the mRNA’s stability so that even a small dose of a vaccine could provoke a full-scale immune response. All that work could be incorporated into vaccines for other diseases; the only thing that needed to change was the genetic information in the middle of the molecule. After obtaining positive results for the mRNA vaccine for melanoma, Sahin agreed to partner with Balachandran to develop an mRNA vaccine for pancreatic cancer.
As global demand for COVID vaccines has slackened, there has been a mounting rush to apply mRNA technology to a long list of illnesses, including malaria, flu, tuberculosis and norovirus. Cancer is a natural target. Despite treatment advances, it remains broadly incurable and is a leading cause of death as life expectancies improve across the world. But because cancer vaccines must be personalized, the biggest change in approach to developing them for an mRNA platform comes not in development but in manufacturing. Both BioNTech and Moderna now confront something like the inverse of the challenge they faced in developing the first COVID shots.
Prior to the pandemic, both companies were upstarts among the giants of the pharmaceutical industry. Neither had brought a product to market. Moderna employed under 1,000 people and had manufactured fewer than 100,000 total doses of its clinical-stage vaccines. Once its SpikeVax received emergency use authorization from the U.S. Food and Drug Administration, the company quadrupled its workforce and produced more than a billion doses in just 18 months.
The task facing Scott Nickerson, who oversees Moderna’s manufacturing for individualized neoantigen therapies, was to reengineer a process perfected for producing mRNA vaccines for millions of people in batches of thousands of liters. For personalized vaccines, each batch would be a few milliliters at most and would have to be turned around in weeks.
To get there, Moderna is investing heavily in automation, partnering with a robotics firm to prepare sterile kits of raw materials for each batch and thereby minimize operator touch time on the manufacturing floor. The hope is that rather than following a single large batch of vaccine through the entire manufacturing process, workers will eventually be able to move from one small batch to the next after setup.
At both Moderna and BioNTech, the complex logistics of conducting the dozens of different quality-control tests required for each production run falls to algorithms powered by AI. Before being approved for release, doses of SpikeVax underwent 40 distinct tests that tracked the chemistry, biochemistry, microbiology and sterility of every vial. With COVID vaccines, the sterility test alone, which ensures that vials are not contaminated with organisms, took two weeks. Refinements have since compressed that test to eight days, Nickerson says. Ultimately the goal is to shrink it to five days and complete the other tests within that same window. “The reason it’s hard is we have to design the equipment,” he explains. “None of this stuff’s off-the-shelf.”
At the same time, the background science is, at least in theory, easily adapted from work that’s already been done. Lennard Lee, an adviser to the U.K.’s National Health Service overseeing the rollout of clinical trials for cancer vaccines, says the pandemic gave regulators there a running start on trials for mRNA cancer vaccines. In partnership with BioNTech, the NHS launched a program that aims to provide personalized vaccines to up to 10,000 cancer patients in the next five years. And the NHS and Moderna have invested in a facility that could produce up to 250 million vaccines per year.
In that interval, as manufacturers work to reduce production times and costs, clinical trials will evaluate alternative dosage and delivery mechanisms, Lee says. Although current protocol is for vaccines to target micrometastases—small groups of cancer cells that spread to other parts of the body and linger after cancerous tumors are removed surgically—there’s no shortage of adjustments that might follow from more data or improved screening. Could one deliver a therapeutic vaccine to tackle a tumor before it is large enough to operate on? Or maybe one could even administer a prophylactic shot that prevents tumor formation in the first place?
With a unified health system and world-class research and manufacturing facilities, Lee says, the U.K. is well positioned to advance research that would answer such questions. Fully realizing the potential of personalized mRNA vaccines for cancer, however, will require more trials in the U.S., which has many more cancer research centers than the U.K. But the ability of the U.S. to lead this effort is now in jeopardy.
The federal government has long been the dominant source of funding for cancer research in the U.S. Miriam Merad, a cancer immunologist at the Icahn School of Medicine at Mount Sinai in New York City, says that in a typical year, funding from the NIH accounts for more than half of the research budget at her institution.
In President Donald Trump’s first term, threatened cuts to the NIH never quite materialized. Society is not going to let that happen, Merad thought. But just weeks into Trump’s second term, the NIH announced plans to limit indirect contributions to research grants to 15 percent, meaning that for every $100 in funding awarded, only $15 extra would be included for overhead—a dramatic departure from historical rates in the range of 50 to 60 percent.
“This is an operation,” Merad says, gesturing to the building where she works, which is dotted with six-figure pieces of equipment and has an entire floor dedicated to rearing mice used in research. “We have to pay salaries; we have to buy food for the animals. We have to pay service contracts because we have instruments that need to be serviced all the time.” These are not expenses that can be easily paused or restarted based on the fate of a single grant. Within just a few months of the NIH announcement, Merad’s department had reduced hires of new postdocs, and Mount Sinai’s medical school had to shrink the size of its incoming class.
By May another threat to personalized mRNA vaccines for cancer was coming into focus: mounting federal hostility to vaccines. Senate Republicans convened a hearing entitled “The Corruption of Science and Federal Health Agencies,” featuring the false claim that as many as three out of four deaths from COVID were caused by mRNA vaccines deployed to stop the pandemic. (In fact, COVID vaccinations saved an estimated 2.5 million lives between 2020 and 2024, according to a study published earlier this year.) In June, Kennedy fired all 17 members of the Advisory Committee on Immunization Practices, which makes recommendations on federal vaccine policy. He eventually replaced them with his own advisory committee, which includes several anti-vaccine stalwarts. Kennedy has also slashed research funding for mRNA vaccines. In August he canceled nearly $500 million supporting the development of mRNA vaccines against viruses such as SARS-CoV-2 and influenza. The move intensified the fears of researchers who want to develop mRNA vaccines for other illnesses, among them cancer.
After my visit to Memorial Sloan Kettering, Balachandran’s team shared a chart that plotted Brigham’s immune response to her personalized mRNA vaccine. Along the bottom, triangles marked the dates of her surgery and each of the nine doses of the vaccine she received over the course of a year. Above them a cluster of brightly colored lines showed the share of her body’s T cells targeting the specific mutant proteins in her cancerous tumor. At first, when Brigham’s tumor was removed, cells trained to go after each cancer clone were somewhere on the order of one in 500,000 T cells in her blood. A few months after surgery, when she’d had four doses of the vaccine, the lines shot up almost vertically, showing that the most common cancer fighter at that point accounted for around one in 20 to one in 50 T cells—an increase of more than 20,000-fold.
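The rough arithmetic behind that figure, using the frequencies reported above (the "more than 20,000-fold" increase corresponds to the upper end of this range):

```python
# Rough arithmetic only, using the frequencies reported in the paragraph above.
baseline = 1 / 500_000                  # roughly 1 in 500,000 T cells at surgery
peak_low, peak_high = 1 / 50, 1 / 20    # roughly 1 in 50 to 1 in 20 after four doses
print(f"{peak_low / baseline:,.0f}-fold to {peak_high / baseline:,.0f}-fold")  # 10,000 to 25,000
```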
Those T cells dipped a bit in the months before Brigham’s last booster shot, given almost a year after her tumor was removed. But they remained in the same range even three years on. A phase 2 clinical trial evaluating the safety and efficacy of the vaccine in a larger patient group is currently underway.
The vaccine for Brigham’s cancer was just nine tiny vials of liquid administered through an IV, a private message that only her immune system was meant to decode. But the effort that delivered that coded message was a deeply collective enterprise, one that stretches back through the hundreds of thousands of tissue samples collected, stored and analyzed at Memorial Sloan Kettering, each one taken from the body of a patient who might not have survived their cancer. Also in that vaccine were the contributions of generations of taxpayers who never got to see these results. Perhaps their descendants will be able to beat the disease—if society continues to support this vital work.”