
Wednesday, December 06, 2023

How Nations Are Losing a Global Race to Tackle A.I.’s Harms - The New York Times

How Nations Are Losing a Global Race to Tackle A.I.’s Harms

"Alarmed by the power of artificial intelligence, Europe, the United States and others are trying to respond — but the technology is evolving more rapidly than their policies.

An illustration of the unveiling of artificial intelligence, showing some countries in support, some in opposition. Hokyoung Kim

By Adam Satariano and Cecilia Kang

Adam Satariano reported from Brussels, London and Strasbourg, France. Cecilia Kang reported from Washington.

When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.

E.U. lawmakers had spent three years gathering input from thousands of experts on A.I., when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.

Then came ChatGPT.

The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.

Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.

Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence. Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.

The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.

At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace. That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.

Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.

The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems. A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.

“The jury is still out about whether you can regulate this technology or not,” said Andrea Renda, a senior research fellow at the Center for European Policy Studies, a think tank in Brussels. “There’s a risk this E.U. text ends up being prehistorical.”

The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems. Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.

Without united action soon, some officials warned, governments may fall further behind the A.I. makers and their breakthroughs.

“No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”

Europe takes the lead

In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.

The group debated whether there were already enough European rules to protect against the technology and considered potential ethics guidelines, said Nathalie Smuha, a legal scholar in Belgium who coordinated the group.

But as they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?” she said.

In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.

The report rippled through the insular world of E.U. policymaking. Ursula von der Leyen, the president of the European Commission, made the topic a priority on her digital agenda. A 10-person team was assigned to build on the group’s ideas and draft a law. A committee in the European Parliament, the European Union’s co-legislative branch, held nearly 50 hearings and meetings to consider A.I.’s effects on cybersecurity, agriculture, diplomacy and energy.

In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.

So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous.

Under the proposal, organizations offering risky A.I. tools would have to meet certain requirements to ensure those systems were safe before being deployed. A.I. software that created manipulated videos and “deepfake” images would have to disclose that people were seeing A.I.-generated content. Other uses, such as live facial recognition software, were banned or restricted. Violators could be fined 6 percent of their global sales.

Some experts warned that the draft law did not account enough for A.I.’s future twists and turns.

“They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”

E.U. leaders were undeterred.

“Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.

A blind spot

Nineteen months later, ChatGPT arrived.

The European Council, another branch of the European Union, had just agreed to regulate general purpose A.I. models, but the new chatbot reshuffled the debate. It revealed a “blind spot” in the bloc’s policymaking over the technology, said Dragos Tudorache, a member of the European Parliament who had argued before ChatGPT’s release that the new models must be covered by the law. These general purpose A.I. systems not only power chatbots but can learn to perform many tasks by analyzing data culled from the internet and other sources.

E.U. officials were divided over how to respond. Some were wary of adding too many new rules, especially as Europe has struggled to nurture its own tech companies. Others wanted more stringent limits.

“We want to be careful not to underdo it, but not overdo it as well and overregulate things that are not yet clear,” said Mr. Tudorache, a lead negotiator on the A.I. Act.

By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.

Policymakers were still working on compromises as negotiations over the law’s language entered a final stage this week.

A European Commission spokesman said the A.I. Act was “flexible relative to future developments and innovation friendly.”

The Washington game

Jack Clark, a founder of the A.I. start-up Anthropic, had visited Washington for years to give lawmakers tutorials on A.I. Almost always, just a few congressional aides showed up.

But after ChatGPT went viral, his presentations became packed with lawmakers and aides clamoring to hear his A.I. crash course and views on rule making.

“Everyone has sort of woken up en masse to this technology,” said Mr. Clark, whose company recently hired two lobbying firms in Washington.

Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.

“We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”

Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.

In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.

An illustration of tech executives discussing policy with the Senate leader Chuck Schumer. Hokyoung Kim

In Washington, the activity around A.I. has been frenetic — but with no legislation to show for it.

In May, after a White House meeting about A.I., the leaders of Microsoft, OpenAI, Google and Anthropic were asked to draw up self-regulations to make their systems safer, said Brad Smith, Microsoft’s president. After Microsoft submitted suggestions, the commerce secretary, Gina M. Raimondo, sent the proposal back with instructions to add more promises, he said.

Two months later, the White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.

“It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”

In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”

Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.

In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.

Mr. Schumer said the companies knew the technology best.

In some cases, A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.

“China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.

Fleeting collaboration

In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.

After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”

Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.

Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology. 

Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.

“Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I. will be several factors more difficult to manage.”

Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.

A European Commission spokesman said that the United States and Europe had “worked together closely” on A.I. policy and that the Group of 7 countries unveiled a voluntary code of conduct in October.

A State Department spokesman said there had been “ongoing, constructive conversations” with the European Union, including the G7 accord. At the meeting in Sweden, he added, Mr. Blinken emphasized the need for a “unified approach” to A.I.

Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.

The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.

The talks, in the end, produced a deal to keep talking."


How Nations Are Losing a Global Race to Tackle A.I.’s Harms - The New York Times

Monday, December 04, 2023

Intriguing Canon Patent Reveals More Boundary-Pushing Lenses

“Hold on to your seats, because Canon may be gearing up to release more ridiculous lenses, as a new patent application details.

The patent application, numbered 2023170260 and originating from Japan, introduces optical formulas for a range of RF mount lenses, including:

  • 70-200mm f/2-2.8 DS
  • 70-135mm f/2.5 DS
  • 28-70mm f/1.6-2 DS
  • 80-150mm f/1.8 DS
  • 35-70mm f/1.8-2 DS

What sets these lenses apart from the crowd is the incorporation of Defocus Smoothing (DS) technology, which uses apodization filters to produce particularly smooth bokeh, albeit at the expense of light transmission. Canon's patent application seeks to address the challenge of achieving a uniform and excellent apodization effect across the entire zoom range while effectively minimizing peripheral dimming.

The most jaw-dropping aspect of the proposed lenses, though, is the astonishingly wide maximum apertures featured in these zoom lenses. Such apertures would redefine what photographers can achieve in various shooting conditions with a zoom lens. I can personally attest to how much owning the RF 28-70mm f/2 L USM has changed the way I shoot, and to see the company exploring an even wider option is truly insane. The exceptionally large apertures not only enable photographers to shoot in low light with remarkable clarity and reduced noise (though that ability may be diminished with the DS technology) but also offer unparalleled creative control over depth of field, making them versatile tools for a wide range of photographic applications.

Canon's patent application aims to address the challenge of a DS system for a zoom lens while also minimizing vignetting. This breakthrough could open up new creative possibilities for photographers, enabling them to capture stunning images with beautifully blurred backgrounds, regardless of their zoom setting.

While Canon has not officially confirmed its plans regarding these lenses and the DS technology, the patent application offers a tantalizing glimpse into the company's dedication to pushing the boundaries of optical innovation. Without a doubt, such lenses would be both enormous and enormously expensive, so there's no guarantee we'll see any of them make it to the market, but it's neat to see Canon continuing to push boundaries.”

Tuesday, November 28, 2023

10 Best Lenses of All Time

Try the ChatGPT ‘Make It More’ Trend and Generate Absurd AI Images

“When you ask ChatGPT's DALL-E to make a picture more of something over and over again, things get weird.

Images generated via a DALL-E conversation, asking the bot to make an image of a cup of coffee hotter

AI art generators are in a weird place. They can attempt to make just about anything you can think of, from a dog skateboarding in outer space, to a cup of coffee floating in the ocean. Putting the ethics of AI art aside, some of these creations do not hit the mark on the first go around, and you need to prompt the AI bot with changes to tweak the final results to your liking. 

But what if your end goal isn't to produce a quality piece of AI art? What if your goal is to make something wild?

That's what the "make it more" trend is all about. ChatGPT users are asking DALL-E to generate an image, then once that image pops out, they ask the bot to make it more of something. In this example from Justine Moore, DALL-E was prompted to create a bowl of ramen. After that initial prompt, Moore asked it to make it spicier. It followed suit, mostly by adding a lot of peppers to the mix. She again asked DALL-E to make it spicier. It complied by setting the bowl on fire in what appears to be Pepper Hell™. By the end of the exercise, the bowl of ramen was shooting fire beams into outer space, a truly spicy bowl of noodles. 

There are plenty of examples of this trend online to peruse for your pleasure, from Mashable editor Stan Schroeder's gigantic water bottle experiment, to this body builder getting more and more buff. If you want to try the trend for yourself, however, you should be aware of some constraints. 

How to use the "make it more" trend with ChatGPT

First of all, DALL-E, like other elements of GPT-4, has a limit to the number of prompts you can issue at any one time. OpenAI isn't super clear about when you're about to hit your limit, but just be careful not to get too carried away with your experiments, or else you'll need to wait a few hours to try again.

Second, DALL-E is finicky with this type of request. I'm not sure if this is something OpenAI adjusted since the trend picked up steam, but I've had trouble getting DALL-E to cooperate with making a piece of art more of something. I tried two different scenarios in particular. First, I asked the bot to generate an image of a dog running through a field. It did. I then asked it to make the dog faster. It complied. I asked it to make the dog faster again, but it rejected me, letting me know that it already made a dog that was fast, and didn't feel the need to make it go faster. 

DALL-E chat with me requesting it to make a faster dog. Credit: Jake Peterson

I tried the dog trick again, asking it first to create the fastest dog in the world, then asking it to make the dog even faster. DALL-E rejected me again, saying it had already made the fastest dog in the world. Silly me.

I had more luck asking the bot to generate a cup of coffee, then asking it to keep making the coffee hotter. At first, it tapped out after a couple of iterations, but I was finally able to get the bot to generate about five progressively "hotter" cups of coffee. By the time it told me that it couldn't represent heat any differently, the cup looked like it was undergoing the Trinity Test:

Screenshot of a DALL-E conversation, asking it to generate increasingly hotter cups of coffee. Credit: Jake Peterson

I encourage you to try the trend out for yourself in the AI generator of your choice. Just remember: Start small (e.g. “generate a cup of coffee”), then ask it to change it in a simple way (“make the coffee hotter”).”
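If you would rather script the experiment than click through ChatGPT, the sketch below approximates the trend with the OpenAI Python SDK. It is a rough, hedged example: it assumes the openai package (v1+) is installed and an OPENAI_API_KEY environment variable is set, and because the image endpoint has no chat memory, it fakes "make it more" by piling the escalation onto the prompt each round rather than reproducing ChatGPT's actual conversational behavior. The model name, image size and five-round cap are illustrative choices.

```python
# Minimal sketch of a "make it more" loop, under the assumptions stated above.
from openai import OpenAI

client = OpenAI()

prompt = "a cup of coffee on a wooden table"
modifier = "hotter"

for round_number in range(1, 6):
    response = client.images.generate(
        model="dall-e-3",   # assumed model identifier
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    # Each generation returns a temporary URL to the rendered image.
    print(f"Round {round_number}: {response.data[0].url}")
    # Escalate the request, mimicking "make it more ..." in the chat interface.
    prompt = f"{prompt}, but even {modifier}, exaggerated much further"
```

Note that the raw API does not politely decline the way the chat bot does; an over-the-top prompt simply raises an error, in which case shortening the escalation string and retrying is the easiest fix.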

Friday, November 24, 2023

Canon 10-20 f4L RF REVIEW: WATCH BEFORE YOU BUY! (vs Canon 11-24)

Apple Adds RCS and OpenAI Explodes!

‘What the heck is going on?’ Extremely high-energy particle detected falling to Earth | Particle physics | The Guardian

‘What the heck is going on?’ Extremely high-energy particle detected falling to Earth

"Amaterasu particle, one of highest-energy cosmic rays ever detected, is coming from an apparently empty region of space

Artist’s impression of the Amaterasu particle. When ultra-high-energy cosmic rays hit Earth’s atmosphere, they initiate a cascade of secondary particles and electromagnetic radiation. Photograph: Osaka Metropolitan University/Kyoto University/Ryuunosuke Takeshige/PA

Astronomers have detected a rare and extremely high-energy particle falling to Earth that is causing bafflement because it is coming from an apparently empty region of space.

The particle, named Amaterasu after the sun goddess in Japanese mythology, is one of the highest-energy cosmic rays ever detected.

Only the most powerful cosmic events, on scales far exceeding the explosion of a star, are thought to be capable of producing such energetic particles. But Amaterasu appears to have emerged from the Local Void, an empty area of space bordering the Milky Way galaxy.

“You trace its trajectory to its source and there’s nothing high energy enough to have produced it,” said Prof John Matthews, of the University of Utah and a co-author of the paper in the journal Science that describes the discovery. “That’s the mystery of this – what the heck is going on?”

The Amaterasu particle has an energy exceeding 240 exa-electron volts (EeV), millions of times more than particles produced in the Large Hadron Collider, the most powerful accelerator ever built, and equivalent to the energy of a golf ball travelling at 95 mph. It is second only to the Oh-My-God particle, another ultra-high-energy cosmic ray, which came in at 320 EeV when it was detected in 1991.
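A quick back-of-the-envelope check of that golf-ball comparison (my own arithmetic, not from the paper, assuming a regulation ball of roughly 46 g):

```latex
% 240 EeV converted to joules:
E_{\text{Amaterasu}} \approx 240\times10^{18}\ \mathrm{eV} \times 1.602\times10^{-19}\ \mathrm{J/eV} \approx 38\ \mathrm{J}
% Kinetic energy of a 0.046 kg golf ball at 95 mph (about 42.5 m/s):
E_{\text{golf}} = \tfrac{1}{2}mv^{2} = \tfrac{1}{2}\,(0.046\ \mathrm{kg})\,(42.5\ \mathrm{m/s})^{2} \approx 42\ \mathrm{J}
```

The two figures land within about 10 percent of each other, so the analogy holds up.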

“Things that people think of as energetic, like supernova, are nowhere near energetic enough for this,” said Matthews. “You need huge amounts of energy, really high magnetic fields, to confine the particle while it gets accelerated.”

Toshihiro Fujii, an associate professor at Osaka Metropolitan University in Japan, said: “When I first discovered this ultra-high-energy cosmic ray, I thought there must have been a mistake, as it showed an energy level unprecedented in the last three decades.”

A potential candidate for this level of energy would be a super-massive black hole at the heart of another galaxy. In the vicinity of these vast entities, matter is stripped back to its subatomic structures and protons, electrons and nuclei are hurled out across the universe at nearly the speed of light.

Cosmic rays, echoes of such violent celestial events, rain down on to Earth nearly constantly and can be detected by instruments, such as the Telescope Array observatory in Utah, which found the Amaterasu particle.

Below a certain energy threshold, the flight path of these particles resembles a ball in a pinball machine as they zigzag against the electromagnetic fields through the cosmic microwave background. But particles with Oh-My-God or Amaterasu-level energy would be expected to blast through intergalactic space relatively unbent by galactic and extra-galactic magnetic fields, meaning it should be possible to trace their origin.

Tracing its trajectory backwards points towards empty space. Similarly, the Oh-My-God particle had no discernible source. Scientists suggest this could indicate a much larger magnetic deflection than predicted, an unidentified source in the Local Void, or an incomplete understanding of high-energy particle physics.

“These events seem like they’re coming from completely different places in the sky. It’s not like there’s one mysterious source,” said Prof John Belz of the University of Utah and a co-author of the paper. “It could be defects in the structure of spacetime, colliding cosmic strings. I mean, I’m just spitballing crazy ideas that people are coming up with because there’s not a conventional explanation.”

The Telescope Array is uniquely positioned to detect ultra-high-energy cosmic rays. It sits at about 1,200m (4,000ft), the elevation sweet spot that allows secondary particles maximum development before they start to decay. Its location in Utah’s West Desert provides ideal atmospheric conditions in two ways: the dry air is crucial because humidity will absorb the ultraviolet light necessary for detection; and the region’s dark skies are essential, as light pollution will create too much noise and obscure the cosmic rays.

The Telescope Array is in the middle of an expansion that astronomers hope will help crack the case. Once completed, 500 new scintillator detectors will extend the array across 2,900 km2 (1,100 mi2), an area nearly the size of Rhode Island, and this larger footprint is expected to capture more of these extreme events.

This article was amended on 24 November 2023 to clarify some of the wording, based on agency copy, that was used in an earlier version regarding the speed of particles."

‘What the heck is going on?’ Extremely high-energy particle detected falling to Earth | Particle physics | The Guardian

Wednesday, November 22, 2023

Time Does Not Exist! James Webb Telescope SHOCKS The Entire Space Industry!

Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding

“Sam Altman confronted a member over a research paper that discussed the company, while directors disagreed for months about who should fill board vacancies.

Sam Altman called an OpenAI board member’s research paper a danger to the company weeks before he was ousted as its chief executive. Justin Sullivan/Getty Images

Before Sam Altman was ousted from OpenAI last week, he and the company’s board of directors had been bickering for more than a year. The tension got worse as OpenAI became a mainstream name thanks to its popular ChatGPT chatbot.

At one point, Mr. Altman, the chief executive, made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.

Another member, Ilya Sutskever, thought Mr. Altman was not always being honest when talking with the board. And some board members worried that Mr. Altman was too focused on expansion while they wanted to balance that growth with A.I. safety.

The news that he was being pushed out came in a videoconference on Friday afternoon, when Mr. Sutskever, who had worked closely with Mr. Altman at OpenAI for eight years, read him a statement. The decision stunned OpenAI’s employees and exposed board members to tough questions about their qualifications to manage such a high-profile company.

Those tensions seemingly came to an end late Tuesday when Mr. Altman was reinstated as chief executive. Mr. Sutskever and others critical of Mr. Altman were jettisoned from the board, whose members now include Bret Taylor, an early Facebook officer and former co-chief executive of Salesforce, and Larry Summers, the former Treasury Department secretary. The only holdover is Adam D’Angelo, chief executive of the question-and-answer site Quora.

The OpenAI debacle has illustrated how building A.I. systems is testing whether businesspeople who want to make money from artificial intelligence can work in sync with researchers who worry that what they are building could eventually eliminate jobs or become a threat if technologies like autonomous weapons grow out of control.

OpenAI was started in 2015 with an ambitious plan to one day create a superintelligent automated system that can do everything a human brain can do. But friction plagued the company’s board, which hadn’t even been able to agree on replacements for members who had stepped down.

Before Mr. Altman’s return, the company’s continued existence was in doubt. Nearly all of OpenAI’s 800 employees had threatened to follow Mr. Altman to Microsoft, which asked him to lead an A.I. lab with Greg Brockman, who quit his roles as OpenAI’s president and board chairman in solidarity with Mr. Altman.

The board had told Mr. Brockman that he would no longer be OpenAI’s chairman but invited him to stay on at the company — though he was not invited to the meeting where the decision was made to push him off the board and Mr. Altman out of the company.

OpenAI’s board troubles can be traced to the start-up’s nonprofit beginnings. In 2015, Mr. Altman teamed with Elon Musk and others, including Mr. Sutskever, to create a nonprofit to build A.I. that was safe and beneficial to humanity. They planned to raise money from private donors for their mission. But within a few years, they realized that their computing needs required much more funding than they could raise from individuals.

After Mr. Musk left in 2018, they created a for-profit subsidiary that began raising billions of dollars from investors, including $1 billion from Microsoft. They said that the subsidiary would be controlled by the nonprofit board and that each director’s fiduciary duty would be to “humanity, not OpenAI investors,” the company said on its website.

Helen Toner, an OpenAI board member, defended the research paper she co-wrote. Matt Winkelmeyer/Getty Images

Among the tensions leading up to Mr. Altman’s ouster and quick return was his conflict with Helen Toner, a board member and a director of strategy at Georgetown University’s Center for Security and Emerging Technology. A few weeks before Mr. Altman’s firing, he met with Ms. Toner to discuss a paper she had co-written for the Georgetown center.

Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, a company that has become OpenAI’s biggest rival, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times.

In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.

Ms. Toner defended it as an academic paper that analyzed the challenges that the public faces when trying to understand the intentions of the countries and companies developing A.I. But Mr. Altman disagreed.

“I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.”

Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that A.I. could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the conversations said.

But shortly after those discussions, Mr. Sutskever did the unexpected: He sided with board members to oust Mr. Altman, according to two people familiar with the board’s deliberations. The statement he read to Mr. Altman said that Mr. Altman was being fired because he wasn’t “consistently candid in his communications with the board.”

Mr. Sutskever’s frustration with Mr. Altman echoed what had happened in 2021 when another senior A.I. scientist left OpenAI to form Anthropic. That scientist and other researchers went to the board to try to push Mr. Altman out. After they failed, they gave up and departed, according to three people familiar with the attempt.

“After a series of reasonably amicable negotiations, the co-founders of Anthropic were able to negotiate their exit on mutually agreeable terms,” an Anthropic spokeswoman, Sally Aldous, said. In a second statement, Anthropic added that there was “no attempt to ‘oust’ Sam Altman at the time the founders of Anthropic left OpenAI.”

Vacancies exacerbated the board’s issues. This year, it disagreed over how to replace three departing directors: Reid Hoffman, the LinkedIn founder and a Microsoft board member; Shivon Zilis, director of operations at Neuralink, a company started by Mr. Musk to implant computer chips in people’s brains; and Will Hurd, a former Republican congressman from Texas.

After vetting four candidates for one position, the remaining directors couldn’t agree on who should fill it, said the two people familiar with the board’s deliberations. The stalemate hardened the divide between Mr. Altman and Mr. Brockman and other board members.

Hours after Mr. Altman was ousted, OpenAI executives confronted the remaining board members during a video call, according to three people who were on the call.

During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities.

Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission. In the board’s view, OpenAI would be stronger without Mr. Altman.

On Sunday, Mr. Sutskever was urged at OpenAI’s office to reverse course by Mr. Brockman’s wife, Anna, according to two people familiar with the exchange. Hours later, he signed a letter with other employees that demanded the independent directors resign. The confrontation between Mr. Sutskever and Ms. Brockman was reported earlier by The Wall Street Journal.

At 5:15 a.m. on Monday, he posted on X, formerly Twitter, that “I deeply regret my participation in the board’s actions.””

Cade Metz is a technology reporter and the author of “Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and The World.” He covers artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas.

Tripp Mickle reports on Apple and Silicon Valley for The Times and is based in San Francisco. His focus on Apple includes product launches, manufacturing issues and political challenges. He also writes about trends across the tech industry, including layoffs, generative A.I. and robot taxis.