Tuesday, November 28, 2023

10 Best Lenses of All Time

Try the ChatGPT ‘Make It More’ Trend and Generate Absurd AI Images

“When you ask ChatGPT's DALL-E to make a picture more of something over and over again, things get weird.

Images generated via a DALL-E conversation, asking the bot to make an image of a cup of coffee hotter

AI art generators are in a weird place. They can attempt to make just about anything you can think of, from a dog skateboarding in outer space, to a cup of coffee floating in the ocean. Putting the ethics of AI art aside, some of these creations do not hit the mark on the first go around, and you need to prompt the AI bot with changes to tweak the final results to your liking. 

But what if your end goal isn't to produce a quality piece of AI art? What if your goal is to make something wild?

That's what the "make it more" trend is all about. ChatGPT users are asking DALL-E to generate an image, then once that image pops out, they ask the bot to make it more of something. In this example from Justine Moore, DALL-E was prompted to create a bowl of ramen. After that initial prompt, Moore asked it to make it spicier. It followed suit, mostly by adding a lot of peppers to the mix. She again asked DALL-E to make it spicier. It complied by setting the bowl on fire in what appears to be Pepper Hell™. By the end of the exercise, the bowl of ramen was shooting fire beams into outer space, a truly spicy bowl of noodles. 

There are plenty of examples of this trend online to peruse for your pleasure, from Mashable editor Stan Schroeder's gigantic water bottle experiment, to this body builder getting more and more buff. If you want to try the trend for yourself, however, you should be aware of some constraints. 

How to use the "make it more" trend with ChatGPT

First of all, DALL-E, like other elements of GPT-4, has a limit on the number of prompts you can issue at any one time. OpenAI isn't super clear about when you're about to hit your limit, so be careful not to get too carried away with your experiments, or else you'll need to wait a few hours to try again.

Second, DALL-E is finicky with this type of request. I'm not sure if this is something OpenAI adjusted since the trend picked up steam, but I've had trouble getting DALL-E to cooperate with making a piece of art more of something. I tried two different scenarios in particular. First, I asked the bot to generate an image of a dog running through a field. It did. I then asked it to make the dog faster. It complied. I asked it to make the dog faster again, but it rejected me, letting me know that it already made a dog that was fast, and didn't feel the need to make it go faster. 

DALL-E chat with me requesting it to make a faster dog.

Credit: Jake Peterson

I tried the dog trick again, asking it first to create the fastest dog in the world, then asking it to make the dog even faster. DALL-E rejected me again, saying it had already made the fastest dog in the world. Silly me.

I had more luck asking the bot to generate a cup of coffee, then asking it to keep making the coffee hotter. At first, it tapped out after a couple of iterations, but I was finally able to get the bot to generate about five progressively "hotter" cups of coffee. By the time it told me that it couldn't represent heat any differently, the cup looked like it was undergoing the Trinity Test:

Screenshot of a DALL-E conversation, asking it to generate increasingly hotter cups of coffee

Credit: Jake Peterson

I encourage you to try the trend out for yourself in the AI generator of your choice. Just remember: Start small (e.g. "generate a cup of coffee"), then ask it to change it in a simple way ("make the coffee hotter").
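If you'd rather script the loop than click through ChatGPT, you can approximate it with OpenAI's Images API. The sketch below is a minimal, hypothetical example, assuming the `openai` Python package (v1+) and an OPENAI_API_KEY in your environment; unlike ChatGPT, the Images API keeps no conversation memory, so it emulates the trend by folding an escalating modifier back into the prompt each round. The model name, starting prompt, and round count are placeholders to adjust.

```python
# A rough sketch of the "make it more" loop, not the exact ChatGPT behavior.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "A cup of coffee"      # start small, as suggested above
modifier = "make it hotter"     # the simple change to repeat
rounds = 5                      # keep this low to avoid rate limits

for i in range(rounds):
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    print(f"Round {i + 1}: {response.data[0].url}")
    # The Images API is stateless, so escalate by rewriting the prompt:
    prompt += f", {modifier}, noticeably more extreme than before"
```

The returned URLs are temporary, so download any rounds you want to keep.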

Friday, November 24, 2023

Canon 10-20 f4L RF REVIEW: WATCH BEFORE YOU BUY! (vs Canon 11-24)

Apple Adds RCS and OpenAI Explodes!

‘What the heck is going on?’ Extremely high-energy particle detected falling to Earth

"Amaterasu particle, one of highest-energy cosmic rays ever detected, is coming from an apparently empty region of space

Artist’s impression of the Amaterasu particle. When ultra-high-energy cosmic rays hit Earth’s atmosphere, they initiate a cascade of secondary particles and electromagnetic radiation. Photograph: Osaka Metropolitan University/Kyoto University/Ryuunosuke Takeshige/PA

Astronomers have detected a rare and extremely high-energy particle falling to Earth that is causing bafflement because it is coming from an apparently empty region of space.

The particle, named Amaterasu after the sun goddess in Japanese mythology, is one of the highest-energy cosmic rays ever detected.

Only the most powerful cosmic events, on scales far exceeding the explosion of a star, are thought to be capable of producing such energetic particles. But Amaterasu appears to have emerged from the Local Void, an empty area of space bordering the Milky Way galaxy.

“You trace its trajectory to its source and there’s nothing high energy enough to have produced it,” said Prof John Matthews, of the University of Utah and a co-author of the paper in the journal Science that describes the discovery. “That’s the mystery of this – what the heck is going on?”

The Amaterasu particle has an energy exceeding 240 exa-electron volts (EeV), millions of times more than particles produced in the Large Hadron Collider, the most powerful accelerator ever built, and equivalent to the energy of a golf ball travelling at 95mph. It is second only to the Oh-My-God particle, another ultra-high-energy cosmic ray that came in at 320 EeV when it was detected in 1991.
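As a back-of-the-envelope check of that golf-ball comparison (assuming a regulation 45.9 g ball; these figures are rough, not taken from the paper):

$$E \approx 240\times10^{18}\ \mathrm{eV} \times 1.602\times10^{-19}\ \mathrm{J/eV} \approx 38\ \mathrm{J}$$

$$v = \sqrt{2E/m} = \sqrt{2 \times 38\ \mathrm{J} / 0.0459\ \mathrm{kg}} \approx 41\ \mathrm{m/s} \approx 91\ \mathrm{mph}$$

which lands in the same ballpark as the quoted 95mph.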

“Things that people think of as energetic, like supernova, are nowhere near energetic enough for this,” said Matthews. “You need huge amounts of energy, really high magnetic fields, to confine the particle while it gets accelerated.”

Toshihiro Fujii, an associate professor at Osaka Metropolitan University in Japan, said: “When I first discovered this ultra-high-energy cosmic ray, I thought there must have been a mistake, as it showed an energy level unprecedented in the last three decades.”

A potential candidate for this level of energy would be a super-massive black hole at the heart of another galaxy. In the vicinity of these vast entities, matter is stripped back to its subatomic structures and protons, electrons and nuclei are hurled out across the universe at nearly the speed of light.

Cosmic rays, echoes of such violent celestial events, rain down on to Earth nearly constantly and can be detected by instruments, such as the Telescope Array observatory in Utah, which found the Amaterasu particle.

Below a certain energy threshold, the flight path of these particles resembles a ball in a pinball machine as they zigzag against the electromagnetic fields through the cosmic microwave background. But particles with Oh-My-God or Amaterasu-level energy would be expected to blast through intergalactic space relatively unbent by galactic and extra-galactic magnetic fields, meaning it should be possible to trace their origin.

Tracing its trajectory backwards points towards empty space. Similarly, the Oh-My-God particle had no discernible source. Scientists suggest this could indicate a much larger magnetic deflection than predicted, an unidentified source in the Local Void, or an incomplete understanding of high-energy particle physics.

“These events seem like they’re coming from completely different places in the sky. It’s not like there’s one mysterious source,” said Prof John Belz of the University of Utah and a co-author of the paper. “It could be defects in the structure of spacetime, colliding cosmic strings. I mean, I’m just spitballing crazy ideas that people are coming up with because there’s not a conventional explanation.”

The Telescope Array is uniquely positioned to detect ultra-high-energy cosmic rays. It sits at about 1,200m (4,000ft), the elevation sweet spot at which secondary particles develop fully before they start to decay. Its location in Utah’s West Desert provides ideal atmospheric conditions in two ways: the dry air is crucial, because humidity would absorb the ultraviolet light necessary for detection; and the region’s dark skies are essential, as light pollution would create too much noise and obscure the cosmic rays.

The Telescope Array is in the middle of an expansion that astronomers hope will help crack the case. Once completed, 500 new scintillator detectors will expand the array across 2,900 km² (1,100 sq mi), an area nearly the size of Rhode Island, and this larger footprint is expected to capture more of these extreme events.

This article was amended on 24 November 2023 to clarify some of the wording, based on agency copy, that was used in an earlier version regarding the speed of particles."


Wednesday, November 22, 2023

Time Does Not Exist! James Webb Telescope SHOCKS The Entire Space Industry!

Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding

“Sam Altman confronted a member over a research paper that discussed the company, while directors disagreed for months about who should fill board vacancies.

Sam Altman called an OpenAI board member’s research paper a danger to the company weeks before he was ousted as its chief executive. Justin Sullivan/Getty Images

Before Sam Altman was ousted from OpenAI last week, he and the company’s board of directors had been bickering for more than a year. The tension got worse as OpenAI became a mainstream name thanks to its popular ChatGPT chatbot.

At one point, Mr. Altman, the chief executive, made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.

Another member, Ilya Sutskever, thought Mr. Altman was not always being honest when talking with the board. And some board members worried that Mr. Altman was too focused on expansion while they wanted to balance that growth with A.I. safety.

The news that he was being pushed out came in a videoconference on Friday afternoon, when Mr. Sutskever, who had worked closely with Mr. Altman at OpenAI for eight years, read him a statement. The decision stunned OpenAI’s employees and exposed board members to tough questions about their qualifications to manage such a high-profile company.

Those tensions seemingly came to an end late Tuesday when Mr. Altman was reinstated as chief executive. Mr. Sutskever and others critical of Mr. Altman were jettisoned from the board, whose members now include Bret Taylor, an early Facebook officer and former co-chief executive of Salesforce, and Larry Summers, the former Treasury secretary. The only holdover is Adam D’Angelo, chief executive of the question-and-answer site Quora.

The OpenAI debacle has illustrated how building A.I. systems is testing whether businesspeople who want to make money from artificial intelligence can work in sync with researchers who worry that what they are building could eventually eliminate jobs or become a threat if technologies like autonomous weapons grow out of control.

OpenAI was started in 2015 with an ambitious plan to one day create a superintelligent automated system that can do everything a human brain can do. But friction plagued the company’s board, which hadn’t even been able to agree on replacements for members who had stepped down.

Before Mr. Altman’s return, the company’s continued existence was in doubt. Nearly all of OpenAI’s 800 employees had threatened to follow Mr. Altman to Microsoft, which asked him to lead an A.I. lab with Greg Brockman, who quit his roles as OpenAI’s president and board chairman in solidarity with Mr. Altman.

The board had told Mr. Brockman that he would no longer be OpenAI’s chairman but invited him to stay on at the company — though he was not invited to the meeting where the decision was made to push him off the board and Mr. Altman out of the company.

OpenAI’s board troubles can be traced to the start-up’s nonprofit beginnings. In 2015, Mr. Altman teamed with Elon Musk and others, including Mr. Sutskever, to create a nonprofit to build A.I. that was safe and beneficial to humanity. They planned to raise money from private donors for their mission. But within a few years, they realized that their computing needs required much more funding than they could raise from individuals.

After Mr. Musk left in 2018, they created a for-profit subsidiary that began raising billions of dollars from investors, including $1 billion from Microsoft. They said that the subsidiary would be controlled by the nonprofit board and that each director’s fiduciary duty would be to “humanity, not OpenAI investors,” the company said on its website.

Helen Toner, an OpenAI board member, defended the research paper she co-wrote. Matt Winkelmeyer/Getty Images

Among the tensions leading up to Mr. Altman’s ouster and quick return was his conflict with Helen Toner, a board member and a director of strategy at Georgetown University’s Center for Security and Emerging Technology. A few weeks before Mr. Altman’s firing, he met with Ms. Toner to discuss a paper she had co-written for the Georgetown center.

Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, a company that has become OpenAI’s biggest rival, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times.

In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.

Ms. Toner defended it as an academic paper that analyzed the challenges that the public faces when trying to understand the intentions of the countries and companies developing A.I. But Mr. Altman disagreed.

“I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.”

Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that A.I. could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the conversations said.

But shortly after those discussions, Mr. Sutskever did the unexpected: He sided with board members to oust Mr. Altman, according to two people familiar with the board’s deliberations. The statement he read to Mr. Altman said that Mr. Altman was being fired because he wasn’t “consistently candid in his communications with the board.”

Mr. Sutskever’s frustration with Mr. Altman echoed what had happened in 2021 when another senior A.I. scientist left OpenAI to form Anthropic. That scientist and other researchers went to the board to try to push Mr. Altman out. After they failed, they gave up and departed, according to three people familiar with the attempt.

“After a series of reasonably amicable negotiations, the co-founders of Anthropic were able to negotiate their exit on mutually agreeable terms,” an Anthropic spokeswoman, Sally Aldous, said. In a second statement, Anthropic added that there was “no attempt to ‘oust’ Sam Altman at the time the founders of Anthropic left OpenAI.”

Vacancies exacerbated the board’s issues. This year, it disagreed over how to replace three departing directors: Reid Hoffman, the LinkedIn founder and a Microsoft board member; Shivon Zilis, director of operations at Neuralink, a company started by Mr. Musk to implant computer chips in people’s brains; and Will Hurd, a former Republican congressman from Texas.

After vetting four candidates for one position, the remaining directors couldn’t agree on who should fill it, said the two people familiar with the board’s deliberations. The stalemate hardened the divide between Mr. Altman and Mr. Brockman and other board members.

Hours after Mr. Altman was ousted, OpenAI executives confronted the remaining board members during a video call, according to three people who were on the call.

During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities.

Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission. In the board’s view, OpenAI would be stronger without Mr. Altman.

On Sunday, Mr. Sutskever was urged at OpenAI’s office to reverse course by Mr. Brockman’s wife, Anna, according to two people familiar with the exchange. Hours later, he signed a letter with other employees that demanded the independent directors resign. The confrontation between Mr. Sutskever and Ms. Brockman was reported earlier by The Wall Street Journal.

At 5:15 a.m. on Monday, he posted on X, formerly Twitter, that “I deeply regret my participation in the board’s actions.”

Cade Metz is a technology reporter and the author of “Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and The World.” He covers artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas.

Tripp Mickle reports on Apple and Silicon Valley for The Times and is based in San Francisco. His focus on Apple includes product launches, manufacturing issues and political challenges. He also writes about trends across the tech industry, including layoffs, generative A.I. and robot taxis.

Monday, November 20, 2023

This Is Why Google Paid Billions for Apple to Change a Single Setting


“A report in The Guardian in August that lawyers who had had business before the Supreme Court gave money to an aide to Justice Clarence Thomas for a Christmas party was surprising. Just as surprising was the way the publication learned about it: from the aide’s public Venmo records. Brian X. Chen, the consumer technology writer for The Times, wrote that even he was surprised that such records of money transfers could be public.

A few years ago it became known that Alexa, Amazon’s voice device, recorded and sent private conversations to third parties, and that Amazon staff members listened to recordings and kept an extensive archive of them by default.

Both companies responded to these startling violations of privacy by suggesting that the burden to keep this information from going public was on users, who could, they said, opt out of devices’ default settings to ensure privacy. This is often the standard industry response.

Even if you’re aware of these problems, how easy is it to protect your privacy? Chen helpfully shared instructions for opting out of Venmo’s public disclosures.

“Inside the app, click on the Me tab, tap the settings icon and select Privacy. Under default privacy settings, select Private,” he explained. “Then, under the ‘More’ section in Privacy, click ‘Past Transactions’ and make sure to set that to ‘Change All to Private.’”

Got all that? I did, and changed my settings, too, as I had also been in the dark.

The bigger problem is not the sometimes ridiculous difficulty of opting out; it’s that consumers often aren’t even aware of what their settings allow, or what it all means. If they were truly informed and actively choosing among the available options, the default setting would matter little and would be of little to no value to companies.

But companies expect users to accept what they’re given, to not know their options, or to lack the constant vigilance required to keep track of the available options, however limited they may be. Since power in the industry is concentrated among a few gatekeepers, and the technology is opaque and its consequences hard to foresee, default settings are some of the most important ways for companies to keep collecting and using data as they want.

So, how much are default settings worth?

In April 2021, Apple changed the default settings on iPhones and other devices so that users could not be tracked automatically via a unique identifier assigned to their Apple device. For many companies, and even for entire industries whose business models are based on tracking people online, it was a cataclysmic event. No longer would people have to opt out of such tracking by going into their settings and changing the permissions. Now the apps had to ask for and receive explicit permission before they could have access to that identifier.

In 2021, Snap, Facebook, Twitter and YouTube were estimated to have lost about $10 billion in total because of the change. In early 2022, Meta, Facebook’s parent company, said it alone stood to lose $10 billion. Industries like mobile gaming, in which revenue largely depends on tracking users, also suffered.

Another measure of the value of default settings emerged in the current Google antitrust trial, during which Google revealed that it paid $26.3 billion in 2021 to be the default search engine on various platforms, with a substantial portion of the money going to Apple. That $26.3 billion was more than a third of the entire 2021 profit of Google’s parent company, Alphabet, and more than the 2021 revenue of United Airlines and even of many tech companies, including Uber. An expert witness for Google testified that as part of that deal, the company was paying Apple 36 percent of its search advertising revenue to be its products’ default search engine.

Even when you might think you know what your default settings are, you can be surprised. On more than one occasion I discovered that my privacy settings had changed from what I thought they were. Help forums are full of similarly befuddled users. Sometimes it’s a bug. Other times, when I dug into it, I realized that another change I had made had surreptitiously switched me back into tracking. Sometimes I learned that there was yet another setting somewhere else that also needed to be changed.

I’m not a tech novice: I started programming in middle school, worked as a developer and study these systems academically. If professionals can be tripped up, I’d argue that an industry rife with information asymmetries and powerful, complicated technologies needs to be reined in.

Regulators can require companies to have defaults that favor privacy and autonomy, and make it easy to remain in control of them. There are already good efforts underway. California allows people to make a single opt-out or delete request to get all data brokers to delete all their information, rather than having to appeal to them one by one. Colorado also recently passed similar universal one-stop opt-out mechanisms. Other states have made similar privacy protection moves.

I would go further: Data brokers should not be allowed to amass information about people unless they first get explicit permission. But that’s not sufficient, since it is difficult for individuals to evaluate all the implications of their data — professionals, experts and the companies themselves keep getting surprised.

A few years ago, aggregate maps generated by the running app Strava, which showed where users were running, seemingly revealed the location of what could have been a secret Central Intelligence Agency annex in Mogadishu, Somalia. It appears that even the C.I.A. hadn’t anticipated this, and instructed its personnel to change the setting. If that’s the case, what chance do ordinary people have to evaluate all future implications of their data?

There need to be stronger guardrails, including for data that is legitimately collected. The default should be the most restrictive setting, with additional protections. For example, companies should have expiration dates for how long they can hold data needed for a particular service, limiting the data use to that service alone, with explicit consent required for different uses.

The process by which companies get such permissions also needs strong oversight to ensure accountability and transparency. After all, this is the industry that invented “dark patterns”: user interfaces designed to deceive customers into “opting in” to choices without fully realizing what was happening. Many apps have already been trying to get around Apple’s privacy restrictions, by carefully engineering how to get people to opt in or by figuring out other ways to fingerprint devices.

What about all the benefits we derive from services based on personalized data, including even location tracking? I use such tools all the time, but there are certainly ways to provide services and value without this level of unchecked surveillance. But it’s wishful thinking to expect companies to provide those services in a more privacy-preserving manner without regulation that forces them to do so.

I was happy to see Apple switch the defaults for tracking in 2021, but I’m not happy that it was because of a decision by one powerful company — what oligopoly giveth, oligopoly can taketh away. We didn’t elect Apple’s chief executive, Tim Cook, to be the sovereign of our digital world. He could change his mind.

Notably, Apple’s change followed years of intense public criticism of Facebook, after privacy scandals, the election of Donald Trump, Brexit and so on. None of that seemed to have made a substantial dent in Facebook’s business. A single decision by Tim Cook did, clearly demonstrating where real power over this industry lies.

It’s time that our own elected officials got smarter — and prioritized the public interest rather than cozy arrangements with the tech industry — to exercise that power. If the federal government can’t or won’t, states can follow California and Colorado’s lead. In 1966, California forged ahead alone to set high emission standards for cars, which then pulled along the rest of the nation and the industry.

If it were all as simple as people changing their settings, Google wouldn’t be forking over a sum larger than the G.D.P. of entire countries to have Apple users start with one setting rather than another. The default way the technology industry does business needs to change now.

Zeynep Tufekci (@zeynep) is a professor of sociology and public affairs at Princeton University, the author of “Twitter and Tear Gas: The Power and Fragility of Networked Protest” and a New York Times Opinion columnist.

Friday, October 27, 2023

Living In A Country Where The Internet Sucks? STARLINK Unbox / Install ...

The Consequences of Elon Musk’s Ownership of X

When Elon Musk bought Twitter a year ago, he said he wanted to create what he called a “common digital town square.”

“That said,” he wrote, “Twitter obviously cannot become a free-for-all hellscape.”

A year later, according to study after study, Mr. Musk’s platform has become exactly that.

“Now rebranded as X, the site has experienced a surge in racist, antisemitic and other hateful speech. Under Mr. Musk’s watch, millions of people have been exposed to misinformation about climate change. Foreign governments and operatives — from Russia to China to Hamas — have spread divisive propaganda with little or no interference.

Mr. Musk and his team have repeatedly asserted that such concerns are overblown, sometimes pushing back aggressively against people who voice them. Yet dozens of studies from multiple organizations have shown otherwise, demonstrating on issue after issue a similar trend: an increase in harmful content on X during Mr. Musk’s tenure.

The war between Israel and Hamas — the sort of major news event that once made Twitter an essential source of information and debate — has drowned all social media platforms in false and misleading information, but for Mr. Musk’s platform in particular the war has been seen as a watershed. The conflict has captured in full how much the platform has descended into the kind of site that Mr. Musk had promised advertisers he wanted to avoid on the day he officially took over.

“With disinformation about the Israel-Hamas conflict flourishing so dramatically on X, it feels that it crossed a line for a lot of people where they can see — beyond just the branding change — that the old Twitter is truly gone,” Tim Chambers of Dewey Square Group, a public affairs company that tracks social media, said in an interview. “And the new X is a shadow of that former self.”

Reports on X’s role during the Israel-Hamas war

“An epicenter for content praising the attacks.”

The growing sense of chaos on the platform has already hurt Mr. Musk’s investment. While it remains one of the most popular social media services, people visited the website nearly 5.9 billion times in September, down 14 percent from the same month last year, according to the data analysis firm Similarweb.

Advertisers have also fled, leading to a sizable slump in sales. Mr. Musk noted this summer that ad revenue had fallen 50 percent. He blamed the Anti-Defamation League, one of several advocacy groups that have cataloged the rise of hateful speech on X, for “trying to kill this platform.”

Most of the problems, however, stem from changes that Mr. Musk instituted – some intentionally, some not. Studies about the state of X have been conducted over the past year by researchers and analysts at universities, think tanks and advocacy organizations concerned with the spread of hate speech and other harmful content.

Research conducted in part by the Institute for Strategic Dialogue concluded that anti-Semitic tweets in English more than doubled after Mr. Musk’s takeover. A report from the European Commission found that engagement with pro-Kremlin accounts grew 36 percent on the platform in the first half of this year after Mr. Musk lifted mitigation measures.

Mr. Musk disbanded an advisory council focused on trust and safety issues and laid off scores of employees who addressed them. For a monthly fee, he offered users a blue checkmark, a label that once conveyed that Twitter had verified the identity of the user behind an account. He then used algorithms to promote accounts of uncertain provenance in users’ feeds. He removed labels that identified government and state media accounts for countries like Russia and China that censor independent media.

“The entire year’s worth of changes to X were fully stress tested during the global news breaking last week,” Mr. Chambers said, referring to the conflict in Israel. “And in the eyes of many, myself included, it failed utterly.”

The company did not respond to a request for comment beyond a stock response it regularly uses to press inquiries: “Busy now, please check back later.”

X trails only Facebook’s 16.3 billion monthly visits and Instagram’s 6.4 billion visits, according to Similarweb. TikTok, which is rising in popularity among certain demographic groups, has roughly two billion visits each month. Despite voluble threats by disgruntled users to move to alternative platforms – Mastodon, Bluesky or Meta’s new rival, Threads – none has yet reached the critical mass needed to replicate the public exposure that X offers.

Keeping X at the center of public debate is exactly Mr. Musk’s goal, which he describes at times with a messianic zeal. The day after Hamas attacked Israel, Mr. Musk urged his followers to follow “the war in real time.”

He then cited two accounts that are notorious for spreading disinformation, including a false post in the spring that an explosion had occurred outside the Pentagon. Faced with a flurry of criticism, Mr. Musk deleted the post and later sounded chastened.

He urged his followers on X to “stay as close to the truth as possible, even for stuff you don’t like. This platform aspires to maximize signal/noise of the human collective.”

Reports linking Mr. Musk’s acquisition to hateful content

“A sustained rise in hateful speech.”

Mr. Musk, the prominent, outspoken executive behind Tesla and SpaceX, had been an avid Twitter user for years before taking it over, promoting his ventures and himself, at times with crude, offensive comments. During the Covid-19 pandemic, he sharply criticized lockdowns and other measures to slow the virus’s spread and began to warn of a “woke” culture that silenced dissent.

Among his first acts as the site’s owner was to reverse the bans on thousands of accounts, including those of users who had promoted the QAnon conspiracy theory and spread disinformation about Covid and the 2020 presidential election.

The impact was instantaneous. Researchers at Tufts, Rutgers and Montclair State universities documented spikes in the use of racial and ethnic slurs soon after Mr. Musk’s acquisition. One research institute found that a campaign on 4chan, a notorious bulletin board, encouraged the use of a particular slur within hours of his arrival, in what seemed to be a coordinated test of the new owner’s tolerance for offensive speech.

The prevalence of such offensive language has, according to numerous studies, continued unabated. “The Musk acquisition saw a sustained rise in hateful speech on the platform,” an article in The Misinformation Review, a peer-reviewed journal published by the Harvard Kennedy School, said in August.

Even worse, the article argued, Mr. Musk’s changes appear to be boosting the engagements of the most contentious users.

A month into Mr. Musk’s ownership, the platform stopped enforcing its policy against Covid-19 misinformation. The liberal watchdog group Media Matters later identified 250 accounts with high engagement on Covid-related tweets. Nine of the top 10 accounts were known anti-vaccine proponents, several of whom promoted unproven and potentially harmful treatments and attacked top public health officials.

Mr. Musk’s first summer as X’s boss also coincided with a rash of climate-related disasters around the world, including deadly heat waves, rampaging wildfires, torrential rains and intense flooding. Last month, a scorecard evaluating social media companies on their defenses against climate-related falsehoods awarded X a single point out of a possible 21 (Meta, which owns Facebook and Instagram, was given eight points).

How the discussion over climate change changed under Mr. Musk

“Climate denial and hate speech have spiked.”

The platform was “lacking clear policies that address climate misinformation, having no substantive public transparency mechanisms, and offering no evidence of effective policy enforcement,” said the accompanying report from Climate Action Against Disinformation, an international coalition of more than 50 environmental advocacy groups.

This year, hundreds of researchers pushed back against a decision by X to end free access to software that would allow them to collect and analyze data about the site.

Perhaps the most impactful change under Mr. Musk has been the evolution of his subscription plans. The blue checkmark that once conveyed veracity and denoted verified accounts, often those of government agencies, companies and prominent users, became available to any account for $8 a month.

Reports on X bolstering foreign disinformation

“An increase of nearly 70% in Islamic State accounts.”

In April, Mr. Musk began removing the blue badges from verified accounts. New ones impersonating public officials, government agencies and celebrities proliferated, causing confusion about which were real. The platform went on to reward those who paid for their blue labels by amplifying their posts over those without the badge.

Reset, a nonprofit research organization, discovered that dozens of anonymous accounts linked to the Kremlin received the checkmark, pushing Russian narratives on the war in Ukraine. This spring, the platform also removed the labels that identified official state media of countries like Russia, China and Iran. In the 90 days after the change, engagement with posts from the English-language accounts of those outlets soared 70 percent, NewsGuard, a company that tracks online misinformation, reported in September.

Mr. Musk has now run afoul of the European Union’s newly enacted Digital Services Act, a law that requires social media platforms to restrict misinformation and other violative content within the union’s 27 nations.

A report commissioned by the union’s executive body warned in August that Mr. Musk’s dismantling of guardrails on the platform had resulted in a 36 percent increase in engagement with Kremlin-linked accounts from January through May, mostly pushing Russia’s justifications for its illegal invasion of Ukraine last year.

After war erupted between Israel and Hamas, Thierry Breton, a European Commissioner who oversees the law’s implementation, warned Mr. Musk in a letter that was posted on X, saying the company needed to address “violent and terrorist content that appears to circulate on your platform.”

Reset, the research organization, reported recently that it had documented 166 posts that its researchers considered antisemitic. Many appeared to violate laws in several European countries, including calls for violence against Jews and denying the historical facts of the Holocaust. They accumulated at least 23 million views and 480,000 engagements.

Mr. Musk sounded incredulous, even as the company scrambled to delete accounts linked to Hamas and other terrorist groups. He responded two days later to an account identified by the Anti-Defamation League as one of the most prominent purveyors of disinformation. The account, which had been removed from Twitter but was restored last December after Mr. Musk took over, had claimed that the European Union was trying to police the truth.

“They still haven’t provided any examples of disinformation,” Mr. Musk replied.”