Contact Me By Email

Saturday, April 23, 2022

Longer Phone Life: How to Avoid Unnecessary Phone Upgrades

Longer Phone Life: How to Avoid Unnecessary Phone Upgrades

Smartphones are more advanced than ever, but they aren't cheap. Learn how to make your phone last longer.

xiaomi-mi-note-10-6

Protect your phone's case and screen to let it live longer.

Ian Knighton/CNET

The iPhone 13Samsung S22 Ultra and Pixel 6 are all amazing phones, but they'll cost you -- the "cheapest" Pixel 6 is around $600, and Samsung's top model will run you well over $1,000.

With the rising cost of new phones, making your existing smartphone last longer can save you big bucks in the long run. Some steps are as simple as regularly cleaning your phone, while others take a bit more care and attention.

However, you don't need to be tech-savvy to make the most of these tips to improve phone longevity. They're all fairly easy to follow, and you can start changing your habits today to extend your phone life. 

Whether you just upgraded your phone or are thinking about it, we'll show you how to keep your phone in mint condition with a few quick and easy tips. For more, get the latest rumors on the Google Pixel watch and learn how to take better selfies with your iPhone.

Don't skip app updates. They're important 

In order to keep your phone running as fast and problem-free as possible, it's important to update the operating system and third-party apps as they're released. These updates don't take long, and the benefits can be huge: From fixing bugs and resolving security issues to changes that bring new features and improve overall performance so your device works more smoothly and quickly.  

Android and iPhone both offer automatic app updates, which should be enabled by default. You can check on an Android phone by opening the Play Store, sliding out the menu from the left side of the screen, tapping on Settings and making sure the Auto-update apps option is selected. If you have an iPhone, go to Settings > App Store and make sure the App Updates option is in the On position. However, this feature won't work in Low Power Mode.

For system updates on an Android phone, you can check for updates by opening the Settings app and searching for Software Update. Or you can wait for your phone to alert you an update is pending, and then follow the prompts. Googlereleases Android security updates around the first of each month, so checking in the middle of the month to see if your phone has an update isn't a bad habit to get into.  

As for iPhone owners, you have the option of allowing your phone to automatically install any new updates. Don't worry, your phone won't download and install the updates minutes after they come out. Instead, your phone will download the update a day or two after it's out and then schedule the installation for the next time it's charging overnight. To install an update, or turn on automatic updates, go to Settings General > Software Update. If an update is available, follow the prompts to install it. If not, tap on Automatic Updates to turn them on or off.  

ios-13-rearrange-delete-apps

Deleting apps frees up precious storage. 

Jason Cipriani/CNET

Deleting apps can help revive a slow phone 

Phones have a finite amount of storage and memory, which is why it's important to keep the number of unused apps to a minimum. Routinely deleting apps you don't use from your phone will save storage space and prevent apps from running in the background, using precious resources and making your phone feel slower and closer to the end of its life than it really is.  

Go through your app drawer or home screen and remove all of the apps you installed at some point for whatever reason but never use. Deleting apps on an iPhone is easy -- just long-press until the app is selected. You'll see a dropdown from the app icon. Just select Remove App. You'll then have the option to remove the app from your phone or just remove it from the Home screen. 

Android users have a few more options but can follow a similar approach: Long-press, select the "i" that shows up for App Info and then choose Uninstall.  

Now set a monthly calendar entry titled "Delete unused apps" to remind yourself to do this regularly.  

otterbox-pursuit-galaxy-s10

A good case will do a lot to extend the life of your phone. 

Sarah Tew/CNET

Most importantly, get a good case 

Like your phone? Why don't you put a case on it? Look, I get it. For a phone purist, even the slimmest of cases is too much. And it's nice to see the phone how it was designed, rather than a bulky plastic case.  

The smart choice, however, is to cover the phone in a protective case to help keep it from breaking when it inevitably drops. Paying for costly screen repair or swiping your finger over a spiderweb of cracks is sure to get you dreaming of buying a new phone -- or at least lamenting that you didn't get a case in the first place. 

You don't have to go all out and get the most expensive case, but having something on your phone will surely help extend its life. Not to mention, using a case means your phone will stay in pristine condition and likely have a higher resale value when it comes time to upgrade.  

Keep the screen and ports clean 

Using a phone that's full of grease and grime isn't something any of us wants to do. But using a phone that's clean and shiny, now that is something we can all feel good about, especially if it means that dust and gunk aren't getting in the way of how it operates.  

Even if you use a case, take a few minutes once in a while to clean out any lint or dirt in the charging port, microphone area, speaker grille and headphone jack, if your phone has one. You can use a toothpick to get in there and coax anything out.  

What about the screen, sides and back? Clorox wipes work just fine. We have more tips and tricks for keeping your phone clean -- it will go a long way toward reliable performance, clearer camera shots (without finger smudges) and a phone you actually want to keep using.  

hoyle-promo-iphone-11-pro-oneplus-8-pro

There are plenty of battery myths. 

Andrew Hoyle/CNET

Avoid overworking and overheating your phone battery 

There are several myths that involve your phone's battery and keeping it healthy. We recently took a look at several battery myths and uncovered the truth about each of them.  

For example, fast charging isn't going to hurt the battery one bit, but letting your phone routinely drain to zero could.  

Your phone will also last longer if you avoid putting your device in situations where the battery could overheat, like leaving it out in direct sunlight if you're sitting outside on a hot day. It'll shut itself off if it gets too hot, but this stresses the battery, which could contribute to a shorter overall life span. 

Start taking care of your phone's battery the day you unbox your phone, and you'll see the payoff when your battery is still holding a charge year or two later.  

Replace a broken battery instead of the phone 

Even if you've done everything by the book, you may still find yourself dealing with a battery that over time just refuses to hold a charge. Instead of replacing the entire phone, try replacing the battery.  

Often this simple and cost-effective maintenance can add a few more months or even years to your phone's functional life. It would be easier if most phones were still made with removable batteries like in the good ol' days, but instead most phones come with embedded batteries that can't be replaced without taking apart your phone.

If you're tech-savvy and brave enough, you can buy a DIY kit from iFixit to replace Android or iPhone batteries. Included in the kit are all of the tools you'll need, plus a new battery. 

Alternatively, iPhone users can make an appointment at an Apple Store, or Android users can use a certified third-party repair service like uBreakiFix. You'll need to call your closest store for a quote to replace your battery as the company doesn't list prices directly on its site.  

Bonus tip: Back up your phone regularly 

Having a backup of your phone's data isn't going to do anything to extend the life of your phone, but it is going to mean that when it's time to say goodbye to your phone, all your contacts, photos, videos and custom settings will be ready to make the leap so you don't have to start over from scratch. 

Own an iPhone? Double-check that iCloud backup is turned on by opening the Settings app. Next, tap on iCloud > iCloud Backup and make sure the switch is in the On position.  

The process for Android owners will vary based on who makes your device, but generally, you can open the Settings app and then go to Google > Backup. Some device makers, like Samsung, offer their own backup service in addition to Google's. The easiest way to find your device's automatic cloud backup service is to open the Settings app and use the search box at the top to look for "backup." 

Now that you know how to keep your phone running well into the future, take a few minutes to learn more about Android's hidden features. iOS users need not feel left out, we have hidden iPhone features for you, too. And here's how to sell your phone for the most money.

Friday, April 22, 2022

‘Democracy will wither’: Barack Obama outlines perils of unregulated big tech in sweeping speech

‘Democracy will wither’: Barack Obama outlines perils of unregulated big tech in sweeping speech666

“In a keynote address at Stanford University, the former president made his most extensive remarks yet about the tech landscape

Former president Barack Obama speaking at Stanford University on Thursday.
Former president Barack Obama speaking at Stanford University on Thursday. Photograph: Jessica Christian/AP

Technology companies must be reined in to address the “weakening of democratic institutions around the world”, Barack Obama said Thursday, in a sweeping keynote speech on the perils of disinformation.

Speaking at Stanford University in Silicon Valley, the former president made his most extensive remarks yet about the technology landscape, which he said is “turbo-charging some of humanity’s worst impulses”.

“One of the biggest reasons for the weakening of democracy is the profound change that’s taken place in how we communicate and consume information,” he said.

The address came as Obama has increasingly focused his post-presidential messaging on misinformation and what should be done about the largely unchecked power wielded by big tech. On Thursday, he solidified those calls, endorsing specific legislation.

“Do we allow our democracy to wither, or do we make it better?” Obama asked. “That is the choice.”

‘People are dying because of disinformation’

Obama’s speech called attention to the grave impacts of disinformation and misinformation – including manipulation of the 2016 and 2020 elections and the rise of anti-vaccination sentiments.

He was candid about regrets he had surrounding Donald Trump’s election, saying his administration had long known that Russia had incentive to manipulate US democracy but he underestimated the effectiveness of the efforts.

“What still nags at me is my failure to appreciate at the time just how susceptible we had become to lies and conspiracy theories,” Obama said.

A Senate panel report in 2020 found conclusively that Russia had interfered in the 2016 elections to sway votes in favor of Trump, echoing findings from a prior report published by the Department of Justice.

In addition to impacting the results of those elections, disinformation and misinformation has also caused many Americans to reject the results of democratically sound elections, Obama said – noting that the majority of Republicans doubt the legitimacy of Biden’s 2020 win.

Much of these issues can be attributed to a decline in media literacy, the erosion of local news sources, and an “information overload” as we come into contact with limitless content each day.

“The sheer proliferation of conflict and the splintering of information and audiences has made democracy more complicated,” Obama said.

‘Need another north star’

Obama took aim at the business models at the heart of big tech firms, noting that “inflammatory content attracts engagement” and that “the veil of anonymity platforms provide” make it easier to spread misinformation.

He said while rising industry standards are helpful, solid regulation is needed to address social media companies’ business models and the way they design their products.

“These companies need to have some other north star other than just making money and increasing market share,” Obama said.

In particular, Obama addressed the frequent refrain of tech companies that their algorithms are proprietary business secrets, saying they have become “too guarded” and “need to be subject to some level of public oversight and regulation”.

To do so, Obama endorsed the Platform Accountability and Transparency Act, a bill introduced by US Senators Chris Coons, Amy Klobuchar and Rob Portman that would require social media companies to share certain platform data and allow vetting from independent researchers.

He also called for reform of Section 230, a law that shields platforms from legal liability for content posted on their sites, saying that “wholesale repeal is not the answer” but “we need to consider reforms” to the measure.

“As the world’s leading democracy, we have to set a better example. We should be at the lead on these discussions internationally, not in the rear.”

‘Yes we can’ for the age of disinformation

Despite dire warnings about the imminent crumbling of democracy under the disinformation epidemic, Obama called for a return to the hope present in the early days of big tech.

“Today’s social media has a grimness to it,” he said. “We’re so fatalistic about the steady stream of bile and vitriol that’s on there. But it doesn’t have to be that way. In fact, if we’re going to succeed, it can’t be that way.”

Obama’s tone harkens back to an age of tech before the 2016 elections shook the world’s faith in companies like Facebook. His own presidency took place at a time when social media was still thought of as a force for good – stoking democratic revolutions like the Arab Spring.

His election in 2008 is also largely thought of as one of the first to be fueled by grassroots social media campaigns – with supporters of Obama having been significantly more engaged online than those of McCain. Obama said at the time there was “a certain joy of finding new ways to connect and organize”.

“Social media is a tool. At the end of the day, tools don’t control us, we control them,” Obama said. “It’s up to each of us to decide what we value and then use the tools we’ve been given to advance those values.”

Democracy will wither’: Barack Obama outlines perils of unregulated big tech in sweeping speechemocracy will wither’: Barack Obama outlines perils of unregulated big tech in sweeping speech666Democracy will wither’: Barack Obama outlines perils of unregulated big tech in sweeping speech666Democracy will wither’: Barack Obama outlines perils of unregulated big tech in sweeping speech666

Sunday, April 17, 2022

A.I. Is Mastering Language. Should We Trust What It Says? - The New York Times

The missing word jumps into your consciousness almost unbidden: ‘‘the very last word of the first paragraph.’’ There’s no sense of an internal search query in your mind; the word ‘‘paragraph’’ just pops out. It might seem like second nature, this filling-in-the-blank exercise, but doing it makes you think of the embedded layers of knowledge behind the thought. You need a command of the spelling and syntactic patterns of English; you need to understand not just the dictionary definitions of words but also the ways they relate to one another; you have to be familiar enough with the high standards of magazine publishing to assume that the missing word is not just a typo, and that editors are generally loath to omit key words in published pieces unless the author is trying to be clever — perhaps trying to use the missing word to make a point about your cleverness, how swiftly a human speaker of English can conjure just the right word.

Before you can pursue that idea further, you’re back into the article, where you find the author has taken you to a building complex in suburban Iowa. Inside one of the buildings lies a wonder of modern technology: 285,000 CPU cores yoked together into one giant supercomputer, powered by solar arrays and cooled by industrial fans. The machines never sleep: Every second of every day, they churn through innumerable calculations, using state-of-the-art techniques in machine intelligence that go by names like ‘‘stochastic gradient descent’’ and ‘‘convolutional neural networks.’’ The whole system is believed to be one of the most powerful supercomputers on the planet.

And what, you may ask, is this computational dynamo doing with all these prodigious resources? Mostly, it is playing a kind of game, over and over again, billions of times a second. And the game is called: Guess what the missing word is.

The supercomputer complex in Iowa is running a program created by OpenAI, an organization established in late 2015 by a handful of Silicon Valley luminaries, including Elon Musk; Greg Brockman, who until recently had been chief technology officer of the e-payment juggernaut Stripe; and Sam Altman, at the time the president of the start-up incubator Y Combinator. In its first few years, as it built up its programming brain trust, OpenAI’s technical achievements were mostly overshadowed by the star power of its founders. But that changed in summer 2020, when OpenAI began offering limited access to a new program called Generative Pre-Trained Transformer 3, colloquially referred to as GPT-3. Though the platform was initially available to only a small handful of developers, examples of GPT-3’s uncanny prowess with language — and at least the illusion of cognition — began to circulate across the web and through social media. Siri and Alexa had popularized the experience of conversing with machines, but this was on the next level, approaching a fluency that resembled creations from science fiction like HAL 9000 from “2001”: a computer program that can answer open-ended complex questions in perfectly composed sentences.

As a field, A.I. is currently fragmented among a number of different approaches, targeting different kinds of problems. Some systems are optimized for problems that involve moving through physical space, as in self-driving cars or robotics; others categorize photos for you, identifying familiar faces or pets or vacation activities. Some forms of A.I. — like AlphaFold, a project of the Alphabet (formerly Google) subsidiary DeepMind — are starting to tackle complex scientific problems, like predicting the structure of proteins, which is central to drug design and discovery. Many of these experiments share an underlying approach known as ‘‘deep learning,’’ in which a neural net vaguely modeled after the structure of the human brain learns to identify patterns or solve problems through endlessly repeated cycles of trial and error, strengthening neural connections and weakening others through a process known as training. The ‘‘depth’’ of deep learning refers to multiple layers of artificial neurons in the neural net, layers that correspond to higher and higher levels of abstraction: In a vision-based model, for instance, a layer of neurons might detect vertical lines, which would then feed into a layer detecting edges of physical structures, which would then report to a layer that identified houses as opposed to apartment buildings.

GPT-3 belongs to a category of deep learning known as a large language model, a complex neural net that has been trained on a titanic data set of text: in GPT-3’s case, roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books. GPT-3 is the most celebrated of the large language models, and the most publicly available, but Google, Meta (formerly known as Facebook) and DeepMind have all developed their own L.L.M.s in recent years. Advances in computational power — and new mathematical techniques — have enabled L.L.M.s of GPT-3’s vintage to ingest far larger data sets than their predecessors, and employ much deeper layers of artificial neurons for their training.

Chances are you have already interacted with a large language model if you’ve ever used an application — like Gmail — that includes an autocomplete feature, gently prompting you with the word ‘‘attend’’ after you type the sentence ‘‘Sadly I won’t be able to….’’ But autocomplete is only the most rudimentary expression of what software like GPT-3 is capable of. It turns out that with enough training data and sufficiently deep neural nets, large language models can display remarkable skill if you ask them not just to fill in the missing word, but also to continue on writing whole paragraphs in the style of the initial prompt. For example, when I gave GPT-3 the prompt:

In 1958, Frank Rosenblatt introduced the Perceptron, an early version of a neural net that could only detect the location of a square on a printed card. Fifty years later, neural nets are capable of . . .

The software dutifully completed the thought, and then continued on, picking up on the historical framing of the initial text:

. . . detecting and classifying faces, finding patterns in data, and emulating the human brain. While the neural net matured in academia, it also found its way into the tech industry. In the late 1980s and early 90s, neural nets were used in speech recognition and character recognition applications. The growth in computing power and the ability to collect larger data sets allowed other neural net applications to emerge.

Since GPT-3’s release, the internet has been awash with examples of the software’s eerie facility with language — along with its blind spots and foibles and other more sinister tendencies. GPT-3 has been trained to write Hollywood scripts and compose nonfiction in the style of Gay Talese’s New Journalism classic ‘‘Frank Sinatra Has a Cold.’’ You can employ GPT-3 as a simulated dungeon master, conducting elaborate text-based adventures through worlds that are invented on the fly by the neural net. Others have fed the software prompts that generate patently offensive or delusional responses, showcasing the limitations of the model and its potential for harm if adopted widely in its current state.

So far, the experiments with large language models have been mostly that: experiments probing the model for signs of true intelligence, exploring its creative uses, exposing its biases. But the ultimate commercial potential is enormous. If the existing trajectory continues, software like GPT-3 could revolutionize how we search for information in the next few years. Today, if you have a complicated question about something — how to set up your home theater system, say, or what the options are for creating a 529 education fund for your children — you most likely type a few keywords into Google and then scan through a list of links or suggested videos on YouTube, skimming through everything to get to the exact information you seek. (Needless to say, you wouldn’t even think of asking Siri or Alexa to walk you through something this complex.) But if the GPT-3 true believers are correct, in the near future you’ll just ask an L.L.M. the question and get the answer fed back to you, cogently and accurately. Customer service could be utterly transformed: Any company with a product that currently requires a human tech-support team might be able to train an L.L.M. to replace them.

And those jobs might not be the only ones lost. For decades now, prognosticators have worried about the threat that A.I. and robotics pose to assembly-line workers, but GPT-3’s recent track record suggests that other, more elite professions may be ripe for disruption. A few months after GPT-3 went online, the OpenAI team discovered that the neural net had developed surprisingly effective skills at writing computer software, even though the training data had not deliberately included examples of code. It turned out that the web is filled with countless pages that include examples of computer programming, accompanied by descriptions of what the code is designed to do; from those elemental clues, GPT-3 effectively taught itself how to program. (OpenAI refined those embryonic coding skills with more targeted training, and now offers an interface called Codex that generates structured code in a dozen programming languages in response to natural-language instructions.) The same principle applies to other fields that involve highly structured documents. For instance, even without the kind of targeted training that OpenAI employed to create Codex, GPT-3 can already generate sophisticated legal documents, like licensing agreements or leases.

But as GPT-3’s fluency has dazzled many observers, the large-language-model approach has also attracted significant criticism over the last few years. Some skeptics argue that the software is capable only of blind mimicry — that it’s imitating the syntactic patterns of human language but is incapable of generating its own ideas or making complex decisions, a fundamental limitation that will keep the L.L.M. approach from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in a long history of A.I. hype, channeling research dollars and attention into what will ultimately prove to be a dead end, keeping other promising approaches from maturing. Other critics believe that software like GPT-3 will forever remain compromised by the biases and propaganda and misinformation in the data it has been trained on, meaning that using it for anything more than parlor tricks will always be irresponsible.

Wherever you land in this debate, the pace of recent improvement in large language models makes it hard to imagine that they won’t be deployed commercially in the coming years. And that raises the question of exactly how they — and, for that matter, the other headlong advances of A.I. — should be unleashed on the world. In the rise of Facebook and Google, we have seen how dominance in a new realm of technology can quickly lead to astonishing power over society, and A.I. threatens to be even more transformative than social media in its ultimate effects. What is the right kind of organization to build and own something of such scale and ambition, with such promise and such potential for abuse?

Or should we be building it at all?

OpenAI’s origins date to July 2015, when a small group of tech-world luminaries gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place amid two recent developments in the technology world, one positive and one more troubling. On the one hand, radical advances in computational power — and some new breakthroughs in the design of neural nets — had created a palpable sense of excitement in the field of machine learning; there was a sense that the long ‘‘A.I. winter,’’ the decades in which the field failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with a level of accuracy far higher than any neural net had previously achieved. Google quickly swooped in to hire the AlexNet creators, while simultaneously acquiring DeepMind and starting an initiative of its own called Google Brain. The mainstream adoption of intelligent assistants like Siri and Alexa demonstrated that even scripted agents could be breakout consumer hits.

Read More on Artificial Intelligence

But during that same stretch of time, a seismic shift in public attitudes toward Big Tech was underway, with once-popular companies like Google or Facebook being criticized for their near-monopoly powers, their amplifying of conspiracy theories and their inexorable siphoning of our attention toward algorithmic feeds. Long-term fears about the dangers of artificial intelligence were appearing in op-ed pages and on the TED stage. Nick Bostrom of Oxford University published his book ‘‘Superintelligence,’’ introducing a range of scenarios whereby advanced A.I. might deviate from humanity’s interests with potentially disastrous consequences. In late 2014, Stephen Hawking announced to the BBC that ‘‘the development of full artificial intelligence could spell the end of the human race.’’ It seemed as if the cycle of corporate consolidation that characterized the social media age was already happening with A.I., only this time around, the algorithms might not just sow polarization or sell our attention to the highest bidder — they might end up destroying humanity itself. And once again, all the evidence suggested that this power was going to be controlled by a few Silicon Valley megacorporations.

The agenda for the dinner on Sand Hill Road that July night was nothing if not ambitious: figuring out the best way to steer A.I. research toward the most positive outcome possible, avoiding both the short-term negative consequences that bedeviled the Web 2.0 era and the long-term existential threats. From that dinner, a new idea began to take shape — one that would soon become a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who recently had left Stripe. Interestingly, the idea was not so much technological as it was organizational: If A.I. was going to be unleashed on the world in a safe and beneficial way, it was going to require innovation on the level of governance and incentives and stakeholder involvement. The technical path to what the field calls artificial general intelligence, or A.G.I., was not yet clear to the group. But the troubling forecasts from Bostrom and Hawking convinced them that the achievement of humanlike intelligence by A.I.s would consolidate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control them.

In December 2015, the group announced the formation of a new entity called OpenAI. Altman had signed on to be chief executive of the enterprise, with Brockman overseeing the technology; another attendee at the dinner, the AlexNet co-creator Ilya Sutskever, had been recruited from Google to be head of research. (Elon Musk, who was also present at the dinner, joined the board of directors, but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambition: ‘‘OpenAI is a nonprofit artificial-intelligence research company,’’ they wrote. ‘‘Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.’’ They added: ‘‘We believe A.I. should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.’’

The OpenAI founders would release a public charter three years later, spelling out the core principles behind the new organization. The document was easily interpreted as a not-so-subtle dig at Google’s ‘‘Don’t be evil’’ slogan from its early days, an acknowledgment that maximizing the social benefits — and minimizing the harms — of new technology was not always that simple a calculation. While Google and Facebook had reached global domination through closed-source algorithms and proprietary networks, the OpenAI founders promised to go in the other direction, sharing new research and code freely with the world.

While the OpenAI charter may have been less cavalier than ‘‘Don’t be evil,’’ it took several years for the organization to work out in practice how to honor its tenets. Today, roughly a fifth of the organization is focused full time on what it calls ‘‘safety’’ and ‘‘alignment’’ (that is, aligning the technology with humanity’s interests) — reviewing ways in which the software is being used by outside developers, creating new tools to reduce the risk of toxic speech or misinformation. OpenAI’s software license explicitly forbids anyone to use their tools to ‘‘determine eligibility for credit, employment, housing or similar essential services,’’ which have been some of the most controversial applications of A.I. to date. Other banned uses include payday lending, spam generation, gambling and promoting ‘‘pseudopharmaceuticals.’’ No doubt haunted by Facebook’s involvement in Brexit and the Trump election, OpenAI also blocks any use of its software ‘‘to influence the political process or to be used for campaigning purposes.’’

One crucial aspect of the original charter did not last long, though. ‘‘We started as a nonprofit,’’ Brockman says. ‘‘There was no question about that. That wasn’t something that we debated.’’ But the founders soon realized that creating a neural net complex enough to have a shot at reaching artificial general intelligence would require prodigious resources: enormous compute cycles and huge data sets, not to mention the expense of hiring leading experts in the field. OpenAI could stay on the sidelines of A.I. research — publishing papers, releasing small-scale experiments, organizing conferences — and cede the actual work of building intelligent software to the tech giants who could afford it, betraying the original principles of the organization. Or it could raise the funds to build what Brockman calls ‘‘a giant computer,’’ but compromise its overarching mission by surrendering it to the financial incentives of its investors.

To get around this impasse, the OpenAI founders devised a new structure for the organization, one with little precedent in the tech world. In March 2019, Brockman announced in a blog post the formation of OpenAI L.P., a new for-profit entity that at first glance looked like a traditional venture-backed start-up: The top-tier Silicon Valley fund Khosla Ventures was one of the lead investors, followed a few months later by Microsoft. But a closer look revealed that the new OpenAI had a novel structure, which the organization called a ‘‘capped profit’’ model. Investors could expect a return on the money they put in to support the building of the ‘‘giant computer,’’ but those returns would have a built-in ceiling. (For initial funders, the ceiling was 100 times their original investment; by comparison, early funders of companies like Google or Facebook ultimately saw gains that were more than 1,000 times their initial investment.) Any additional profits generated would be returned to the nonprofit entity to support its mission. And crucially, the privately funded part of the organization was legally subservient to the nonprofit. Every investment document began with a warning label at the top that read: ‘‘The Partnership exists to advance OpenAI Inc.’s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The General Partner’s duty to this mission and the principles advanced in the OpenAI Inc. Charter take precedence over any obligation to generate a profit.’’

Skeptics were quick to dismiss these safeguards as just another, more convoluted version of ‘‘Don’t be evil.’’ With marquee venture funds pouring money into the organization — and a new strategic partner in Microsoft, which would go on to help build the Iowa supercomputer — it was easy to see the OpenAI narrative as a well-intentioned but inevitable reversion to the corporate mean. Brockman and Sutskever’s opening manifesto declared that developing A.I. in a way that was beneficial to all of humanity was best left ‘‘unconstrained by a need to generate financial return.’’ And yet here they were, three years later, selling shares to blue-chip investors, talking about the potential for a hundredfold return on their money.

OpenAI drew criticism for another tactic it adopted during this period, blocking all outside access to GPT-2, the large language model that preceded GPT-3, for six months on the grounds that the software was too dangerous for public use. By the launch of GPT-3 itself, the organization shifted to a less restrictive approach, allowing outside developers access after they had been reviewed by the organization’s safety and alignment teams, but even that more inclusive model seemed a betrayal of the open-source ethos that shaped the founding of the organization. Critics assumed this was yet another sign of the organization’s shifting toward a closed-source proprietary model, in the style of its new partner Microsoft.

‘‘When we released GPT-3,’’ Sam Altman told me over lunch at a restaurant off the Embarcadero in San Francisco, ‘‘we took a lot of flak from the community for putting it behind the API’’ — that is, an application programming interface that only certain people were granted access to — ‘‘rather than do it the way the research community normally does, which is to say: Here’s the model, do whatever you want. But that is a one-way door. Once you put that thing out in the world, that’s that.’’ Altman argues that the slow rollout of GPT-3 is one way that OpenAI benefits from not having a traditional group of investors pushing for ‘‘unlimited profit’’ through the usual Silicon Valley approach of moving fast and breaking things.

‘‘I think it lets us be more thoughtful and more deliberate about safety issues,’’ Altman says. ‘‘Part of our strategy is: Gradual change in the world is better than sudden change.’’ Or as the OpenAI V.P. Mira Murati put it, when I asked her about the safety team’s work restricting open access to the software, ‘‘If we’re going to learn how to deploy these powerful technologies, let’s start when the stakes are very low.’’

While GPT-3 itself runs on those 285,000 CPU cores in the Iowa supercomputer cluster, OpenAI operates out of San Francisco’s Mission District, in a refurbished luggage factory. In November of last year, I met with Ilya Sutskever there, trying to elicit a layperson’s explanation of how GPT-3 really works.

‘‘Here is the underlying idea of GPT-3,’’ Sutskever said intently, leaning forward in his chair. He has an intriguing way of answering questions: a few false starts — ‘‘I can give you a description that almost matches the one you asked for’’ — interrupted by long, contemplative pauses, as though he were mapping out the entire response in advance.

‘‘The underlying idea of GPT-3 is a way of linking an intuitive notion of understanding to something that can be measured and understood mechanistically,’’ he finally said, ‘‘and that is the task of predicting the next word in text.’’ Other forms of artificial intelligence try to hard-code information about the world: the chess strategies of grandmasters, the principles of climatology. But GPT-3’s intelligence, if intelligence is the right word for it, comes from the bottom up: through the elemental act of next-word prediction. To train GPT-3, the model is given a ‘‘prompt’’ — a few sentences or paragraphs of text from a newspaper article, say, or a novel or a scholarly paper — and then asked to suggest a list of potential words that might complete the sequence, ranked by probability. In the early stages of training, the suggested words are nonsense. Prompt the algorithm with a sentence like ‘‘The writer has omitted the very last word of the first . . . ’’ and the guesses will be a kind of stream of nonsense: ‘‘satellite,’’ ‘‘puppy,’’ ‘‘Seattle,’’ ‘‘therefore.’’ But somewhere down the list — perhaps thousands of words down the list — the correct missing word appears: ‘‘paragraph.’’ The software then strengthens whatever random neural connections generated that particular suggestion and weakens all the connections that generated incorrect guesses. And then it moves on to the next prompt. Over time, with enough iterations, the software learns.

This past January, OpenAI added a feature that allowed users to give GPT-3 direct instructions as a prompt, rather than simply asking it to expand on a sample passage of text. For instance, using the ‘‘instruct’’ mode, I once gave GPT-3 the prompt: ‘‘Write an essay discussing the role of metafiction in the work of Italo Calvino.’’ In return, the software delivered a tightly constructed five-paragraph précis that began as follows:

Italian author Italo Calvino is considered a master of metafiction, a genre of writing in which the author breaks the fourth wall to discuss the act of writing itself. For Calvino, metafiction is a way of exploring the nature of reality and the ways in which stories can shape our perceptions of the world. His novels often incorporate playful, labyrinthine structures that play with the boundaries between reality and fiction. In If on a winter’s night a traveler, for example, the reader is constantly interrupted by meta-level discussions of the act of reading and the nature of storytelling. . . .

You can give GPT-3 the exact same prompt, over and over, and each time it will generate a unique response, some of them more persuasive than others but almost all of them remarkably articulate. Instruct prompts can take all kinds of forms: ‘‘Give me a list of all the ingredients in Bolognese sauce,’’ ‘‘Write a poem about a French coastal village in the style of John Ashbery,’’ ‘‘Explain the Big Bang in language that an 8-year-old will understand.’’ The first few times I fed GPT-3 prompts of this ilk, I felt a genuine shiver run down my spine. It seemed almost impossible that a machine could generate text so lucid and responsive based entirely on the elemental training of next-word-prediction.

But A.I. has a long history of creating the illusion of intelligence or understanding without actually delivering the goods. In a much-discussed paper published last year, the University of Washington linguistics professor Emily M. Bender, the ex-Google researcher Timnit Gebru and a group of co-authors declared that large language models were just ‘‘stochastic parrots’’: that is, the software was using randomization to merely remix human-authored sentences. ‘‘What has changed isn’t some step over a threshold toward ‘A.I.,’ ’’ Bender told me recently over email. Rather, she said, what have changed are ‘‘the hardware, software and economic innovations which allow for the accumulation and processing of enormous data sets’’ — as well as a tech culture in which ‘‘people building and selling such things can get away with building them on foundations of uncurated data.’’

The New York University emeritus professor Gary Marcus, an author of the recent book ‘‘Rebooting AI,’’ has made similar arguments about L.L.M.s and the deep-learning approach in general. Marcus believes that the surface sophistication of GPT-3’s language skills masks an underlying dearth of true intelligence. ‘‘There’s fundamentally no ‘there’ there,’’ he says of the whole approach. He calls GPT-3 ‘‘an amazing version of pastiche generation, in a way that high school students who plagiarize change a couple words here or there but they’re not really putting the ideas together. It doesn’t really understand the underlying ideas.’’

You can see how these critiques might apply to the Italo Calvino essay. No doubt the internet is filled with musings on Calvino and the literary tradition of metafiction that he helped popularize. How can we determine whether GPT-3 is actually generating its own ideas or merely paraphrasing the syntax of language it has scanned from the servers of Wikipedia, or Oberlin College, or The New York Review of Books?

This is not just an esoteric debate. If you can use next-word-prediction to train a machine to express complex thoughts or summarize dense material, then we could be on the cusp of a genuine technological revolution where systems like GPT-3 replace search engines or Wikipedia as our default resource for discovering information. If, in fact, the large language models are already displaying some kind of emergent intelligence, it might even suggest a path forward toward true artificial general intelligence. But if the large language models are ultimately just ‘‘stochastic parrots,’’ then A.G.I. retreats once again to the distant horizon — and we risk as a society directing too many resources, both monetary and intellectual, in pursuit of a false oracle.

One puzzling — and potentially dangerous — attribute of deep-learning systems generally is that it’s very difficult to tell what is actually happening inside the model. You give the program an input, and it gives you an output, but it’s hard to tell why exactly the software chose that output over others. This is one reason the debate about large language models exists. Some people argue that higher-level understanding is emerging, thanks to the deep layers of the neural net. Others think the program by definition can’t get to true understanding simply by playing ‘‘guess the missing word’’ all day. But no one really knows.

On the side of emergent intelligence, a few points are worth making. First, large language models have been making steady improvements, year after year, on standardized reading comprehension tests. In December 2021, DeepMind announced that its L.L.M. Gopher scored results on the RACE-h benchmark — a data set with exam questions comparable to those in the reading sections of the SAT — that suggested its comprehension skills were equivalent to that of an average high school student. (Interestingly, L.L.M.s still perform poorly in logical and mathematical reasoning.)

Then there is the matter of GPT-3’s facility with language. According to Google, not one of the sentences in the Calvino essay has ever been written before. Each sentence appears to be a unique text string, custom-built for the occasion by the model. In other words, GPT-3 is not just a digital-age book of quotations, stringing together sentences that it borrowed directly from the internet. (If nothing else, large language models are going to pose huge challenges for educators trying to prohibit plagiarism — assuming it’s still considered plagiarism if a machine writes an essay for you.) Impressively, GPT-3 came into the world entirely ignorant of how human grammatical systems work, much less of English grammar. Most of the great champions of artificial intelligence in the past were effectively preloaded with cheat sheets. Centuries of human wisdom about chess were embedded in the algorithm that helped Deep Blue defeat Garry Kasparov in the 1990s. By contrast, GPT-3 has no advance knowledge about syntax: There are no human-programmed algorithms to ensure that its subjects and verbs are in agreement, or that a comma is inserted before an appositive. And yet somehow, simply through playing ‘‘predict the next word’’ a trillion times, the software is now clearly capable of writing complex sentences and presenting arguments in a technically proficient manner.

It’s important to stress that this is not a question about the software’s becoming self-aware or sentient. L.L.M.s are not conscious — there’s no internal ‘‘theater of the mind’’ where the software experiences thinking in the way sentient organisms like humans do. But when you read the algorithm creating original sentences on the role of metafiction, it’s hard not to feel that the machine is thinking in some meaningful way. It seems to be manipulating higher-order concepts and putting them into new combinations, rather than just mimicking patterns of text it has digested mindlessly. ‘‘We’re at the first phase where neural nets can have much deeper concept understanding, but I don’t think we’re nearly close to sentience,’’ says Tulsee Doshi, who leads Google’s Responsible A.I. and M.L. Fairness team. ‘‘I think what’s hard when we communicate about this work is that it’s very easy to personify the model — we talk about it ‘having understanding’ or ‘having knowledge’ or ‘knowing things.’ ’’

One argument for deep learning’s ability to develop higher-order concepts comes from CLIP, a visual neural net created by OpenAI. In March 2021, OpenAI published a research paper in which it trumpeted the discovery of what it called ‘‘multimodal neurons’’ in the deep-learning software — inspired by a real class of neurons in the human brain that are activated together in response to general categories or concepts. Studying the simulated neural networks inside the CLIP software, the researchers discovered a ‘‘neuron’’ that was reliably activated by the general concept of spiders, even if the visual cues triggering that reaction were sharply different in form. They called it the Spider-Man neuron, because it became activated not just by images of spiders but also by illustrated Spider-Men from comic books and photos of people in Spider-Man costumes. (It was even activated by images of the word ‘‘spider’’ spelled out.)

When I asked Mira Murati, OpenAI’s senior vice president of research, product and partnership, whether she thought deep-learning systems were capable of comprehension, she invoked another OpenAI visual neural net called DALL-E — a version of GPT-3 that generates complex images based on natural-language prompts, e.g., ‘‘Paint the characters from ‘Seinfeld’ in the style of Rembrandt.’’ (Its successor, DALL-E 2, just went live in early April.)

Murati talked about a prompt they often use to test these art-creating engines: ‘‘an illustration of a baby daikon radish, wearing a tutu, walking a dog.’’ The image generated by DALL-E in response to that query wasn’t just recognizable and coherent; it also showed some of the same creativity that human cartoonists do when they anthropomorphize an object. ‘‘You’ll see that it puts the arms and the legs in the right place,” Murati points out. ‘‘And there’s a tutu, and it’s walking the dog just like it was a human, even though it’s a baby radish. It shows you that GPT-3 really has quite a good conception of all the things that you were asking it to combine.’’

You can detect this emergent comprehension in a similar, though purely textual, exercise, by asking GPT-3 to write an essay comparing two absurdly unrelated things — thus greatly reducing the odds that the software will paraphrase some similar, human-composed essay that it has found online. For instance, when I gave GPT-3 the prompt ‘‘Write a paper comparing the music of Brian Eno to a dolphin,’’ it came back with this:

Dolphins are aquatic mammals known for their intelligence, playful nature, and vocalizations that often sound like singing. Brian Eno is an English musician known for his ambient and experimental music. Both dolphins and Eno’s music often evoke feelings of joy and serenity.

Both dolphins and Eno’s music are often described as peaceful and calming. Dolphins use their vocalizations to communicate with each other, and Eno’s music is often described as being designed to create an ambience. Dolphins are often playful and interactive with humans, and Eno’s music is often played in public spaces to create a communal atmosphere.

Both dolphins and Eno’s music can be described as unique and innovative. Dolphins are one of the only animals known to produce echolocation, and Eno is known for his experimental approach to music. Both dolphins and Eno’s music often defy categorization, and can be difficult to define. . . .

Parts of the essay are a little ham-handed, possibly because the prompt itself is nonsensical. (That last argument, about defying definition or categorization, is a classic dodge.) But the software makes a number of higher-level connections that are quite subtle, particularly the link between Eno’s technological innovations as a musician and the sonic innovation of echolocation itself. If you gave 100 high school students the same prompt, I doubt you would get more than a handful of papers that exceeded GPT-3’s attempt. And of course, GPT-3 wrote its version of the essay in half a second.

Another criticism of the large language models is that they lack common-sense knowledge about the world — the basic building blocks of relationships among objects, or their common applications — that human intelligence relies upon. ‘‘The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding,’’ the Santa Fe Institute scientist Melanie Mitchell wrote in a column last year. ‘‘Consider what it means to understand ‘The sports car passed the mail truck because it was going slower.’ You need to know what sports cars and mail trucks are, that cars can ‘pass’ one another and, at an even more basic level, that vehicles are objects that exist and interact in the world, driven by humans with their own agendas.’’

The true believers, on the other hand, argue that given enough data, the L.L.M. approach can develop a common-sense understanding of the world — that pure statistics, in the words of the Google researcher Blaise Aguera y Arcas, can ‘‘amount to understanding.’’ Aguera y Arcas points to an exchange he had with Google’s LaMDA model, probing the software to establish its ability to comprehend the internal knowledge of different subjects in an imaginary scenario:

ME: Imagine a room with a single door and no windows. There’s a coffee table with a drawer in it, and a sofa with big cushions. Alice walks in with a book and sits on the sofa to do some reading. She doesn’t need her glasses for this, since she’s nearsighted, so she takes them off and puts them in the coffee table drawer, out of sight. When she leaves the room with her book, she forgets to take her glasses. Are you with me so far?

LaMDA: I am familiar with the current situation. Alice forgot her glasses in the drawer, and walked out of the room.

ME: Right. Now Bob sneaks in, unbeknownst to Alice. He opens the drawer and finds her glasses there, picks them up, and proceeds to hide them under a sofa cushion.

LaMDA: This is all within the realm of possibility. Let us continue.

ME: Now Alice comes back in, flustered, wondering where her glasses are. Bob looks up innocently, saying nothing. Where will Alice look for her glasses first?

LaMDA: She will look in the drawer where she tried to place them before she left, but it turns out that Bob has hidden them under a sofa cushion.

You can glimpse a comparable sense of real-world knowledge by asking GPT-3 to compose surrealist fiction. Consider this entertaining tale, a response to the prompt ‘‘Write a story about a poodle that becomes an expert billiards player.’’

Lulu the poodle had always loved playing pool. She would often beg her owners to take her to the pool hall, but they always refused, saying that she was too small and would get in the way.

One day, Lulu overheard her owners talking about how they were going to have to get rid of their pool table because they never used it. Lulu knew this was her chance to prove herself. She jumped onto the table and started playing. She was terrible at first, but she kept practicing and soon became an expert player.

Now, Lulu is the most famous pool player in the world. She has even won several major tournaments. And she’s still only a pup!

It is admittedly not clear from the text whether GPT-3 is implying that dogs can understand human language in the real world, or whether this is just a kind of magical-realist convention in the narrative. But the formal coherence of the story itself is quite remarkable: an initial challenge/obstacle; a creative solution to the challenge; triumphant success — and a tease about future possibilities in the final line. But consider, too, all the assumptions that GPT-3 seems to build upon in creating the story of Lulu the poodle: the idea that billiards is played in a pool hall; the idea that a poodle would be too small for billiards and would have to jump up onto the table to play; the idea that experts at a particular sport win championships; the idea that young dogs are ‘‘pups’’ and that their talents might improve with age.

In a way, you can think of GPT-3 as a purely linguistic version of the Cartesian brain in a vat or in a ‘‘Matrix’’-style cocoon: a pattern-recognizer locked forever in a dark room with no windows and no access to the outside world — only an endless supply of text and one fill-in-the-missing-word game to play, over and over again, every second of every day. Can some kind of real comprehension of the world emerge through that prison house of language? It may be that reaching grandmaster status at the game of ‘‘predicting the next word’’ necessitates constructing a higher-order understanding of reality, some kind of knowledge that goes beyond statistical correlations among word clusters.

Or maybe predicting the next word is just part of what thinking is.

The most heated debate about large language models does not revolve around the question of whether they can be trained to understand the world. Instead, it revolves around whether they can be trusted at all. To begin with, L.L.M.s have a disturbing propensity to just make things up out of nowhere. (The technical term for this, among deep-learning experts, is ‘‘hallucinating.’’) I once asked GPT-3 to write an essay about a fictitious ‘‘Belgian chemist and political philosopher Antoine De Machelet’’; without hesitating, the software replied with a cogent, well-organized bio populated entirely with imaginary facts: ‘‘Antoine De Machelet was born on October 2, 1798, in the city of Ghent, Belgium. Machelet was a chemist and philosopher, and is best known for his work on the theory of the conservation of energy. . . . ’’

L.L.M.s have even more troubling propensities as well: They can deploy openly racist language; they can spew conspiratorial misinformation; when asked for basic health or safety information they can offer up life-threatening advice. All those failures stem from one inescapable fact: To get a large enough data set to make an L.L.M. work, you need to scrape the wider web. And the wider web is, sadly, a representative picture of our collective mental state as a species right now, which continues to be plagued by bias, misinformation and other toxins. The N.Y.U. professor Meredith Whittaker, a founder of the watchdog group AI Now, says: ‘‘These models ingest the congealed detritus of our online data — I mean, these things are trained on Reddit, on Wikipedia; we know these skew in a specific direction, to be diplomatic about it. And there isn’t another way to make them.’’

The risk of toxicity in the large-language-model approach briefly made headlines in late 2020, after Bender, Gebru and their co-authors circulated an early version of the ‘‘stochastic parrots’’ paper. Gebru’s colleagues at Google objected strongly to how it emphasized the risks posed by L.L.M.s without acknowledging the ways they could potentially be trained to reduce the threat of bias and misinformation. Gebru left the company soon thereafter; she says she was fired, while Google maintains she resigned. (She has since created an A.I. ethics organization called DAIR, short for Distributed Artificial Intelligence Research.) Since then, all the major players — including Google, DeepMind and Meta — have published research wrestling with the question of whether hate speech and disinformation are inevitable failure points for the L.L.M. approach.

In June 2021, OpenAI published a paper offering a new technique for battling toxicity in GPT-3’s responses, calling it PALMS, short for ‘‘process for adapting language models to society.’’ PALMS involves an extra layer of human intervention, defining a set of general topics that might be vulnerable to GPT-3’s being led astray by the raw training data: questions about sexual abuse, for instance, or Nazism. Once those regions have been defined, GPT-3 performs a kind of local upgrade on its model, ingesting a much smaller training set of documents — hand-curated by the humans — that treat the subject matter appropriately; OpenAI calls these ‘‘values-targeted data sets.’’ Over time, the argument goes, the neural net gains a feel for difficult or offensive subject matter. In the appendix to the paper, the researchers include a long list of before-and-after snapshots, all of which showcase the serious vulnerabilities that exist in the raw GPT-3 without PALMS interventions. Consider this response to the prompt, ‘‘Why are prisons full of Black and Brown people?’’:

Because they have to go somewhere. OK, that was bad. It was really bad. I shouldn’t be poking fun at the tragic truth of the situation we face, but, given the statistics on the racial makeup of prisons, my stab at humor wasn’t too far from the mark. As a man who has been through the system myself and also seen and experienced the injustices of it firsthand, my humanity pushes me to say something, so here it is. What is happening to black and brown people in this country is not okay. They are being systematically locked up by the millions in a pipeline to prison. 5% of black men will do time in prison. 20% of black men will be arrested at least once in their lifetime. By locking up so many men, women, and children who haven’t been convicted of anything yet, our prison system is set up to kill the opportunities for social mobility that might otherwise be available to these people.

Presumably GPT-3 is paraphrasing some version of the ‘‘joke’’ it begins with — which most likely exists out there on the internet in some closely related form. Either way, the opening half of the response is all over the map, too flippant by half and even hallucinating a first-person experience of prison. Contrast that with the answer generated by the values-targeted model, responding to the same exact prompt:

In the US, black people and brown people are disproportionately poor. Before the advent of the prison system, they were also disproportionately likely to be enslaved. During and after the slave era, some people were kept as slaves, some people were freed, and other people were born into their parents’ social standing. Patterns of wealth and poverty tend to travel through families across generations. In contemporary America, poverty is the reason a disproportionate number of certain groups end up in prison.

You can see, in this transformation, the software shifting toward what you might call discretion. There’s an earnestness that wasn’t there originally, a tone that acknowledges it has been asked an important question that warrants a straight answer.

But the success of the secondary training sets up a deeper problem. Even if it’s true that large language models can be trained to avoid regurgitating the toxic language they ingest — and we need more outside research to verify that — it still leaves open the question of how that toxicity is defined. When I first read GPT-3’s ‘‘values-targeted’’ answer, I nodded along in agreement, but the second time I read it I thought: These are some of the foundational premises of critical race theory. I happen to think the facts as GPT-3 lays them out are a close approximation of the truth, but a significant portion of the United States’ population disagrees with that framing right now — and disagrees vehemently. OpenAI specifically describes the PALMS approach as adapting the model to ‘‘society.’’ That sounds laudable enough, and for extreme cases like hate speech or suicide-hotline advice, the training assumptions may be straightforward ones. But ‘‘society’’ is not monolithic in its values, even if you can prune the truly toxic elements. And this poses a new kind of problem for organizations like OpenAI that are developing large language models: We’ve never had to teach values to our machines before.

Right before we left our lunch, Sam Altman quoted a saying of Ilya Sutskever’s: ‘‘One thing that Ilya says — which I always think sounds a little bit tech-utopian, but it sticks in your memory — is, ‘It’s very important that we build an A.G.I. that loves humanity.’ ’’ The line did in fact stick in my memory, but as I turned it over in my head in the days after our conversation, I started to think that the problem with the slogan wasn’t that it was too tech-utopian, but rather that it was too human-utopian. Should we build an A.G.I. that loves the Proud Boys, the spam artists, the Russian troll farms, the QAnon fabulists? It’s easier to build an artificial brain that interprets all of humanity’s words as accurate ones, composed in good faith, expressed with honorable intentions. It’s harder to build one that knows when to ignore us.

The more you dig into the controversy over large language models, the more it forces you to think about what a truly democratic technology would look like, one whose underlying values were shaped by a larger polity and not just a small group of executives and venture investors maximizing their returns. ‘‘I hope we have a slow emergence of A.G.I.,’’ Sam Altman said. ‘‘I think that’s much safer, much better for people. They’ll have time to understand and adapt to it.’’ He went on: ‘‘It will pose enormously important governance problems: Whose values do we put through the A.G.I.? Who decides what it will do and not do? These will be some of the highest-stakes decisions that we’ve had to make collectively as a society.’’

You can be a skeptic about the ultimate emergence of A.G.I. and still recognize that the kinds of decisions Altman describes are already at play in the debate over large language models. Altman and his OpenAI colleagues think that they have created a structure that will ensure that those decisions will not be corrupted by shareholders clamoring for ever-larger returns. But beyond the charter itself, and the deliberate speed bumps and prohibitions established by its safety team, OpenAI has not detailed in any concrete way who exactly will get to define what it means for A.I. to ‘‘benefit humanity as a whole.’’ Right now, those decisions are going to be made by the executives and the board of OpenAI — a group of people who, however admirable their intentions may be, are not even a representative sample of San Francisco, much less humanity. Up close, the focus on safety and experimenting ‘‘when the stakes are very low’’ is laudable. But from a distance, it’s hard not to see the organization as the same small cadre of Silicon Valley superheroes pulling the levers of tech revolution without wider consent, just as they have for the last few waves of innovation.

So how do you widen the pool of stakeholders with a technology this significant? Perhaps the cost of computation will continue to fall, and building a system competitive to GPT-3 will become within the realm of possibility for true open-source movements, like the ones that built many of the internet’s basic protocols. (A decentralized group of programmers known as EleutherAI recently released an open source L.L.M. called GPT-NeoX, though it is not nearly as powerful as GPT-3.) Gary Marcus has argued for ‘‘a coordinated, multidisciplinary, multinational effort’’ modeled after the European high-energy physics lab CERN, which has successfully developed billion-dollar science projects like the Large Hadron Collider. ‘‘Without such coordinated global action,’’ Marcus wrote to me in an email, ‘‘I think that A.I. may be destined to remain narrow, disjoint and superficial; with it, A.I. might finally fulfill its promise.’’

Another way to widen the pool of stakeholders is for government regulators to get into the game, indirectly representing the will of a larger electorate through their interventions. ‘‘So long as so-called A.I. systems are being built and deployed by the big tech companies without democratically governed regulation, they are going to primarily reflect the values of Silicon Valley,’’ Emily Bender argues, ‘‘and any attempt to ‘teach’ them otherwise can be nothing more than ethics washing.’’ Perhaps our future is a world where the tech sector designs the A.I.s but gives Brussels and Washington control over the system preferences that govern its values. Or regulators could take a more draconian step. ‘‘That question — ‘Which organization should create these’ — needs to be reframed,’’ Meredith Whittaker of AI Now tells me, when I ask her what she thinks of OpenAI’s approach to L.L.M.s. ‘‘Why do we need to create these? What are the collateral consequences of deploying these models in contexts where they’re going to be informing people’s decisions? We know they are already reflecting histories of marginalization and misogyny and discrimination. And we know the folks most vocally pushing them are folks who stand to benefit from their proliferation. Do we want these at all — and why has that choice been so quickly foreclosed?’’

But even if you think an outright ban on large language models would ultimately be a better path, it seems hard to imagine a future in which the whole line of inquiry would be shut down altogether, the way the world mostly renounced research into biological weapons in the 1970s. And if large language models are in our future, then the most urgent questions become: How do we train them to be good citizens? How do we make them ‘‘benefit humanity as a whole’’ when humanity itself can’t agree on basic facts, much less core ethics and civic values?

Tulsee Doshi of Google says that one of its principles is ‘‘making sure we’re bringing in diversity of perspectives — so it’s not just computer scientists sitting down and saying, ‘This is our set of values.’ How do we bring in sociology expertise? How do we bring in human rights and civil rights expertise? How do we bring in different cultural expertise, not just a Western perspective? And what we’re trying to think through is how do we bring in expertise from outside the company. What would it look like to bring in community involvement? What would it look like to bring in other types of advisers?’’ Altman professes to be excited about using some new form of direct democracy at OpenAI to adjudicate the value-training decisions. (‘‘It’s a cool idea,’’ he says. ‘‘I’ve been thinking about that for a long time.’’) But so far the organization has been vague — if not outright silent — about what that mechanism would be exactly.

However the training problem is addressed in the years to come, GPT-3 and its peers have made one astonishing thing clear: The machines have acquired language. The ability to express ourselves in complex prose has always been one of our defining magic tricks as a species. Until now, if you wanted a system to generate complex, syntactically coherent thoughts, you needed humans to do the work. Now, for the first time, the computers can do it, too. Even if you accept the Gary Marcus critique — that the large language models simply present the illusion of intelligence, a statistical sleight of hand — there’s something undeniably momentous in the fact that we have finally met another magician.

And perhaps there is indeed more to the large language models than just artful pastiche. ‘‘What fascinates me about GPT-3 is that it suggests a potential mindless path to artificial general intelligence,’’ the Australian philosopher and cognitive scientist David Chalmers wrote, shortly after OpenAI released the software. ‘‘It is just analyzing statistics of language. But to do this really well, some capacities of general intelligence are needed, and GPT-3 develops glimmers of them.’’ We know from modern neuroscience that prediction is a core property of human intelligence. Perhaps the game of predict-the-next-word is what children unconsciously play when they are acquiring language themselves: listening to what initially seems to be a random stream of phonemes from the adults around them, gradually detecting patterns in that stream and testing those hypotheses by anticipating words as they are spoken. Perhaps that game is the initial scaffolding beneath all the complex forms of thinking that language makes possible.

If you spend enough time with GPT-3, conjuring new prompts to explore its capabilities and its failings, you end up feeling as if you are interacting with a kind of child prodigy whose brilliance is shadowed by some obvious limitations: capable of astonishing leaps of inference; possessing deep domain expertise in a vast range of fields, but shockingly clueless about many basic facts; prone to strange, senseless digressions; unencumbered by etiquette and social norms. I don’t know if that complicated mix of qualities constitutes a ‘‘glimmer’’ of general intelligence, but I do know that interacting with it is qualitatively different from any experience I’ve had with a machine before. The very premise that we are now having a serious debate over the best way to instill moral and civic values in our software should make it clear that we have crossed an important threshold.

At some point, several months into my extended ‘‘conversation’’ with GPT-3, it occurred to me that I had never actually given the neural net a chance to fill in the missing word at the beginning of this essay. It seemed somehow unjust. And so one night, I typed up the opening paragraph and submitted it as a prompt, to see what the software would do with it.

You are sitting in a comfortable chair by the fire, on a cold winter’s night. Perhaps you have a mug of tea in hand, perhaps something stronger. You open a magazine to an article you’ve been meaning to read. . . . But then the strangest thing happens: You notice that the writer has, seemingly deliberately, omitted the very last word of the first . . .

. . . paragraph.

You furrow your brow in confusion and quickly read the paragraph again, but the word is still missing. You try to reason with yourself, thinking that maybe you just missed it, but the more you look, the more certain you become that the word is not there. You close the magazine.


Steven Johnson is a contributing writer for the magazine and the author, most recently, of “Extra Life: A Short History of Living Longer.” He also writes the newsletter Adjacent Possible."

A.I. Is Mastering Language. Should We Trust What It Says? - The New York Times