
Saturday, May 06, 2023

Roborock S8+ vs S8 Pro Ultra - Which robot vacuum should you choose?

All Aventon Models Reviewed: Which One is Right for You?

When To See May's Full Flower Moon Over Georgia | Across Georgia, GA Patch

When To See May's Full Flower Moon Over Georgia

"See what time the full flower moon's peak illumination will be Friday.

May’s full flower moon reaches peak illumination at 1:36 a.m. Eastern Time Friday. (Shutterstock)

GEORGIA — May’s full flower moon will be shining brightly over the next few days, but that doesn’t mean Georgia shooting star chasers should skip a chance to see the predicted Eta Aquariids meteor shower outburst.

NASA meteor expert Bill Cooke says the Eta Aquariids are so bright and fast that they’re a good match against the full flower moon, according to Space.com. 

The moon reaches peak illumination at 1:36 a.m. Eastern Time Friday, but will be below the horizon then, so make sure to take another peek after sunset through the weekend.

The National Weather Service calls for cloudy skies with a chance for thunderstorms through the weekend.

One thing we won’t see over the United States is a full flower lunar eclipse — which happens when Earth passes between the sun and the moon and casts part of its shadow on the moon. It will be visible in Africa, Asia, Australia and large parts of Europe, but not in the United States.

May’s full moon is called the flower moon because this is the time of year when flowers pop out of the ground and start blooming. Among Native Americans, the full moons of each month were named to correspond with seasons and activities taking place at the time.

Other names for the May full moon include the budding moon, leaf budding moon, planting moon, egg laying moon, frog moon and moon of the shedding ponies, according to The Old Farmer’s Almanac."


When To See May's Full Flower Moon Over Georgia | Across Georgia, GA Patch

Friday, May 05, 2023

Review: Roborock S7: robot vacuum and mop in one

The Best Robot Vacuum/Mop You Can Buy in 2023 (Roborock S8)

Why Roger Lee Started Layoffs.fyi - The New York Times

The Bearer of Bad News

Roger Lee has cataloged hundreds of thousands of tech job cuts on his site Layoffs.fyi. He still believes the industry will “100 percent” bounce back.

Roger Lee, 36, is the creator of Layoffs.fyi, a website that tracks tech layoffs. The site has cataloged nearly 450,000 cuts in a public spreadsheet. Jim Wilson/The New York Times

A short list of moments in the day when Roger Lee is thinking about layoffs: while waiting for someone to show up to a Zoom call. After his two young children have gone to sleep. At 5 a.m., before his first meeting of the day.

Since starting Layoffs.fyi as a side project at the start of the pandemic, he has cataloged nearly 450,000 tech layoffs in a public spreadsheet, updating the list whenever he can find a few minutes.

Though Mr. Lee, 36, reads bad news constantly, he remains a stalwart optimist about tech. He recognizes the pain that layoffs cause, but he also believes the industry will “100 percent” bounce back.

And Mr. Lee believes that talking openly about layoffs in the industry he loves is healthy. “The reduced stigma has the potential to be really positive,” he said. If people are speaking openly about layoffs, he reasons, workers can find new jobs efficiently.

Layoffs.fyi is both a symptom and a cause of a cultural shift toward transparency about layoffs in tech. Though Mr. Lee would never claim that his site is the only force driving this trend, he does think that it has helped workers put their own layoffs in context — and has helped the public understand the downturn. After tech companies fought to scoop up top talent during the pandemic, rapidly rising interest rates pushed companies to start making drastic cuts this year and last.

“Having this website engenders more transparency,” said Nick Bloom, a professor of economics at Stanford. As so many tech workers are laid off in Silicon Valley, he added, “the stigma has almost totally evaporated.”

Over the past three years, Mr. Lee’s site has become a meaningful resource. Recruiters scour the listings for talent after big layoffs, and workers post their information when they lose their jobs.

Tim Sackett, who runs an I.T. and engineering staffing firm, said that looking at Layoffs.fyi saves him “a tremendous amount of time,” because it points him to workers who are actively looking for jobs.

Media organizations, including The New York Times, often cite Layoffs.fyi as a source for the latest tech layoff figures. While the Bureau of Labor Statistics shares data on layoffs in various sectors, it does not track them in real time at venture-capital-backed start-ups and tech companies. So Mr. Lee helps fill the gap.

“If you have fantastic government data on the state of tech,” Mr. Bloom said, you don’t need a site like Mr. Lee’s. “But if that data does not exist, Layoffs.fyi becomes invaluable.”

“There was just a complete data vacuum,” at the start of the pandemic, he added. “It’s impressive he was so early.”

Mr. Lee didn’t mean for any of this to happen. “It does feel weird,” he said, “to be the bearer of bad news.” But he has kept going. He said his site gets at least a million views per month — and more than that in busy periods with a lot of layoffs.

Mr. Lee had been following layoffs in an informal way since 2015, as he looked for talent to hire at his previous start-up. When the pandemic hit, he was on parental leave. He thought maybe others would find his process useful. “My original motivation was to be helpful,” he said. At the time, he added, he was struck by the volume of layoffs occurring. “I thought, wow, seven in one week, that’s so many.”

Soon, the layoffs — and the demands of updating the tracker — accelerated. By April 2020, he said, “I was spending all of my baby’s nap times updating the site.”

Mr. Lee follows an informal set of self-imposed rules about what companies to post about, and when. He makes a judgment call about whether a company counts as “tech” (BuzzFeed: yes; Disney: no) and sometimes sits on information until it has been reported in the news media.

“I don’t want to be a place where employees find out,” Mr. Lee said. “I get these ‘scoops’ but don’t try to break news.”

“I don’t pretend these are hard and fast rules,” he added. But, so far, he says they allow him to provide information that is useful to people without accidentally causing panic.

Mr. Lee runs Layoffs.fyi as a hobby, and he spends money on it. In addition to his time, he estimates that he spends about $200 each month on the cost of its servers. He said he has declined to run ads, though he has been approached.

But the site’s popularity did help give him the idea for a new company: Comprehensive.io, which tracks compensation in tech job postings, on a public list.

He considers Comprehensive.io to be the inverse of Layoffs.fyi — it focuses on opportunities, not cuts. “I likely wouldn’t have come up with the latter without the former,” he said.

“Part of what gave him the intuition that people would find the pay transparency data interesting is how interesting people found the layoffs data,” said Teddy Sherrill, Mr. Lee’s longtime friend and a founder of Comprehensive.io, which now has a staff of about a dozen people.

Though Mr. Lee is not yet 40, he takes a long view of tech. It helps that he has spent about half his life building websites.

When advertisers first reached out to Mr. Lee asking if they could run ads on a site he built as an adolescent, he told them he didn’t want to talk on the phone.

“I feared if I talked to them, they would realize I was a 13-year-old,” he said.

By 2002, Mr. Lee and his childhood friend ran some of the most popular sites for teens, according to Nielsen data published at the time.

One of the companies he co-founded was SubProfile, a social media service built on AOL’s instant messenger platform. By his junior year of high school, he said the site was generating “six figures in revenue” and getting at least seven million unique visitors per month. (Nothing he has done since has exceeded that traffic, he said.)

He received his first paycheck from an advertiser at his parents’ home in the suburbs of New York.

“It was definitely a surreal experience,” Mr. Lee said of his time as a teen entrepreneur. “That’s when I first fell in love with the internet.”

Mr. Lee sold another company, a study guide site, while he was an undergraduate at Harvard. 

After graduation, he co-founded an internet ad sales start-up called PaperG in New Haven, Conn. Mr. Lee was at the company, which has since changed its name to Thunder and was sold to Walmart, for about seven years.

Krystal Benitez was hired onto Mr. Lee’s operations team at PaperG in 2012. By that point the company, and Mr. Lee, had moved to San Francisco. Ms. Benitez said that Mr. Lee often gave informal financial advice to the staff, including several lunchtime lectures about investing and retirement planning. Personal finance is “a very near and dear topic to his heart,” she said. Indeed, Mr. Lee soon founded a 401(k) start-up called Human Interest, which was valued at a billion dollars in 2021.

“There’s maybe no better way to have impact on people’s financial lives than through employment and how they’re paid,” Mr. Lee said. He added that his interest in personal finance has animated his career, from Human Interest to Layoffs.fyi to his work now on Comprehensive.io.

Mr. Lee’s early enchantment — and early success — with the web continues to power his optimism about the industry, and his view that lives can be improved through technology.

“The economy has gone through two boom-and-bust cycles that I’ve been a part of,” he said. “Both times, tech came back stronger than ever.”

“I know it has its downsides,” he said of the internet. But, he added, “The upsides are so, so positive.” He believes that human traits, including our best impulses, are magnified online.

“You might think the person behind Layoffs.fyi would be a cynical character,” said Tyler Bosmeny, a college classmate of Mr. Lee’s who was the first employee at PaperG and has since invested in his companies. But, he said, Mr. Lee is “the most optimistic person I know.” He added that Mr. Lee often sees thorny issues as data problems that can be solved with tech.

“What’s most hysterical,” Mr. Bosmeny added, is that “this was his idea of taking a break.”

Why Roger Lee Started Layoffs.fyi - The New York Times

Wednesday, May 03, 2023

Opinion | Why 'right to repair' could be the next big political movement - The Washington Post

Opinion | Why ‘right to repair’ could be the next big political movement

Colorado Gov. Jared Polis at a bill signing on April 25. The legislation forces manufacturers to provide the necessary manuals, tools, parts and even software to farmers so they can fix their own machines. (David Zalubowski/AP) 

"There aren’t many issues that unite Democratic, Republican and independent voters, offer a ready-made villain in greedy corporations, and tick off people from all different socioeconomic groups. Which is why the “right-to-repair” movement could gain real momentum, and why any politician looking to demonstrate real populist bona fides — rather than the phony kind — should jump on it.

Colorado Gov. Jared Polis just signed the country’s first right-to-repair law aimed at agricultural equipment. It prevents manufacturers such as John Deere from withholding manuals and other information that would enable independent repair professionals, or farmers themselves, to fix tractors, combines and other equipment.

This year, bills have been introduced in 28 states to prevent companies from restricting repairs on cars, electronics, appliances and all kinds of other products. If you’ve ever wondered why you can’t replace the battery in your iPhone, this issue is about you. Sooner or later, it will be about almost everybody.

We now live in what University of Michigan law professor Aaron Perzanowski calls “the tethered economy.” More and more, the things we buy come with strings attached to the manufacturer, effectively requiring us to pay fees to use — or repair — our own stuff.

“What we’re witnessing is fundamentally a shift from selling products to providing services,” Perzanowski, the author of “The Right to Repair: Reclaiming the Things We Own,” told me. “What’s confusing for consumers about that is that the services still look like products.”

We put them on our counters, carry them in our pockets or park them in our driveways, but the companies find ways to keep extracting our money. What makes this possible is that software is built into every piece of technology — not just obvious things such as computers and cellphones, but also appliances, cars and much more.

That software, says Nathan Proctor of the Public Interest Research Group, “opens up a whole new range of opportunities to control the product for the benefit of the manufacturer.”

That’s what farmers have found: In many cases, they can’t repair their tractors because the software won’t let them without the company’s permission. This “software lock” has been particularly frustrating to farmers, who pride themselves on their self-reliance and problem-solving. They often wait weeks or months for the dealer to repair a tractor when they could do it themselves or have a local mechanic fix it. When the repair is finally made, it usually costs more.

A backlash is building. In Massachusetts, voters overwhelmingly passed a 2020 initiative giving consumers and repair shops access to data sent wirelessly from cars back to manufacturers. Last week, the Minnesota legislature took a big step toward passing large bills with right-to-repair language. This fall, Maine voters will have an automotive right-to-repair question on their ballots. And in March, a bipartisan group of attorneys general wrote a letter urging Congress to pass national right-to-repair legislation.

Even though much of the action has been in blue states, it hasn’t been all that partisan. “It’s bipartisan everywhere,” Proctor told me. “A lot of the sponsors are Republicans, there’s a lot of grass-roots Republican support.”

The new Colorado law could be important for two reasons, says Perzanowski. First, it provides relief for farmers and independent repair shops. Second, “it’s going to demonstrate that if you pass one of these laws, the sky does not fall, markets do not collapse, John Deere’s not going to pull out of Colorado.”

Only a few lawmakers will echo the manufacturers’ line, which is essentially that they can build their products however they like. In fact, this issue is perfect for leaders who — unlike fake right-wing populists who oppose corporations only when they’re “woke” — are attempting to create a real pro-worker, pro-small business, anti-corporate politics.

Companies are feeling the pressure. Beyond the action at the state level, the Federal Trade Commission is looking closely at whether these manufacturer restrictions constitute illegal anti-competitive behavior, following an executive order President Biden signed in 2021 that cited “restrictions imposed by powerful manufacturers that prevent farmers from repairing their own equipment.”

Unlike most problems, the right-to-repair issue appeals to people of almost any ideology, from those who care deeply about individual property rights to those who want to rein in corporate abuse. The argument is easy to understand: Manufacturers shouldn’t be able to tell you what you can and can’t do with your own stuff.

“The more times we have that conversation, the easier it gets,” Proctor says. Poll after poll shows broad support for both the issue and legislative responses to it.

Manufacturers will no doubt adapt; they’re experimenting constantly with how far they can push the limits of consumer tolerance. They might get away with telling consumers they need to pay $90 a month to unlock a bit more horsepower on their cars, but what if you had to pay an extra subscription fee for your air bags to work?

Given all the conditions and restrictions that already come with so many of the things we buy, it no longer sounds like dystopian science fiction. But it doesn’t have to be our future. Politicians in both parties should be shoving each other out of the way to lead the charge."

Opinion | Why 'right to repair' could be the next big political movement - The Washington Post

Tuesday, May 02, 2023

How Do We Ensure an A.I. Future That Allows for Human Thriving? - The New York Times

How Do We Ensure an A.I. Future That Allows for Human Thriving?

"When OpenAI released its artificial-intelligence chatbot, ChatGPT, to the public at the end of last year, it unleashed a wave of excitement, fear, curiosity and debate that has only grown as rival competitors have accelerated their efforts and members of the public have tested out new A.I.-powered technology. Gary Marcus, an emeritus professor of psychology and neural science at New York University and an A.I. entrepreneur himself, has been one of the most prominent — and critical — voices in these discussions. More specifically, Marcus, a prolific author and writer of the Substack “The Road to A.I. We Can Trust,” as well as the host of a new podcast, “Humans vs. Machines,” has positioned himself as a gadfly to A.I. boosters. At a recent TED conference, he even called for the establishment of an international institution to help govern A.I.’s development and use. “I’m not one of these long-term riskers who think the entire planet is going to be taken over by robots,” says Marcus, who is 53. “But I am worried about what bad actors can do with these things, because there is no control over them. We’re not really grappling with what that means or what the scale could be.”

It seems as if people are easily able to articulate a whole host of serious social, political and cultural problems that are likely to arise from this technology. But it seems much less easy for people to articulate specific potential benefits on the same scale. Should that be a huge red flag? The question is: Do the benefits outweigh the costs? The intellectually honest answer is that we don’t know. Some of us would like to slow this down because we are seeing more costs every day, but I don’t think that means that there are no benefits. We know it’s useful for computer programmers. A lot of this discussion is a fear that if we don’t build GPT-5, and China builds it first, somehow something magical’s going to happen; GPT-5 is going to become an artificial general intelligence that can do anything. We may someday have a technology that revolutionizes science and technology, but I don’t think GPT-5 is the ticket for that. GPT-4 is pitched as this universal problem solver and can’t even play a decent game of chess! To scale that up in your mind to think that GPT-5 is going to go from “can’t even play chess” to “if China gets it first, the United States is going to explode” — this is fantasyland. But yeah, I’m sure GPT-5 will have some nice use cases. The biggest use case is still writing dodgy prose for search engines.

Do you think the public has been too credulous about ChatGPT? It’s not just the public. Some of your friends at your newspaper have been a bit credulous. In my book, “Rebooting A.I.,” we talked about the Eliza effect. In the mid-1960s, Joseph Weizenbaum wrote this primitive piece of software called Eliza, and some people started spilling their guts to it. It was set up as a psychotherapist, and it was doing keyword matching. It didn’t know what it was talking about, but it wrote text, and people didn’t understand that a machine could write text and not know what it was talking about. The same thing is happening right now. It is very easy for human beings to attribute awareness to things that don’t have it. The cleverest thing that OpenAI did was to have GPT type its answers out one character at a time — made it look like a person was doing it. That adds to the illusion. It is sucking people in and making them believe that there’s a there there that isn’t there. That’s dangerous. We saw the Jonathan Turley incident. You have to remember, these systems don’t understand what they’re reading. They’re collecting statistics about the relations between words. If everybody looked at these systems and said, “It’s kind of a neat party trick, but haha, it’s not real,” it wouldn’t be so disconcerting. But people believe it because it’s a search engine. We trust Microsoft. Combine that human overattribution with the reality that these systems don’t know what they’re talking about and are error-prone, and you have a problem.

So have Sam Altman or Satya Nadella been irresponsible in not speaking more clearly about the actual capabilities — or lack thereof — of their companies’ technologies? Sam has walked both sides of that fence — at times, I think, inviting the inference that these things are artificial general intelligence. The most egregious example of that in my mind is when he posted pictures and a Tweet saying, “A.G.I. is gonna be wild.” That is inviting the inference that these things are artificial general intelligence, and they are not! He subsequently backed down from that. Also, around that time, he attacked me. He said, “Give me the confidence of a mediocre deep-learning skeptic.” It was clearly an attack on me. But by December, he started to, I think, realize that he was overselling the stuff. He had a Tweet saying these things have trouble with truth. That’s what I was telling you back when you were making fun of me! So he has played both sides of it and continues to play both sides. They put out this statement about dealing with A.G.I. risk, inviting the inference that what they have has something to do with artificial general intelligence. I think it’s misleading. And Nadella is certainly not going around being particularly clear about the gap between people’s expectations and the reality of the systems.

And when Sam Altman said that ChatGPT needs to be out there being used by the public so that we can learn what the technology doesn’t do well and how it can be misused while “the stakes are low” — to you that argument didn’t hold water? Are the stakes still low if 100 million people have it and bad actors can download their own new trained models from the dark web? We see a real risk here. Every day on Twitter I get people like: “What’s the risk? We have roads. People die on roads.” But we can’t act like the stakes are low. I mean, in other domains, people have said, “Yeah, this is scary, and we should think more about it.” Germ-line genome editing is something that people have paused on from time to time and said, “Let’s try to understand the risks.” There’s no logical requirement that we simply march forward if something is risky. There’s a lot of money involved, and that’s pushing people in a particular direction, but I don’t think we should be fatalistic and say, “Let’s let it rip and see what happens.”

Gary Marcus at a TED conference last month. Gilberto Tadday/TED

What do you think the 2024 presidential election looks like in a world of A.I.-generated misinformation and deepfakes? A [expletive] show. A train wreck. You probably saw the Trump arrest photos. And The Guardian had a piece about what their policy is going to be as people make fake Guardian articles, because they know this is going to happen. People are going to make fake New York Times articles, fake CBS News videos. We had already seen hints of that, but the tools have gotten better. So we’re going to see a lot more of it — also because the cost of misinformation is going to zero.

You can imagine candidates’ dismissing factual reporting that is troublesome to them as being A.I. fakery. Yeah, if we don’t do something, the default is that by the time the election comes around in 2024, nobody’s going to believe anything, and anything they don’t want to believe they’re going to reject as being A.I.-generated. And the problems we have around civil discourse and polarization are just going to get worse.

So what do we do? We’re going to need watermarking for video. For text, it’s going to be really hard; it’s hard to make machines that can detect the difference between something generated by a person and something generated by a machine, but we should try to watermark as best we can and track provenance. That’s one. No. 2 is we’re going to have to have laws that are going to make a lot of people uncomfortable because they sound like they’re in conflict with our First Amendment — and maybe they are. But I think we’re going to have to penalize people for mass-scale harmful misinformation. I don’t think we should go after an individual who posts a silly story on Facebook that wasn’t true. But if you have troll farms and they put out a hundred million fake pieces of news in one day about vaccines — I think that should be penalizable. We don’t really have laws around that, and we need to in the way that we developed laws around spam and telemarketing. We don’t have rules on a single call, but we have rules on telemarketing at scale. We need rules on distributing misinformation at scale.

You have A.I. companies, right? The first one I sold to Uber. Then the second one is called RobustAI. It’s a robotics company. I co-founded it, but I’m not there any longer.

OK, so knowing all we know about the dangers of A.I., what for you is driving the “why” of developing it? Why do it at all? Rather than lobby to shut it down?

Yeah, because the potential harms feel so profound, and all the positive applications I ever hear about basically have to do with increased efficiency. Efficiency, to me, isn’t higher on the list of things to pursue than human flourishing. So what is the “why” for you? Since I was 8 years old, I’ve been interested in questions about how the mind works and how computers work. From the pure, academic intellectual perspective, there are few questions in the world more interesting than: How does the human child manage to take in input, understand how the world works, understand a language, when it’s so hard for people who’ve spent billions of dollars working on this problem to build a machine that does the same? That’s one side of it. The other side is, I do think that artificial general intelligence has enormous upside. Imagine a human scientist but a lot faster — solving problems in molecular biology, material science, neuroscience, actually figuring out how the brain works. A.I. could help us with that. There are a lot of applications for a system that could do scientific, causal reasoning at scale. I don’t think, however, that the technology we have right now is very good for that — systems that can’t even reliably do math problems. Those kinds of systems are not going to reinvent material science and save the climate. But I feel that we are moving into a regime where, exactly, the biggest benefit is efficiency: I don’t have to type as much; I can be more productive. These tools might give us tremendous productivity benefits but also destroy the fabric of society. If that’s the case, that’s not worth it. I feel that the last few months have been a wake-up call about how irresponsible the companies that own this stuff can be. They released it, and it was so bad that they took it down after 24 hours. I thought, oh, Microsoft has learned its lesson. Now Microsoft is racing, and Nadella is saying he wants to make Google dance. That’s not how we should be thinking about a technology that could radically alter the world.

Presumably an international body governing A.I. would help guide that thinking? What we need is something global, neutral, nonprofit, with governments and companies all part of it. We need to have coordinated efforts around building rules. Like, what happens when you have chatbots that lie a lot? Is that allowed? Who’s responsible? If misinformation spreads broadly, what are we going to do about that? Who’s going to bear the costs? Do we want the companies to put money into building tools to detect the misinformation that they’re causing? What happens if these systems perpetuate bias and keep underrepresented people from getting jobs? It’s not even in the interest of the tech companies to have different policies everywhere. It is in their interest to have a coordinated and global response.

Maybe I’m overly skeptical, but look at something like the Paris climate accord: The science is clear, we know the risks, and yet countries are falling well short of meeting their goals. So why would global action on A.I. be feasible? I’m not sure this is going to fall neatly on traditional political lines. YouGov ran a poll — it’s not the most scientific poll — but 69 percent of people supported a pause in A.I. development. That makes it a bipartisan issue. I think, in the U.S., there’s a real chance to do something in a bipartisan way, and there’s interest in Europe as well. So I’m more optimistic about this than I am about a lot of things. The exact nature is totally up for grabs, but there’s a strong desire to do something. It’s like many other domains: You have to build infrastructure in order to make sure things are OK, like building codes and the UL standards for electrical wiring and appliances. They may not like the code, but people live with the code. We need to build a code here of what’s acceptable and who’s responsible. I’m moderately optimistic. On the other hand, I’m very pessimistic that if we don’t, we’re in trouble.

There’s a belief that A.I. development should be paused until we can know whether it presents undue risks. But given how new and dynamic A.I. is, how could we even know what all the undue risks are? You don’t. It’s part of the problem. I actually wrote my own pause letter, so to speak. We called for a pause not on research but on deployment. The notion was that if you’re going to deploy it on a wide scale, let’s say 100 million people, then you should have to build a safety case for it, the way you do in medicine. OpenAI kind of did a version of this with their system card. They said, “Here are 12 risks.” But they didn’t actually make a case that the benefits of people typing faster and having fun with chatbots outweighed the risks! Sam Altman has acknowledged that there’s a risk of misinformation, massive cybercrime. OK, that’s nice that you acknowledge it. Now the next step ought to be, before you have widespread release, let’s have somebody decide: Do those risks outweigh the benefits? Or how are we even going to decide that? And at the moment, the power to release something is entirely with the companies. It has to change.


Opening illustration: Source photograph by Athena Vouloumanos

This interview has been edited and condensed for clarity from two conversations."


How Do We Ensure an A.I. Future That Allows for Human Thriving? - The New York Times

After 10,000 photos with the Sony A7RV--- I'M SWITCHING!

Monday, May 01, 2023

‘The Godfather of AI’ Quits Google and Warns of Danger Ahead - The New York Times

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

Dr. Geoffrey Hinton is leaving Google so that he can freely share his concern that artificial intelligence could cause the world serious harm. Chloe Ellingson for The New York Times

By Cade Metz

Cade Metz reported this story in Toronto.

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe are a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.


After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

Ilya Sutskever, OpenAI’s chief scientist, worked with Dr. Hinton on his research in Toronto. Jim Wilson/The New York Times

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said. 

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore."

‘The Godfather of AI’ Quits Google and Warns of Danger Ahead - The New York Times