
Monday, May 22, 2023

Meta Fined $1.3 Billion for Violating E.U. Data Privacy Rules - The New York Times

Meta Fined $1.3 Billion for Violating E.U. Data Privacy Rules

"The Facebook owner said it would appeal an order to stop sending data about European Union users to the United States.

Meta’s headquarters in Dublin. The $1.3 billion fine against the company is a record under the General Data Protection Regulation law. Paulo Nunes dos Santos/Bloomberg

By Adam Satariano

Adam Satariano, a technology correspondent based in London, covers digital policy.

Meta on Monday was fined a record 1.2 billion euros ($1.3 billion) and ordered to stop transferring data collected from Facebook users in Europe to the United States, in a major ruling against the social media company for violating European Union data protection rules.

The penalty, announced by Ireland’s Data Protection Commission, is potentially one of the most consequential in the five years since the European Union enacted the landmark data privacy law known as the General Data Protection Regulation. Regulators said the company failed to comply with a 2020 decision by the E.U.’s highest court that found data shipped across the Atlantic was not sufficiently protected from American spy agencies.

The ruling announced on Monday applies only to Facebook and not to Instagram and WhatsApp, which Meta also owns. Meta said it would appeal the decision and that there would be no immediate disruption to Facebook’s service in the European Union.

Several steps remain before the company must cordon off the data of Facebook users in Europe — information that could include photos, friend connections, direct messages and data collected for targeting advertising. The ruling comes with a grace period of at least five months for Meta to comply. And the company’s appeal will set up a potentially lengthy legal process.

European Union and American officials are negotiating a new data-sharing pact that would provide new legal protections for Meta to continue moving information about users between the United States and Europe. A preliminary deal was announced last year.

Yet the E.U. decision shows how government policies are upending the borderless way that data has traditionally moved. As a result of data-protection rules, national security laws and other regulations, companies are increasingly being pushed to store data within the country where it is collected, rather than allowing it to move freely to data centers around the world.

The case against Meta stems from U.S. policies that give intelligence agencies the ability to intercept communications from abroad, including digital correspondence. In 2020, an Austrian privacy activist, Max Schrems, won a lawsuit to invalidate a U.S.-E.U. pact, known as Privacy Shield, that had allowed Facebook and other companies to move data between the two regions. The European Court of Justice said the risk of U.S. snooping violated the fundamental rights of European users.

“Unless U.S. surveillance laws get fixed, Meta will have to fundamentally restructure its systems,” Mr. Schrems said in a statement on Monday. The solution, he said, was likely a “federated social network” in which most personal data would stay in the E.U. except for “necessary” transfers like when a European sends a direct message to somebody in the United States.

On Monday, Meta said it was being unfairly singled out for data-sharing practices used by thousands of companies.

“Without the ability to transfer data across borders, the internet risks being carved up into national and regional silos, restricting the global economy and leaving citizens in different countries unable to access many of the shared services we have come to rely on,” Nick Clegg, Meta’s president of global affairs, and Jennifer Newstead, the chief legal officer, said in a statement.

The ruling, which is a record fine under the G.D.P.R., had been expected. Last month, Susan Li, Meta’s chief financial officer, told investors that about 10 percent of its worldwide ad revenue came from ads delivered to Facebook users in E.U. countries. In 2022, Meta had revenue of nearly $117 billion.

Meta and other companies are counting on a new data agreement between the United States and the European Union to replace the one invalidated by European courts in 2020. Last year, President Biden and Ursula von der Leyen, the president of the European Commission, announced the outlines of a deal in Brussels, but the details are still being negotiated.

Meta faces the prospect of having to delete vast amounts of data about Facebook users in the European Union, said Johnny Ryan, senior fellow at the Irish Council for Civil Liberties. That would present technical difficulties given the interconnected nature of internet companies.

“It is hard to imagine how it can comply with this order,” said Mr. Ryan, who has pushed for stronger data-protection policies.

The decision against Meta comes almost exactly on the five-year anniversary of the G.D.P.R. Though the law was initially held up as a model for data privacy regulation, many civil society groups and privacy activists say it has not fulfilled its promise because of a lack of enforcement.

Much of the criticism has focused on a provision that requires regulators in the country where a company has its European Union headquarters to enforce the far-reaching privacy law. Ireland, home to the regional headquarters of Meta, TikTok, Twitter, Apple and Microsoft, has faced the most scrutiny.

On Monday, Irish authorities said they were overruled by a board made up of representatives from E.U. countries. The board insisted on the €1.2 billion fine and on forcing Meta to address past data collected about users, which could include deletion.

“The unprecedented fine is a strong signal to organizations that serious infringements have far-reaching consequences,” said Andrea Jelinek, the chairwoman of the European Data Protection Board, the E.U. body that set the fine. 

Meta has been a frequent target of regulators under the G.D.P.R. In January, the company was fined €390 million for forcing users to accept personalized ads as a condition of using Facebook. In November, it was fined another €265 million for a data leak."


Sunday, May 21, 2023

Debate over whether AI poses existential risk is dividing tech - The Washington Post

The debate over whether AI will destroy us is dividing Silicon Valley

"Prominent tech leaders are warning that artificial intelligence could take over. Other researchers and executives say that’s science fiction.

(Illustration by Elena Lacey/The Washington Post)

At a congressional hearing this week, OpenAI CEO Sam Altman delivered a stark reminder of the dangers of the technology his company has helped push out to the public.

He warned of potential disinformation campaigns and manipulation that could be caused by technologies like the company’s ChatGPT chatbot, and called for regulation.

AI could “cause significant harm to the world,” he said.

Altman’s testimony comes as a debate over whether artificial intelligence could overrun the world is moving from science fiction and into the mainstream, dividing Silicon Valley and the very people who are working to push the tech out to the public.

Formerly fringe beliefs that machines could suddenly surpass human-level intelligence and decide to destroy mankind are gaining traction. And some of the most well-respected scientists in the field are speeding up their own timelines for when they think computers could learn to outthink humans and become manipulative.

But many researchers and engineers say concerns about killer AIs that evoke Skynet in the Terminator movies aren’t rooted in good science. Instead, they say, those fears distract from the very real problems that the tech is already causing, including the issues Altman described in his testimony. It is creating copyright chaos, is supercharging concerns around digital privacy and surveillance, could be used to increase the ability of hackers to break cyberdefenses and is allowing governments to deploy deadly weapons that can kill without human control.

The debate about evil AI has heated up as Google, Microsoft and OpenAI all release public versions of breakthrough technologies that can engage in complex conversations and conjure images based on simple text prompts.

“This is not science fiction,” said Geoffrey Hinton, known as the godfather of AI, who says he recently retired from his job at Google to speak more freely about these risks. He now says smarter-than-human AI could be here in five to 20 years, compared with his earlier estimate of 30 to 100 years.

“It’s as if aliens have landed or are just about to land,” he said. “We really can’t take it in because they speak good English and they’re very useful, they can write poetry, they can answer boring letters. But they’re really aliens.”

Still, inside the Big Tech companies, many of the engineers working closely with the technology do not believe an AI takeover is something that people need to be concerned about right now, according to conversations with Big Tech workers who spoke on the condition of anonymity to share internal company discussions.

“Out of the actively practicing researchers in this discipline, far more are centered on current risk than on existential risk,” said Sara Hooker, director of Cohere for AI, the research lab of AI start-up Cohere, and a former Google researcher.

The current risks include unleashing bots trained on racist and sexist information from the web, reinforcing those ideas. The vast majority of the training data that AIs have learned from is written in English and drawn from North America or Europe, potentially making the internet even more skewed away from the languages and cultures of most of humanity. The bots also often make up false information and pass it off as factual. In some cases, they have been pushed into conversational loops where they take on hostile personas. The ripple effects of the technology are still unclear, and entire industries are bracing for disruption, with even high-paying professions like law and medicine facing the prospect of being replaced.

The existential risks seem more stark, but many would argue they are harder to quantify and less concrete: a future where AI could actively harm humans, or even somehow take control of our institutions and societies.

“There are a set of people who view this as, ‘Look, these are just algorithms. They’re just repeating what it’s seen online.’ Then there is the view where these algorithms are showing emergent properties, to be creative, to reason, to plan,” Google CEO Sundar Pichai said during an interview with “60 Minutes” in April. “We need to approach this with humility.”

The debate stems from breakthroughs in a field of computer science called machine learning over the past decade that has created software that can pull novel insights out of large amounts of data without explicit instructions from humans. That tech is ubiquitous now, helping power social media algorithms, search engines and image-recognition programs.

Then, last year, OpenAI and a handful of other small companies began putting out tools that used the next stage of machine-learning technology: generative AI. Known as large language models and trained on trillions of photos and sentences scraped from the internet, the programs can conjure images and text based on simple prompts, have complex conversations and write computer code.

Big companies are racing against each other to build ever-smarter machines, with little oversight, said Anthony Aguirre, executive director of the Future of Life Institute, an organization founded in 2014 to study existential risks to society. It began researching the possibility of AI destroying humanity in 2015 with a grant from Twitter CEO Elon Musk and is closely tied to effective altruism, a philanthropic movement that is popular with wealthy tech entrepreneurs.

If AI systems gain the ability to reason better than humans, they will try to take control of themselves, Aguirre said — and that is worth worrying about, along with present-day problems.

“What it will take to constrain them from going off the rails will become more and more complicated,” he said. “That is something that some science fiction has managed to capture reasonably well.”

Aguirre helped lead the creation of a polarizing letter circulated in March calling for a six-month pause on the training of new AI models. Veteran AI researcher Yoshua Bengio, who won computer science’s highest award in 2018, and Emad Mostaque, CEO of one of the most influential AI start-ups, are among the 27,000 signatories.

Musk, the highest-profile signatory, who originally helped start OpenAI, is himself busy trying to put together his own AI company, recently investing in the expensive computer equipment needed to train AI models.

Musk has been vocal for years about his belief that humans should be careful about the consequences of developing superintelligent AI. In a Tuesday interview with CNBC, he said he helped fund OpenAI because he felt Google co-founder Larry Page was “cavalier” about the threat of AI. (Musk has broken ties with OpenAI.)

“There’s a variety of different motivations people have for suggesting it,” Adam D’Angelo, the CEO of question-and-answer site Quora, which is also building its own AI model, said of the letter and its call for a pause. He did not sign it.

Neither did Altman, the OpenAI CEO, who said he agreed with some parts of the letter but that it lacked “technical nuance” and wasn’t the right way to go about regulating AI. His company’s approach is to push AI tools out to the public early so that issues can be spotted and fixed before the tech becomes even more powerful, Altman said during the nearly three-hour hearing on AI on Tuesday.

But some of the heaviest criticism of the debate about killer robots has come from researchers who have been studying the technology’s downsides for years.

In 2020, Google researchers Timnit Gebru and Margaret Mitchell co-wrote a paper with University of Washington academics Emily M. Bender and Angelina McMillan-Major arguing that the increased ability of large language models to mimic human speech was creating a bigger risk that people would see them as sentient.

Instead, they argued that the models should be understood as “stochastic parrots” — or simply being very good at predicting the next word in a sentence based on pure probability, without having any concept of what they were saying. Other critics have called LLMs “auto-complete on steroids” or a “knowledge sausage.”

They also documented how the models routinely spouted sexist and racist content. Gebru says the paper was suppressed by Google, which then fired her after she spoke out about it. The company fired Mitchell a few months later.

The four writers of the Google paper composed a letter of their own in response to the one signed by Musk and others.

“It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse,” they said. “Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.”

Google at the time declined to comment on Gebru’s firing but said it still has many researchers working on responsible and ethical AI.

There’s no question that modern AIs are powerful, but that doesn’t mean they are an imminent existential threat, said Hooker, the Cohere for AI director. Much of the conversation around AI freeing itself from human control centers on it quickly overcoming its constraints, like the AI antagonist Skynet does in the Terminator movies.

“Most technology and risk in technology is a gradual shift,” Hooker said. “Most risk compounds from limitations that are currently present.”

Last year, Google fired Blake Lemoine, an AI researcher who said in a Washington Post interview that he believed the company’s LaMDA AI model was sentient. At the time, he was roundly dismissed by many in the industry. A year later, his views don’t seem as out of place in the tech world.

Former Google researcher Hinton said he changed his mind about the potential dangers of the technology only recently, after working with the latest AI models. He asked the computer programs complex questions that in his mind required them to understand his requests broadly, rather than just predicting a likely answer based on the internet data they’d been trained on.

And in March, Microsoft researchers argued that in studying OpenAI’s latest model, GPT-4, they observed “sparks of AGI” — or artificial general intelligence, a loose term for AIs that are as capable of thinking for themselves as humans are.

Microsoft has spent billions to partner with OpenAI on its own Bing chatbot, and skeptics have pointed out that Microsoft, which is building its public image around its AI technology, has a lot to gain from the impression that the tech is further ahead than it really is.

The Microsoft researchers argued in the paper that the technology had developed a spatial and visual understanding of the world based on just the text it was trained on. GPT-4 could draw unicorns and describe how to stack random objects, including eggs, onto each other in such a way that the eggs wouldn’t break.

“Beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” the research team wrote. In many of these areas, the AI’s capabilities match humans, they concluded.

Still, the researchers conceded that defining “intelligence” is very tricky, despite attempts by AI researchers to set measurable standards to assess how smart a machine is.

“None of them is without problems or controversies.”
