Saturday, July 22, 2023

Canon R6 II vs Sony a7 IV feat. Ted Forbes!

How to Direct A.I. Chatbots to Make Them More Useful - The New York Times

We’re Using A.I. Chatbots Wrong. Here’s How to Direct Them.

"To mitigate the production and spread of misinformation from chatbots, we can steer them toward high-quality data.

An illustration in shades of blue, green and pink depicts a chain of open books, arranged like an accordion; the pages in the last book are breaking apart. Illustration by Ariel Davis

By Brian X. Chen

Brian X. Chen, The Times’s personal tech columnist, tested dozens of A.I. tools over the past two months.

Anyone seduced by A.I.-powered chatbots like ChatGPT and Bard — wow, they can write essays and recipes! — eventually runs into what are known as hallucinations, the tendency for artificial intelligence to fabricate information.

The chatbots, which guess what to say based on information obtained from all over the internet, can’t help but get things wrong. And when they fail — by publishing a cake recipe with wildly inaccurate flour measurements, for instance — it can be a real buzzkill.

Yet as mainstream tech tools continue to integrate A.I., it’s crucial to get a handle on how to use it to serve us. After testing dozens of A.I. products over the last two months, I concluded that most of us are using the technology in a suboptimal way, largely because the tech companies gave us poor directions.

The chatbots are the least beneficial when we ask them questions and then hope whatever answers they come up with on their own are true, which is how they were designed to be used. But when directed to use information from trusted sources, such as credible websites and research papers, A.I. can carry out helpful tasks with a high degree of accuracy.

“If you give them the right information, they can do interesting things with it,” said Sam Heutmaker, the founder of Context, an A.I. start-up. “But on their own, 70 percent of what you get is not going to be accurate.”

With the simple tweak of advising the chatbots to work with specific data, they generated intelligible answers and useful advice. That transformed me over the last few months from a cranky A.I. skeptic into an enthusiastic power user. When I went on a trip using a travel itinerary planned by ChatGPT, it went well because the recommendations came from my favorite travel websites.

Directing the chatbots to specific high-quality sources like websites from well-established media outlets and academic publications can also help reduce the production and spread of misinformation. Let me share some of the approaches I used to get help with cooking, research and travel planning.
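
For readers who want to reproduce this grounding trick programmatically rather than in the ChatGPT app, here is a minimal sketch using the OpenAI Python library. It illustrates the idea, not the column's actual workflow; the model name and prompt wording are placeholders.

```python
# Minimal sketch: steer the model toward a source you trust by putting that
# source in the prompt and telling the model not to go beyond it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

trusted_excerpt = """
(Paste text copied from a source you trust here: a recipe, a travel guide,
or a paragraph from a research paper.)
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using ONLY the source text the user provides. "
                "If the source does not contain the answer, say so instead of guessing."
            ),
        },
        {
            "role": "user",
            "content": f"Source text:\n{trusted_excerpt}\n\nQuestion: Summarize the key points of this source.",
        },
    ],
)

print(response.choices[0].message.content)
```

The system prompt does the same work the plug-ins do in the app: it narrows the model to material you chose rather than whatever it absorbed from the open web.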

Meal Planning

Chatbots like ChatGPT and Bard can write recipes that look good in theory but don’t work in practice. In an experiment by The New York Times’s Food desk in November, an early A.I. model created recipes for a Thanksgiving menu that included an extremely dry turkey and a dense cake.

I also ran into underwhelming results with A.I.-generated seafood recipes. But that changed when I experimented with ChatGPT plug-ins, which are essentially third-party apps that work with the chatbot. (Only subscribers who pay $20 a month for access to GPT-4, the latest version of the chatbot, can use plug-ins, which can be activated in the settings menu.)

On ChatGPT’s plug-ins menu, I selected Tasty Recipes, which pulls data from the Tasty website owned by BuzzFeed, a well-known media site. I then asked the chatbot to come up with a meal plan including seafood dishes, ground pork and vegetable sides using recipes from the site. The bot presented an inspiring meal plan, including lemongrass pork banh mi, grilled tofu tacos and everything-in-the-fridge pasta; each meal suggestion included a link to a recipe on Tasty.

For recipes from other publications, I used Link Reader, a plug-in that let me paste in a web link to generate meal plans using recipes from other credible sites like Serious Eats. The chatbot pulled data from the sites to create meal plans and told me to visit the websites to read the recipes. That took extra work, but it beat an A.I.-concocted meal plan.
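
The internals of the Link Reader and Tasty Recipes plug-ins are not public, but the general pattern is easy to approximate: fetch the page you trust, reduce it to plain text, and feed that text to the model as in the sketch above. The URL below is a placeholder, and this is only a rough stand-in for what the plug-in does.

```python
# Rough approximation of the "read this link" workflow, not the plug-in itself:
# download a recipe page, strip the HTML, and keep the text for the prompt.
import requests
from bs4 import BeautifulSoup

url = "https://www.seriouseats.com/some-recipe"  # placeholder URL
html = requests.get(url, timeout=30).text
page_text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

# page_text can now go into the trusted_excerpt slot of the earlier sketch,
# with a question like "Build a week of meals using only these recipes."
print(page_text[:500])  # sanity-check what was extracted
```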

Research

When I did research for an article on a popular video game series, I turned to ChatGPT and Bard to refresh my memory on past games by summarizing their plots. They messed up on important details about the games’ stories and characters.

After testing many other A.I. tools, I concluded that for research, it was crucial to fixate on trusted sources and quickly double-check the data for accuracy. I eventually found a tool that delivers that: Humata.AI, a free web app that has become popular among academic researchers and lawyers.

The app lets you upload a document such as a PDF, and from there a chatbot answers your questions about the material alongside a copy of the document, highlighting relevant portions.

In one test, I uploaded a research paper I found on PubMed, a government-run search engine for scientific literature. The tool produced a relevant summary of the lengthy document in minutes, a process that would have taken me hours, and I glanced at the highlights to double-check that the summaries were accurate.
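
Humata's own pipeline is not public; the underlying pattern, though, is the same grounding idea applied to a document you upload. Here is a minimal sketch using the pypdf and OpenAI libraries, with a placeholder file name and model name; asking the model to quote the passages it relied on makes the double-checking step easier.

```python
# General pattern behind document Q&A tools (not Humata's actual code):
# extract the PDF's text, then answer questions grounded only in that text.
from pypdf import PdfReader
from openai import OpenAI

reader = PdfReader("paper.pdf")  # placeholder file name
paper_text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the document provided, and quote the "
                "passages you relied on so the reader can verify them."
            ),
        },
        {
            "role": "user",
            "content": f"Document:\n{paper_text[:12000]}\n\nQuestion: What are the paper's main findings?",
        },
    ],
)
print(response.choices[0].message.content)
```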

Cyrus Khajvandi, a founder of Humata, which is based in Austin, Texas, developed the app when he was a researcher at Stanford and needed help reading dense scientific articles, he said. The problem with chatbots like ChatGPT, he said, is that they rely on outdated models of the web, so the data may lack relevant context.

Travel Planning

When a Times travel writer recently asked ChatGPT to compose a travel itinerary for Milan, the bot guided her to visit a central part of town that was deserted because it was an Italian holiday, among other snafus.

I had better luck when I requested a vacation itinerary for me, my wife and our dogs in Mendocino County, Calif. As I did when planning a meal, I asked ChatGPT to incorporate suggestions from some of my favorite travel sites, such as Thrillist, which is owned by Vox, and The Times’s travel section.

Within minutes, the chatbot generated an itinerary that included dog-friendly restaurants and activities, including a farm with wine and cheese pairings and a train to a popular hiking trail. This spared me several hours of planning, and most important, the dogs had a wonderful time.

Bottom Line

Google and OpenAI, which works closely with Microsoft, say they are working to reduce hallucinations in their chatbots, but we can already reap A.I.’s benefits by taking control of the data that the bots rely on to come up with answers.

To put it another way: The main benefit of training machines with enormous data sets is that they can now use language to simulate human reasoning, said Nathan Benaich, a venture capitalist who invests in A.I. companies. The important step for us, he said, is to pair that ability with high-quality information.

Brian X. Chen is the lead consumer technology writer for The Times. He reviews products and writes Tech Fix, a column about the social implications of the tech we use. Before joining The Times in 2011, he reported on Apple and the wireless industry for Wired. More about Brian X. Chen"


How to Direct A.I. Chatbots to Make Them More Useful - The New York Times

How Do the White House’s A.I. Commitments Stack Up? - The New York Times

How Do the White House’s A.I. Commitments Stack Up?

"Seven leading A.I. companies made eight promises about what they’ll do with their technology. Our columnist sizes up their potential impact.

President Biden speaking on Friday about commitments that seven companies made to manage the risks of artificial intelligence. Kenny Holston/The New York Times

By Kevin Roose

Kevin Roose examines the intersection of technology, business and culture.

This week, the White House announced that it had secured “voluntary commitments” from seven leading A.I. companies to manage the risks posed by artificial intelligence.

Getting the companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — to agree to anything is a step forward. They include bitter rivals with subtle but important differences in the ways they’re approaching A.I. research and development.

Meta, for example, is so eager to get its A.I. models into developers’ hands that it has open-sourced many of them, putting their code out into the open for anyone to use. Other labs, such as Anthropic, have taken a more cautious approach, releasing their technology in more limited ways.

But what do these commitments actually mean? And are they likely to change much about how A.I. companies operate, given that they aren’t backed by the force of law?

Given the potential stakes of A.I. regulation, the details matter. So let’s take a closer look at what’s being agreed to here and size up the potential impact.

Commitment 1: The companies commit to internal and external security testing of their A.I. systems before their release.

Each of these A.I. companies already does security testing — what is often called “red-teaming” — of its models before they’re released. On one level, this isn’t really a new commitment. And it’s a vague promise. It doesn’t come with many details about what kind of testing is required, or who will do the testing.

In a statement accompanying the commitments, the White House said only that testing of A.I. models “will be carried out in part by independent experts” and focus on A.I. risks “such as biosecurity and cybersecurity, as well as its broader societal effects.”

It’s a good idea to get A.I. companies to publicly commit to continue doing this kind of testing, and to encourage more transparency in the testing process. And there are some types of A.I. risk — such as the danger that A.I. models could be used to develop bioweapons — that government and military officials are probably better suited than companies to evaluate.

I’d love to see the A.I. industry agree on a standard battery of safety tests, such as the “autonomous replication” tests that the Alignment Research Center conducts on prerelease models from OpenAI and Anthropic. I’d also like to see the federal government fund these kinds of tests, which can be expensive and require engineers with significant technical expertise. Right now, many safety tests are funded and overseen by the companies, which raises obvious conflict-of-interest questions.

Commitment 2: The companies commit to sharing information across the industry and with governments, civil society and academia on managing A.I. risks.

This commitment is also a bit vague. Several of these companies already publish information about their A.I. models — typically in academic papers or corporate blog posts. A few of them, including OpenAI and Anthropic, also publish documents called “system cards,” which outline the steps they’ve taken to make those models safer.

But they have also held back information on occasion, citing safety concerns. When OpenAI released its latest A.I. model, GPT-4, this year, it broke with industry customs and chose not to disclose how much data it was trained on, or how big the model was (a metric known as “parameters”). It said it declined to release this information because of concerns about competition and safety. It also happens to be the kind of data that tech companies like to keep away from competitors.

Under these new commitments, will A.I. companies be compelled to make that kind of information public? What if doing so risks accelerating the A.I. arms race?

I suspect that the White House’s goal is less about forcing companies to disclose their parameter counts and more about encouraging them to trade information with one another about the risks that their models do (or don’t) pose.

But even that kind of information-sharing can be risky. If Google’s A.I. team prevented a new model from being used to engineer a deadly bioweapon during prerelease testing, should it share that information outside Google? Would that risk giving bad actors ideas about how they might get a less guarded model to perform the same task?

Commitment 3: The companies commit to investing in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights.

This one is pretty straightforward, and uncontroversial among the A.I. insiders I’ve talked to. “Model weights” is a technical term for the mathematical instructions that give A.I. models the ability to function. Weights are what you’d want to steal if you were an agent of a foreign government (or a rival corporation) who wanted to build your own version of ChatGPT or another A.I. product. And it’s something A.I. companies have a vested interest in keeping tightly controlled.

There have already been well-publicized issues with model weights leaking. The weights for Meta’s original LLaMA language model, for example, were leaked on 4chan and other websites just days after the model was publicly released. Given the risks of more leaks — and the interest that other nations may have in stealing this technology from U.S. companies — asking A.I. companies to invest more in their own security feels like a no-brainer.
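
For readers unfamiliar with the term, the sketch below shows what "weights" look like in practice, using a deliberately tiny PyTorch model. It is purely illustrative and has nothing to do with any particular company's systems; the point is that the learned numbers live in an ordinary file that can be copied or exfiltrated, and that the "parameter count" companies sometimes withhold is just the size of that collection.

```python
# Toy illustration of model weights: the learned numbers a network stores,
# saved to a file that can be shared, copied, or stolen.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512))

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # frontier models have billions, not ~1 million

torch.save(model.state_dict(), "weights.pt")        # this file holds everything the model "knows"

restored = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512))
restored.load_state_dict(torch.load("weights.pt"))  # anyone holding the file can rebuild the model
```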

Commitment 4: The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their A.I. systems.

I’m not really sure what this means. Every A.I. company has discovered vulnerabilities in its models after releasing them, usually because users try to do bad things with the models or circumvent their guardrails (a practice known as “jailbreaking”) in ways the companies hadn’t foreseen.

The White House’s commitment calls for companies to establish a “robust reporting mechanism” for these vulnerabilities, but it’s not clear what that might mean. An in-app feedback button, similar to the ones that allow Facebook and Twitter users to report rule-violating posts? A bug bounty program, like the one OpenAI started this year to reward users who find flaws in its systems? Something else? We’ll have to wait for more details.

Commitment 5: The companies commit to developing robust technical mechanisms to ensure that users know when content is A.I. generated, such as a watermarking system.

This is an interesting idea but leaves a lot of room for interpretation. So far, A.I. companies have struggled to devise tools that allow people to tell whether or not they’re looking at A.I.-generated content. There are good technical reasons for this, but it’s a real problem when people can pass off A.I.-generated work as their own. (Ask any high school teacher.) And many of the tools currently promoted as being able to detect A.I. outputs really can’t do so with any degree of accuracy.

I’m not optimistic that this problem is fully fixable. But I’m glad that companies are pledging to work on it.
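
To make the idea concrete, here is a toy sketch of one watermarking approach from the research literature, a keyed "green list" statistical watermark. It is not what any of these companies has committed to or deployed: a generator that secretly favors "green" words leaves a measurable bias that a detector holding the key can spot, while ordinary human text sits near 50 percent. The hard part, and one reason current detectors disappoint, is making such a signal survive paraphrasing and editing.

```python
# Toy "green list" watermark detector. A cooperating generator would bias its
# word choices toward green words; this detector measures the resulting skew.
import hashlib

def is_green(prev_word: str, word: str, key: str = "secret-key") -> bool:
    # The previous word plus a secret key deterministically marks each
    # candidate word as green or not (roughly half the vocabulary).
    digest = hashlib.sha256(f"{key}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str, key: str = "secret-key") -> float:
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    flags = [is_green(prev, word, key) for prev, word in zip(words, words[1:])]
    return sum(flags) / len(flags)

# Unwatermarked text should land near 0.5; heavily watermarked text well above it.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```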

Commitment 6: The companies commit to publicly reporting their A.I. systems’ capabilities, limitations, and areas of appropriate and inappropriate use.

Another sensible-sounding pledge with lots of wiggle room. How often will companies be required to report on their systems’ capabilities and limitations? How detailed will that information have to be? And given that many of the companies building A.I. systems have been surprised by their own systems’ capabilities after the fact, how well can they really be expected to describe them in advance?

Commitment 7: The companies commit to prioritizing research on the societal risks that A.I. systems can pose, including on avoiding harmful bias and discrimination and protecting privacy.

Committing to “prioritizing research” is about as fuzzy as a commitment gets. Still, I’m sure this commitment will be received well by many in the A.I. ethics crowd, who want A.I. companies to make preventing near-term harms like bias and discrimination a priority over worrying about doomsday scenarios, as the A.I. safety folks do.

If you’re confused by the difference between “A.I. ethics” and “A.I. safety,” just know that there are two warring factions within the A.I. research community, each of which thinks the other is focused on preventing the wrong kinds of harms.

Commitment 8: The companies commit to develop and deploy advanced A.I. systems to help address society’s greatest challenges.

I don’t think many people would argue that advanced A.I. should not be used to help address society’s greatest challenges. The White House lists “cancer prevention” and “mitigating climate change” as two of the areas where it would like A.I. companies to focus their efforts, and it will get no disagreement from me there.

What makes this goal somewhat complicated, though, is that in A.I. research, what starts off looking frivolous often turns out to have more serious implications. Some of the technology that went into DeepMind’s AlphaGo — an A.I. system that was trained to play the board game Go — turned out to be useful in predicting the three-dimensional structures of proteins, a major discovery that boosted basic scientific research.

Overall, the White House’s deal with A.I. companies seems more symbolic than substantive. There is no enforcement mechanism to make sure companies follow these commitments, and many of them reflect precautions that A.I. companies are already taking.

Still, it’s a reasonable first step. And agreeing to follow these rules shows that the A.I. companies have learned from the failures of earlier tech companies, which waited to engage with the government until they got into trouble. In Washington, at least where tech regulation is concerned, it pays to show up early."

How Do the White House’s A.I. Commitments Stack Up? - The New York Times

Thursday, July 20, 2023

Insta360 Go 3: Did You Fall For It?

Apple warns iMessage and FaceTime could be withdrawn in UK over law change

US tech firm says giving government oversight of security changes could endanger encrypted products

‘End-to-end encryption is the core security technology for FaceTime and iMessage,’ said Apple. Photograph: Artur Widak/NurPhoto/Rex/Shutterstock

“Apple has warned that planned changes to UK surveillance laws could affect iPhone users’ privacy by forcing it to withdraw security features, which could ultimately lead to the closure of services like FaceTime and iMessage in the country.

The US tech firm has become a vocal opponent of what it views as government moves against online privacy, following a statement last month that provisions in the forthcoming online safety bill endanger message encryption.

Apple’s latest concerns centre on the Investigatory Powers Act 2016, which gives the Home Office the power to seek access to encrypted content via a “technology capability notice” [TCN], which requires the removal of “electronic protection” of data.

End-to-end encryption, which ensures only the sender and recipient of a message can see its content, is a key tech privacy feature and is a hard-fought battleground between the government and tech firms.
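
As a minimal illustration of what "end-to-end" means in practice, here is a sketch using the open-source PyNaCl library. It is not Apple's protocol (iMessage and FaceTime use Apple's own cryptographic designs); it simply shows that when keys live only on the two devices, the service relaying the message sees nothing but ciphertext.

```python
# End-to-end encryption in miniature with PyNaCl: only Bob's private key can
# open what Alice sends him, so the carrier in the middle learns nothing.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()  # stays on Alice's device
bob_key = PrivateKey.generate()    # stays on Bob's device

ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you at 6")
# The server relaying the message only ever sees `ciphertext`.

plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"see you at 6"
```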

Apple said the changes included a provision that would give the government oversight of security changes to its products, including regular iOS software updates. The Home Office consultation proposes “mandating” operators to notify the home secretary of changes to a service that could have a “negative impact on investigatory powers”.

Apple wrote in a submission to the government that such a move would effectively grant the home secretary control over security and encryption updates globally, when allied to further proposals strengthening requirements for non-UK companies to implement changes worldwide if – like Apple – they operate via a global platform.

The proposals would “make the Home Office the de facto global arbiter of what level of data security and encryption are permissible”, Apple wrote.

Apple also expressed concern over a proposed amendment that, it says, would allow the government to immediately block implementation of a security feature while a TCN is being considered, instead of letting the feature continue to be used pending an appeal.

In comments implying encrypted products like FaceTime and iMessage could ultimately be endangered in the UK, Apple said it never built a “backdoor” into its products for the government to use, and would withdraw security features in the UK market instead. End-to-end encryption is the core security technology for FaceTime and iMessage and is viewed by Apple as an intrinsic part of those services.

“Together, these provisions could be used to force a company like Apple, that would never build a backdoor, to publicly withdraw critical security features from the UK market, depriving UK users of these protections,” said Apple.

In further comments, the company said the proposals would “result in an impossible choice between complying with a Home Office mandate to secretly install vulnerabilities into new security technologies (which Apple would never do), or to forgo development of those technologies altogether and sit on the sidelines as threats to users’ data security continue to grow”.

Alan Woodward, a professor of cybersecurity at Surrey University, who has signed an open letter warning against online safety bill proposals that could dilute encryption, said Apple’s submission on the 12-week consultation represented a “stake in the ground”.

He added: “If the government push on regardless then Apple will simply join the growing band of vendors that would leave the UK. British users could end up as one of the most isolated, and insecure, groups in the world. In that scenario nobody wins.”

The Apple submission comes after the House of Lords on Thursday approved a government amendment on the online safety bill related to scrutiny of encrypted messaging. Under the amendment Ofcom, the communications watchdog, must await a report from a “skilled person” before ordering a messaging service to use “accredited technology” – which could enable the scanning of message content, for example to identify child sexual abuse material.

The provision in the bill is widely seen by privacy campaigners as a means of potentially forcing platforms like WhatsApp and Signal to break or weaken end-to-end encryption.

Dr Nathalie Moreno, a partner at UK law firm Addleshaw Goddard specialising in data protection, cybersecurity and AI, said there was “almost no information” available about how detailed the report to Ofcom would be, and whether the “skilled person” would be a political appointment or technical expert.

“Once the government has been granted powers to intercept private messaging services, that’s it – there’s no going back,” she said.

The NSPCC, the children’s safety charity, has warned that the “shrill” debate over the online safety bill is “losing sight” of the safety rights of child sexual abuse victims.

The Home Office has been contacted for comment.”

Wednesday, July 19, 2023

Two-faced star with helium and hydrogen sides baffles astronomers | Astronomy | The Guardian

Two-faced star with helium and hydrogen sides baffles astronomers

The white dwarf appears to have one side composed almost entirely of hydrogen and the other side made up of helium. It is the first time that astronomers have discovered a lone star that appears to have spontaneously developed two contrasting faces.

“The surface of the white dwarf completely changes from one side to the other,” said Dr Ilaria Caiazzo, an astrophysicist at Caltech who led the work. “When I show the observations to people, they are blown away.”

The object, which is more than 1,000 light years away in the Cygnus constellation, has been nicknamed Janus, after the two-faced Roman god of transition, although its formal scientific name is ZTF J1901+1458. It was initially discovered by the Zwicky Transient Facility (ZTF), an instrument that scans the skies every night from Caltech’s Palomar Observatory near San Diego.

Caiazzo was searching for white dwarfs and one candidate star stood out due to its rapid changes in brightness. Further observations revealed that Janus was rotating on its axis every 15 minutes. Spectrometry measurements, which give the chemical fingerprints of a star, showed that one side of the object contained almost entirely hydrogen and the other almost entirely helium.

If seen up close, both sides of the star would be bluish in colour and have a similar brightness, but the helium side would have a grainy, patchwork appearance like that of our own sun, while the hydrogen side would appear smooth.

The star’s two-faced nature is difficult to explain as its exterior is made of swirling gas. “It’s hard for anything to be separated,” Caiazzo said.

One explanation is that Janus could be undergoing a rare transition that has been predicted to occur during white dwarf evolution.

White dwarfs are the simmering remains of stars that were once like our sun. As the stars age, they puff up into red giants. Eventually, the fluffy outer material is blown away and the core contracts into a dense, fiery hot white dwarf with roughly the mass of our sun while being only the size of Earth.
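
A quick back-of-envelope calculation, using rounded constants, gives a feel for how extreme that is and why the star's gravity can sort its atmosphere by element, as described below.

```python
# Back-of-envelope: the sun's mass packed into an Earth-sized sphere.
import math

M_SUN = 1.99e30    # kg
R_EARTH = 6.37e6   # m
G = 6.67e-11       # gravitational constant, m^3 kg^-1 s^-2

volume = (4 / 3) * math.pi * R_EARTH**3
density = M_SUN / volume                  # ~1.8e9 kg/m^3, nearly two tonnes per cubic centimetre
surface_gravity = G * M_SUN / R_EARTH**2  # ~3e6 m/s^2, hundreds of thousands of times Earth's

print(f"mean density    ~ {density:.1e} kg/m^3")
print(f"surface gravity ~ {surface_gravity:.1e} m/s^2")
```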

The star’s intense gravitational field causes heavier elements to sink to the core and the lighter elements to float, creating a two-tier atmosphere of helium below, topped with a thin layer of hydrogen (the lightest element). When the star cools below about 30,000C (54,032F), the thicker helium layer begins to bubble, causing the outer hydrogen layer to get mixed in, dilute and disappear from view.

“Not all but some white dwarfs transition from being hydrogen- to helium-dominated on their surface,” Caiazzo said. “We might have possibly caught one such white dwarf in the act.”

If so, the scientists believe that an asymmetric magnetic field could be causing the transition to occur in a lopsided way. “If the magnetic field is stronger on one side, it could be limiting convection [bubbling in the helium layer],” Caiazzo said. “On the other side, convection could be winning out and so the hydrogen layer has been lost.”

The findings are published in the journal Nature."

Two-faced star with helium and hydrogen sides baffles astronomers | Astronomy | The Guardian

Tuesday, July 18, 2023

A ChatGPT That Recognizes Faces? OpenAI Worries World Isn’t Ready. - The New York Times

OpenAI Worries About What Its Chatbot Will Say About People’s Faces

"An advanced version of ChatGPT can analyze images and is already helping the blind. But its ability to put a name to a face is one reason the public doesn’t have access to it.

OpenAI’s logo at its offices in San Francisco. The company is testing an image analysis feature for its ChatGPT chatbot. Jim Wilson/The New York Times

The chatbot that millions of people have used to write term papers, computer code and fairy tales doesn’t just do words. ChatGPT, the artificial-intelligence-powered tool from OpenAI, can analyze images, too — describing what’s in them, answering questions about them and even recognizing specific people’s faces. The hope is that, eventually, someone could upload a picture of a broken-down car’s engine or a mysterious rash and ChatGPT could suggest the fix.

What OpenAI doesn’t want ChatGPT to become is a facial recognition machine.

For the last few months, Jonathan Mosen has been among a select group of people with access to an advanced version of the chatbot that can analyze images. On a recent trip, Mr. Mosen, an employment agency chief executive who is blind, used the visual analysis to determine which dispensers in a hotel room bathroom were shampoo, conditioner and shower gel. It went far beyond the performance of image analysis software he had used in the past.

“It told me the milliliter capacity of each bottle. It told me about the tiles in the shower,” Mr. Mosen said. “It described all of this in a way that a blind person needs to hear it. And with one picture, I had exactly the answers that I needed.”

For the first time, Mr. Mosen is able to “interrogate images,” he said. He gave an example: Text accompanying an image that he came across on social media described it as a “woman with blond hair looking happy.” When he asked ChatGPT to analyze the image, the chatbot said it was a woman in a dark blue shirt, taking a selfie in a full-length mirror. He could ask follow-up questions, like what kind of shoes she was wearing and what else was visible in the mirror’s reflection.

“It’s extraordinary,” said Mr. Mosen, 54, who lives in Wellington, New Zealand, and has demonstrated the technology on a podcast he hosts about “living blindfully.”

In March, when OpenAI announced GPT-4, the latest software model powering its A.I. chatbot, the company said it was “multimodal,” meaning it could respond to text and image prompts. While most users have been able to converse with the bot only in words, Mr. Mosen was given early access to the visual analysis by Be My Eyes, a start-up that typically connects blind users to sighted volunteers and provides accessible customer service to corporate customers. Be My Eyes teamed up with OpenAI this year to test the chatbot’s “sight” before the feature’s release to the general public.

Recently, the app stopped giving Mr. Mosen information about people’s faces, saying they had been obscured for privacy reasons. He was disappointed, feeling that he should have the same access to information as a sighted person.

The change reflected OpenAI’s concern that it had built something with a power it didn’t want to release.

The company’s technology can identify primarily public figures, such as people with a Wikipedia page, said Sandhini Agarwal, an OpenAI policy researcher, but does not work as comprehensively as tools built for finding faces on the internet, such as those from Clearview AI and PimEyes. The tool can recognize OpenAI’s chief executive, Sam Altman, in photos, Ms. Agarwal said, but not other people who work at the company.

Making such a feature publicly available would push the boundaries of what was generally considered acceptable practice by U.S. technology companies. It could also cause legal trouble in jurisdictions, such as Illinois and Europe, that require companies to get citizens’ consent to use their biometric information, including a faceprint.

Additionally, OpenAI worried that the tool would say things it shouldn’t about people’s faces, such as assessing their gender or emotional state. OpenAI is figuring out how to address these and other safety concerns before releasing the image analysis feature widely, Ms. Agarwal said.

“We very much want this to be a two-way conversation with the public,” she said. “If what we hear is like, ‘We actually don’t want any of it,’ that’s something we’re very on board with.”

Beyond the feedback from Be My Eyes users, the company’s nonprofit arm is also trying to come up with ways to get “democratic input” to help set rules for A.I. systems.

Ms. Agarwal said the development of visual analysis was not “unexpected,” because the model was trained by looking at images and text collected from the internet. She pointed out that celebrity facial recognition software already existed, such as a tool from Google. Google offers an opt-out for well-known people who don’t want to be recognized, and OpenAI is considering that approach.

Ms. Agarwal said OpenAI’s visual analysis could produce “hallucinations” similar to what had been seen with text prompts. “If you give it a picture of someone on the threshold of being famous, it might hallucinate a name,” she said. “Like if I give it a picture of a famous tech C.E.O., it might give me a different tech C.E.O.’s name.”

The tool once inaccurately described a remote control to Mr. Mosen, confidently telling him there were buttons on it that were not there, he said.

Microsoft, which has invested $10 billion in OpenAI, also has access to the visual analysis tool. Some users of Microsoft’s A.I.-powered Bing chatbot have seen the feature appear in a limited rollout; after uploading images to it, they have gotten a message informing them that “privacy blur hides faces from Bing chat.”

Sayash Kapoor, a computer scientist and doctoral candidate at Princeton University, used the tool to decode a captcha, a visual security check meant to be intelligible only to human eyes. Even as it broke the code and recognized the two obscured words it was given, the chatbot noted that “captchas are designed to prevent automated bots like me from accessing certain websites or services.”

“A.I. is just blowing through all of the things that are supposed to separate humans from machines,” said Ethan Mollick, an associate professor who studies innovation and entrepreneurship at the University of Pennsylvania’s Wharton School.

Since the visual analysis tool suddenly appeared in Mr. Mollick’s version of Bing’s chatbot last month — making him, without any notification, one of the few people with early access — he hasn’t shut down his computer for fear of losing it. He gave it a photo of condiments in a refrigerator and asked Bing to suggest recipes for those ingredients. It came up with “whipped cream soda” and a “creamy jalapeño sauce.”

Both OpenAI and Microsoft seem aware of the power — and potential privacy implications — of this technology. A spokesman for Microsoft said that the company wasn’t “sharing technical details” about the face-blurring but was working “closely with our partners at OpenAI to uphold our shared commitment to the safe and responsible deployment of AI technologies.”


A ChatGPT That Recognizes Faces? OpenAI Worries World Isn’t Ready. - The New York Times