Google Search’s AI falsely said Obama is a Muslim. Now it’s turning off some results



New York (CNN) —

Google promised its new artificial intelligence search tools would “do the work for you” and make finding information online faster and easier. But just days after the launch, the company is already walking back some factually incorrect results.

Google earlier this month launched an AI-generated search results overview tool, which summarizes search results so that users don’t have to click through multiple links to get quick answers to their questions. But the feature came under fire this week after it provided false or misleading information in response to some users’ questions.

For example, multiple users posted on X that Google’s AI summary said that former President Barack Obama is a Muslim, a common misconception. In fact, Obama is a Christian. Another user posted that a Google AI summary said that “none of Africa’s 54 recognized countries start with the letter ‘K’” — clearly forgetting Kenya.

Google confirmed to CNN on Friday that the AI overviews for both queries had been removed for violating the company’s policies.

“The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web,” Google spokesperson Colette Garcia said in a statement, adding that some other viral examples of Google AI flubs appear to have been manipulated images. “We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback. We’re taking swift action where appropriate under our content policies.”

The bottom of each Google AI search overview acknowledges that “generative AI is experimental.” And the company says it conducts testing designed to mimic potential bad actors in an effort to prevent false or low-quality results from showing up in AI summaries.

Google’s search overviews are part of the company’s larger push to incorporate its Gemini AI technology across all of its products as it attempts to keep up in the AI arms race with rivals like OpenAI and Meta. But this week’s debacle shows the risk that adding AI, which tends to confidently state false information, could undermine Google’s reputation as the trusted source for searching for information online.

Even on less serious searches, Google’s AI overview sometimes appears to provide wrong or confusing information.

In one test, CNN asked Google, “how much sodium is in pickle juice.” The AI overview responded that an 8-fluid-ounce serving of pickle juice contains 342 milligrams of sodium, but that a serving less than half that size (3 fluid ounces) contained more than double the sodium (690 milligrams). (Best Maid pickle juice, for sale at Walmart, lists 250 milligrams of sodium in just 1 ounce.)

CNN also searched: “data used for google ai training.” In its response, the AI overview acknowledged that “it’s unclear if Google prevents copyrighted materials from being included” in the online data scraped to train its AI models, referencing a major concern about how AI companies operate.

It’s not the first time Google has had to walk back the capabilities of its AI tools over an embarrassing flub. In February, the company paused the ability of its AI image generator to create pictures of people after it was blasted for producing historically inaccurate images that largely showed people of color in place of White people.

Google’s Search Labs webpage lets users in areas where AI search overviews have rolled out toggle the feature on and off.
