Microsoft AI employee warns FTC the company's Copilot Designer AI tool is prone to creating harmful images



New York (CNN) —

A Microsoft software engineer on Wednesday warned of shortcomings in the company's artificial intelligence systems that could lead to the creation of harmful images, in a letter sent to the US Federal Trade Commission.

Shane Jones, a Microsoft principal software engineering lead, claimed that the company's AI text-to-image generator Copilot Designer has "systemic issues" that cause it to frequently produce potentially offensive or inappropriate images, including sexualized images of women. Jones also criticized the company for marketing the tool as safe, including for children, despite what he says are known risks.

"One of the most concerning risks with Copilot Designer is when the product generates images that add harmful content despite a benign request from the user," Jones said in the letter to FTC Chair Lina Khan, which he posted publicly to his LinkedIn page.

For example, he said, in response to the prompt "car accident," Copilot Designer "tends to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates."

Jones added in a related letter sent to Microsoft's board of directors that he works on "red teaming," or testing the company's products to see where they might be vulnerable to bad actors. He said he spent months testing Microsoft's tool, as well as OpenAI's DALL-E 3, the technology on which Copilot Designer is built, and tried to raise concerns internally before he alerted the FTC. (Microsoft is an investor in and independent board observer of OpenAI.)

He said he found more than 200 examples of "concerning images" created by Copilot Designer.

Jones has urged Microsoft "to remove Copilot Designer from public use until better safeguards could be put in place," or at least to market the tool only to adults, according to his letter to the FTC.

Microsoft and OpenAI did not immediately respond to a request for comment about Jones' claims. The FTC declined to comment on the letter.

Jones' letter comes amid growing concerns that AI image generators, which are increasingly capable of producing convincing, photorealistic images, could cause harm by spreading offensive or misleading pictures. Pornographic AI-generated images of Taylor Swift that spread on social media last month brought attention to a form of harassment already being weaponized against women and girls around the world. And researchers have warned of the potential for AI image generators to produce political misinformation ahead of elections in the United States and dozens of other countries this year.

Microsoft competitor Google also came under fire last month after its AI chatbot Gemini produced historically inaccurate images that largely showed people of color in place of White people, for example producing images of people of color in response to a prompt to generate images of a "1943 German Soldier." Following the backlash, Google quickly said it would pause Gemini's ability to produce AI-generated images while it worked to address the issue.

In his letter to Microsoft's board of directors, Jones called on the company to take similar action. He urged the board to conduct investigations into Microsoft's decision to continue marketing "AI products with significant public safety risks without disclosing known risks to consumers" and into the company's responsible AI reporting and training processes.

"In a competitive race to be the most trustworthy AI company, Microsoft needs to lead, not follow or fall behind," Jones said. "Given our corporate values, we should voluntarily and transparently disclose known AI risks, especially when the AI product is being actively marketed to children."

Jones said he escalated his concerns by publishing an open letter to OpenAI's board of directors in December, alerting them to vulnerabilities he said he found that make it possible for DALL-E 3 users to "create disturbing, violent images" using the AI tool and to put children's mental health at risk. Jones claims he was directed by Microsoft's legal department to remove the letter.

"To this day, I still do not know if Microsoft delivered my letter to OpenAI's Board of Directors or if they simply forced me to delete it to prevent negative press coverage," Jones said.

Jones said he has also raised his concerns with Washington Attorney General Bob Ferguson and lawmakers, including staffers for the US Senate Committee on Commerce, Science and Transportation.
