Microsoft scrambles to update its free AI software after Taylor Swift deepfakes scandal – saimmalik

Microsoft has cracked down on use of the company’s free AI software after the tool was linked to the creation of sexually explicit deepfake images of Taylor Swift that swamped social media – and raised the specter of a lawsuit by the infuriated singer.

The tech giant pushed an update to its popular tool, known as Designer – a text-to-image program powered by OpenAI’s Dall-E 3 – that adds “guardrails” to prevent the creation of non-consensual images, the company said.

The fake images – showing a nude Swift surrounded by Kansas City Chiefs players in a reference to her highly publicized romance with Travis Kelce – were traced back to Microsoft’s Designer AI before they began circulating on X, Reddit and other websites, tech-focused site 404 Media reported on Monday.

“We are investigating these reports and are taking appropriate action to address them,” a Microsoft spokesperson told 404 Media, which first reported on the update.

“We have large teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users,” the spokesperson added, noting that, per the company’s Code of Conduct, any Designer users who create deepfakes will lose access to the service.

Microsoft blocked its Designer tool from producing AI-generated nude images after fake, explicit deepfakes of Taylor Swift at a Kansas City Chiefs game circulated on social media in an apparent reference to her relationship with Travis Kelce. Getty Images

Representatives for Microsoft did not immediately respond to The Post’s request for comment.

The update comes as Microsoft CEO Satya Nadella said tech companies need to “move fast” to crack down on the misuse of artificial intelligence tools.

Nadella, whose company is a key investor in ChatGPT creator OpenAI, described the spread of fake pornographic images of the “Cruel Summer” singer as “alarming and terrible.”

“We have to act. And quite frankly, all of us in the tech platform, regardless of what your standing on any particular issue is,” Nadella said, according to a transcript released ahead of an interview on NBC Nightly News, which will air Tuesday.

“I don’t think anybody would want an online world that is completely not safe for both content creators and content consumers.”

The Swift deepfakes were viewed more than 45 million times on X before finally being removed after about 17 hours.

A source close to Swift was appalled that “the social media platform even let them be up to begin with,” the Daily Mail reported, especially considering X’s Help Center outlines policies that prohibit posting “synthetic and manipulated media” as well as “non-consensual nudity.”

Over the weekend, Elon Musk’s social media platform took the extraordinary step of blocking any searches involving Swift’s name from yielding results — even ones that were innocuous.

Microsoft added more guardrails to its artificial intelligence image generator on the heels of CEO Satya Nadella warning that tech companies need to “move fast” to crack down on the misuse of AI. Getty Images

X executive Joe Benarroch described the move as a “temporary action and done with an abundance of caution as we prioritize safety on this issue.”

The ban remained in effect Monday.

The controversy could mean another headache for Microsoft and other AI leaders who are already facing mounting legal, legislative and regulatory scrutiny over the burgeoning technology.

White House Press Secretary Karine Jean-Pierre described the deepfakes trend as “very alarming” and said the Biden administration was “going to do what we can to deal with this issue.”

The rise of AI deepfakes could emerge as a key theme later this week when Meta CEO Mark Zuckerberg, TikTok CEO Shou Chew and other prominent tech bosses testify before a Senate panel.

Legislators in New York and New Jersey have been working to make the nonconsensual sharing of AI-generated pornographic images a federal crime, punishable by jail time, a fine or both. AFP via Getty Images

Earlier this month, Rep. Joseph Morelle (D-NY) and Rep. Tom Kean (R-NJ) reintroduced a bill that would make the nonconsensual sharing of digitally altered pornographic images a federal crime, punishable by jail time, a fine or both.

The “Preventing Deepfakes of Intimate Images Act” was referred to the House Committee on the Judiciary, but the committee has yet to decide whether or not to advance the bill.
