
Taylor Swift’s deepfake nudes show how badly we need AI regulation: experts


Last week, AI-generated images depicting superstar Taylor Swift in sexually suggestive and revealing positions spread across the internet, sparking outrage and condemnation. Experts say it is a wake-up call that shows we need real AI regulation now.

Mohit Rajhans, a media and tech consultant at Think Start, told CTV News Channel on Sunday that “we have become the wild west online” when it comes to creating and spreading AI content.

“The train has left the station, artificial general intelligence is here, and it’s up to us now to figure out how we’re going to regulate it.”

It reportedly took X 17 hours to remove the fake images spreading on its platform.

The terms “Taylor Swift,” “Taylor Swift AI,” and “Taylor AI” currently bring up error messages when a user searches for them on X. The company says this is a temporary measure while it works to ensure safety on the platform.

But the pornographic deepfake images of the singer were viewed tens of millions of times before social media sites took action. Deepfakes are AI-generated images and videos that depict real people in fabricated situations, and the central danger is that they are far more realistic than a photoshopped image.

“There is a lot of potential harassment and misinformation spread if this technology is not regulated,” Rajhans said.

Swift’s targeting is part of a disturbing trend of AI being used to create pornographic images of people without their consent, a practice often likened to “revenge porn” and disproportionately aimed at women and girls.

While AI has been abused for years, Rajhans said there is a “Taylor effect” in making people sit up and take notice of the problem.

“What’s happening is… due to the use of Taylor Swift’s image for everything from selling products she has no association with to doctoring her (image) into various sexual acts, many people now know how widespread this technology is,” he said.

Even the White House is paying attention, commenting on Friday that action must be taken.

In a statement on Friday, White House press secretary Karine Jean-Pierre said the spread of the fake Swift nudes is “alarming” and that legislative action is being considered to better address such situations in the future.

“There needs to be legislation, obviously, to deal with this issue,” she said, without specifying which legislation she supported.

SAG-AFTRA, the union representing thousands of actors and performers, said in a statement Saturday that it supports a bill introduced by US Rep. Joe Morelle last year, called the Preventing Deepfakes of Intimate Images Act.

“The development and dissemination of fake images — especially those of an obscene nature — without a person’s consent must be made illegal,” the union said.

At the White House briefing, Jean-Pierre added that social media platforms “have an important role in enforcing their own rules” to prevent the spread of “non-consensual intimate imagery of real people.”

Rajhans said on Sunday that it is clear social media companies must act to deal with deepfakes.

“We need to hold social media companies accountable,” he said. “There should be heavy fines for some of these social media companies. They make a lot of money off of people using social media.”

He pointed out that when people upload a song that isn’t theirs, social media sites already have ways to flag it.

“So why aren’t they using this technology now to moderate social media so deepfakes don’t get through?” he said.

A 2023 report on deepfakes found that 98 percent of all deepfakes online were pornographic in nature, and 99 percent of the individuals targeted were women. Singers and actresses from South Korea are disproportionately targeted, making up 53 percent of individuals depicted in deepfake pornography.

The report emphasizes that technology now exists that allows users to create 60 seconds of deepfake pornographic video for free in less than half an hour.

The speed of development in the world of AI is working against us when it comes to managing the effects of this technology, Rajhans said.

“It’s getting so pedestrian that you and I can just make memes and share them, and no one will know the difference between whether it’s the real truth or something that’s been recreated,” he said.

“It’s not just about Taylor Swift. It’s about harassment, it’s about sharing fake news, it’s about an entire culture that needs to be educated on how this technology is being used.”

It is not known how long it will take to see legislation in Canada restricting deepfakes.

The Canadian Security Intelligence Service called deepfakes a “threat to Canada’s future” in a 2023 report, concluding that “collaboration with partner governments, allies, academia, and industry experts is essential to maintain the integrity of globally distributed information and address the malicious application of emerging AI.”

A proposed regulatory framework for AI systems in Canada, called the Artificial Intelligence and Data Act, is currently before the House of Commons, but it will not take effect this year. If the bill receives royal assent, a consultation process will begin to clarify AIDA, with the framework coming into force no earlier than 2025.
