U.S. lawmakers are pushing for legislation criminalizing deepfake images, a push triggered by the fake explicit photos of Taylor Swift.
Following the widespread dissemination of obscene fake photos of Taylor Swift, lawmakers in the United States are advocating for legislation that would make the creation of deepfake images a criminal offense.
The photographs circulated on several social media platforms, including X and Telegram. In a post on X, U.S. Representative Joe Morelle condemned the spread of the images, calling it awful.
He pointed lawmakers to the Preventing Deepfakes of Intimate Images Act, legislation he drafted to make non-consensual deepfakes a federal felony, and called for immediate action on the matter.
Deepfakes use artificial intelligence (AI) to alter videos by manipulating a person's face or body. Although there are no federal laws specifically targeting the creation or distribution of such fake images, several lawmakers are working to address the issue.
In a post on X, Representative Yvette Clarke said the situation with Taylor Swift is nothing new. She noted that women have been victims of this technology for years and emphasized that breakthroughs in artificial intelligence have made deepfakes more accessible and affordable to produce.
X said in a statement that it is aggressively removing the photographs and taking action against the accounts responsible for disseminating them. The platform added that it is closely monitoring the situation to promptly resolve any further violations and ensure the content is removed.
In the United Kingdom, lawmakers made the dissemination of deepfake pornography illegal through the Online Safety Act, passed in 2023.
According to the 2023 "State of Deepfakes" report, the vast majority of deepfakes uploaded to the internet are pornographic, and nearly 99% of the people targeted by such content are women.
Concerns about the spread of AI-generated material have grown in recent years. The World Economic Forum highlighted the negative implications of artificial intelligence in its 19th Global Risks Report, outlining the detrimental effects, whether intentional or unintended, that advances in AI and related technologies, including generative AI, can inflict on individuals, businesses, ecosystems, and economies.
The Canadian Security Intelligence Service, Canada's primary national intelligence agency, has also raised concerns about online disinformation campaigns that use AI-generated deepfakes.
In a report released on June 12, the United Nations identified AI-generated media as a significant and pressing threat to information integrity, particularly on social media. The UN noted that the risk of online disinformation has grown with rapid technological breakthroughs, particularly in generative artificial intelligence, singling out deepfakes.