AI means anyone can be a victim of deepfake porn. Here's how to protect yourself
Artificial intelligence continues to advance, and with it has come a surge of deepfake pornography. This form of harassment can target anyone, even people who have never taken or shared a nude photo.
Advanced AI tools can now superimpose a person's face onto another, nude body, or alter existing photos to make someone appear undressed. Targets this year have ranged from Taylor Swift and Congresswoman Alexandria Ocasio-Cortez to ordinary high school students.
AI-generated sexual images are horrifying and frightening for the people they target. The experience is often hardest on young victims, who may be less familiar with navigating these situations online and are left juggling fear and confusion. Fortunately, there are steps victims can take to protect themselves and resources they can turn to for guidance on what to do next.

The first thing anyone who discovers AI-generated sexual images of themselves should do is take a screenshot. The natural impulse is to get the images off the internet as fast as possible, but preserving evidence matters if the activity is later reported to authorities as a crime. Major online platforms such as Google, Meta, and Snapchat have official processes for requesting that explicit images be removed. Victims can also turn to StopNCII.org or Take It Down, nonprofit services that can submit removal requests across multiple sites at once.
In a promising development, a bipartisan group of senators sent an open letter in August to several tech companies, including X and Discord, urging them to join programs that combat nonconsensual explicit imagery and deepfakes. The issue has drawn support from across the legislature, including from Republican Senator Ted Cruz, whose proposed bill would criminalize distributing such images without consent and would require social media platforms to remove the material when notified by victims. With these combined efforts, stronger protections against this misuse of technology may be on the way.