All AI-generated images on Facebook and Instagram will be labeled.

Meta plans to identify and label AI-generated images created with other companies' tools on Facebook, Instagram, and Threads.

Meta already labels AI images produced by its own systems. It believes the new technology it is building will create "momentum" for the industry to tackle AI fakery.

However, an AI specialist told the BBC that such technologies are “easily evadable”.

In a blog post, senior Meta executive Sir Nick Clegg said the company would begin labeling such AI fakes "in the coming months".

He told Reuters the technology was "not yet fully mature", but said the company wanted to "create a sense of momentum and incentive for the rest of the industry to follow".

‘Easily evadable’

However, Prof. Soheil Feizi, who heads the Reliable AI Lab at the University of Maryland, said such a system could be easy to evade.

“They may be able to train their detector to flag some images specifically generated by particular models,” he said.

“But those detectors can be easily evaded by lightweight image processing, and they can also produce many false positives.

“So I don’t think that it’s possible for a broad range of applications.”

Meta concedes its tool will not work for audio and video, even though these are the media at the center of much of the concern about AI fakes.

Instead, the company is asking users to label their own audio and video posts, and says it “may apply penalties if they fail to do so”.

Sir Nick Clegg also acknowledged that it is not feasible to test for text generated by tools such as ChatGPT.

“That ship has sailed,” he told Reuters.

Incoherent media policy

On Monday, the Meta Oversight Board criticized the company’s manipulated media policy, describing it as incoherent, lacking in justification and too narrowly focused on how content has been created.

The Oversight Board is funded by Meta but operates independently of the company.

The criticism followed a ruling on a video of US President Joe Biden, in which footage of him with his granddaughter had been edited to make it appear he was touching her inappropriately.

The video did not violate Meta’s manipulated media policy because it was not altered using artificial intelligence, and it showed Mr. Biden behaving in a way he did not, rather than saying something he did not say, which is what the policy covers.

The Board agreed the video did not breach Meta’s current rules on manipulated media, but recommended that the policy be updated.

Sir Nick told Reuters he agreed with the ruling, acknowledging that Meta’s existing policy “is just simply not fit for purpose in an environment where you’re going to have way more synthetic content and hybrid content than before”.

Since January, the firm has required political advertisers to disclose when their ads use digitally manipulated imagery or video.
