On Thursday, Meta's oversight board urged the tech giant to modernize its rules on porn deepfakes, moving beyond the 'photoshop' era to account for artificial intelligence. The independent board, often described as a supreme court for Meta's content moderation decisions, made the recommendation after examining two cases involving deepfake images of prominent women from India and the United States. In one case, a deepfake shared on Instagram remained up despite a complaint; in the other, the altered image was removed from Meta's platform. Both decisions were appealed to the board.
The board determined that the deepfakes in both cases breached a Meta rule against 'derogatory sexualized photoshop,' a rule it said should be reworded so it is easier to understand. Meta defines the term as covering manipulated images that are sexualized in ways likely to be unwelcome to those depicted. Photoshop, Adobe's image-editing software first released in 1990, became a generic term for image manipulation. The board concluded, however, that anchoring rules about porn deepfakes to 'photoshop' is too narrow given the advent of generative AI, which can create images or videos from simple text prompts.
The oversight board advised Meta to state explicitly that it does not permit AI-generated or AI-manipulated non-consensual sexual content. Although Meta has committed to following the board's rulings on specific content moderation cases, it treats the board's policy suggestions as recommendations that it may adopt at its discretion.