Media Sites Should Prevent AI Misinformation

A fake image of an explosion near the Pentagon was shared by a verified Twitter account, @Deltaone.

by Marceline Meador ‘24

Several recent major developments in open-source AI technology have opened the floodgates for opportunity and innovation in this new field. However, as is the case with all open-source software, this technology can be misused. While text-based AI tools such as ChatGPT, DaVinci, and Bing Chat have their own potential exploits, image-based tools such as Midjourney, DALL-E, and Stable Diffusion open up an entirely new realm of risk. The ability to create realistic, near-perfect images out of thin air makes it easy to spread misinformation and fool the public.

Twitter was the first to act on this realization, with an addendum to its “Community Notes” feature. Previously exclusive to text-based posts, Community Notes was expanded to let users attach notes to posts containing potentially misleading photos. This allows a freer, community-based method of moderation to combat fabricated images and false information, and it is incredibly beneficial to the average social media user, as several incidents of AI-fabricated photos had already caused confusion on the platform.

One such incident occurred before Twitter’s photo notes feature existed, when a fabricated image purporting to show an attack on the Pentagon spread across the platform. One of the most beneficial features of Twitter’s image community notes is an AI-based image recognition function: images detected to be similar to an image already deemed misinformation are labeled with the same community note. This aids users approved to submit community notes by eliminating the need to repeatedly add tags and notes to the same image each time it is reposted.
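Twitter has not published how its image matching works, but one common technique for spotting near-duplicate images is perceptual hashing. The sketch below is purely illustrative: it implements a simple “average hash” on a tiny grid of grayscale values (real systems would first downscale a full image, e.g. to 8×8 pixels) and compares two hashes by how many bits differ.

```python
# Hypothetical sketch of perceptual ("average") hashing, one common way to
# detect near-duplicate images. This is NOT Twitter's actual method, which
# is not public; it only illustrates the general idea of matching reposts.

def average_hash(pixels):
    """Hash a 2D grid of grayscale values (0-255).
    Each bit records whether a pixel is brighter than the image's average,
    so small edits or compression noise barely change the hash."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > avg)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means the images look alike."""
    return bin(h1 ^ h2).count("1")

# A 4x4 "image" and a lightly edited repost of it (slightly shifted values).
original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
repost = [[198, 201, 12, 9],
          [199, 200, 11, 10],
          [10, 12, 199, 200],
          [11, 10, 200, 198]]

d = hamming_distance(average_hash(original), average_hash(repost))
print(d)  # → 0: the repost hashes identically, so a note could carry over
```

Because the hash tolerates small pixel-level changes, a reposted copy of a flagged image maps to (nearly) the same hash, which is what would let a platform automatically carry an existing community note over to the repost.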

Twitter, however, is not the only platform seeking to raise awareness of AI-generated content. YouTube has come forward with a policy of its own. In a similar vein, it advocates not for the direct removal of AI-generated content but for awareness of it. While Twitter’s policy depends on community users discovering and labeling AI content, YouTube’s requires uploaders to disclose whether content is AI-generated before posting it. This is a step in the right direction, but assuming that those posting misleading AI-generated content will openly admit to it is as naive as it is foolish. It is a platform’s responsibility to ensure that users do not post harmful misinformation, because such misinformation is not a mere possibility but a definite eventuality.