X/Twitter gave a present to everyone this holiday season: the ability to edit (read: manipulate) photos in many ways without your consent. Its ‘Imagine’ function lets you edit any image posted on the site — whether it’s your IRL photo, your artwork, et cetera. Generative AI is becoming harder and harder to escape at this point.

It’s not just individuals who are affected by this — brands, companies, and newsrooms will also be dragged into it whether they like it or not. For example, The New York Times posts photos on the platform much like it does on Instagram (minus the ‘link in bio’ note). I took one sample from its media tab and used a prompt to alter the text.

The result: Grok was able to replicate the font, as if the edit were as legitimate as the original. Still, you can notice some subtle alterations in the details.

I’m sounding the alarm because any news outlet can have its images manipulated by anyone for use in propaganda, slander, misinformation, and disinformation. Right now, the only way we can avoid being misinformed or misled is to keep following the news sources we trust.

For newsrooms, this means treating their verified checkmark as a valued investment, securing their accounts against anyone who wants to crack them open (hacks), and communicating context properly and concisely.

I ran another example, this time with my own outlet, VTuber NewsDrop: