We are fast approaching an age in which we will be compelled to admit that the days when a photograph was a true depiction of reality are gone. Though photo editing software made its debut in the late 1980s, its use was confined largely to the advertising industry, and the creation and circulation of ‘fake’ images did not become widespread until decades later. Fast forward to the arrival of Artificial Intelligence (AI), and then to the world of Generative AI, and things have gone for a toss, especially when it comes to distinguishing real images from fake ones.
All one needs is a capable smartphone to churn out digitally altered images. Deleting a few onlookers from a vacation selfie may be innocuous, but adding disturbing elements to an otherwise normal image is not. For example, the new feature on the Pixel 9 series is far too good at creating disturbing imagery, and the safeguards in place are far too weak, tech reviewer Allison Johnson, who has a special interest in mobile photography and telecom, pointed out recently in The Verge. Google is the latest phone company this year to announce AI photo editing tools, following Samsung’s sketch-to-image feature and Apple’s Image Playground, which arrives this fall. The Pixel 9 has a new tool called Reimagine, and after using it for a week with a few of her colleagues, Allison says she is more convinced than ever that none of us are ready for what’s coming.
Reimagine is a logical extension of last year’s Magic Editor tools, which let one select and erase parts of a scene or change the sky to look like a sunset. With Reimagine, one can select any nonhuman object or portion of a scene and type in a text prompt to generate something in that space. The results are often very convincing, even uncanny. One can add fun stuff, but that’s not the problem, according to Allison. A couple of her colleagues helped her test the boundaries of Reimagine with their Pixel 9 and 9 Pro review units, and they got it to generate some very disturbing things. Some of this required creative prompting to work around the obvious guardrails. “If you choose your words carefully, you can get it to create a reasonably convincing body under a blood-stained sheet,” Allison wrote.
In their week of testing, they added car wrecks, smoking bombs in public places, sheets that appear to cover bloody corpses, and drug paraphernalia to images. That seems bad enough. What is worse is that the software comes built into a phone anyone can buy off the shelf. According to Allison, what is most troubling about all of this is the lack of robust tools to identify such content on the web. “Our ability to make problematic images is running way ahead of our ability to identify them. When you edit an image with Reimagine, there’s no watermark or any other obvious way to tell that the image is AI-generated — there’s just a tag in the metadata. That’s all well and good, but standard metadata is easily stripped from an image simply by taking a screenshot.”
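The fragility Allison describes is easy to demonstrate. In a JPEG file, EXIF metadata (where such an AI-edit tag would typically live) sits in an APP1 segment; any tool that rewrites the image without copying that segment — a screenshot, a re-encode, a messaging app — silently discards it. The short Python sketch below is illustrative only: it hand-builds a minimal stand-in for a JPEG (not a real photo, and not Google's actual tagging format) and shows how trivially the tag disappears.

```python
# Illustrative sketch: metadata-based AI tags are fragile.
# EXIF lives in a JPEG APP1 segment (marker 0xFF 0xE1); removing
# that segment removes the tag while leaving the image intact.

def strip_app1(jpeg: bytes) -> bytes:
    """Return the JPEG bytes with all APP1 (EXIF/XMP) segments removed."""
    out = bytearray(jpeg[:2])              # keep the SOI marker (FF D8)
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xD9:                 # EOI: end of image
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        if marker != 0xE1:                 # drop APP1, keep all else
            out += segment
        i += 2 + length
    out += jpeg[i:]                        # copy the remainder verbatim
    return bytes(out)

# A minimal stand-in: SOI, one APP1 segment carrying a hypothetical
# "AI-edited" tag, then EOI. Not a decodable photo.
payload = b"Exif\x00\x00AI-edited"
app1 = b"\xff\xe1" + (len(payload) + 2).to_bytes(2, "big") + payload
fake_jpeg = b"\xff\xd8" + app1 + b"\xff\xd9"

stripped = strip_app1(fake_jpeg)
print(b"AI-edited" in fake_jpeg)   # True: the tag is present
print(b"AI-edited" in stripped)    # False: one rewrite and it is gone
```

A screenshot is even blunter: it re-renders the pixels from scratch, so no segment of the original file survives at all. This is why approaches that embed provenance in the pixels themselves (watermarking) or in signed manifests are considered sturdier than a metadata tag alone.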
When Allison and the team asked Google for comment on the issue, company spokesperson Alex Moriconi responded with the following statement: “Pixel Studio and Magic Editor are helpful tools meant to unlock your creativity with text to image generation and advanced photo editing on Pixel 9 devices. We design our Generative AI tools to respect the intent of user prompts and that means they may create content that may offend when instructed by the user to do so. That said, it’s not anything goes. We have clear policies and Terms of Service on what kinds of content we allow and don’t allow, and build guardrails to prevent abuse. At times, some prompts can challenge these tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place.” Is this reassuring enough? The jury is out.