No one's ready for this
Reimagine, the AI editing tool built into Google's Pixel 9, makes convincing fake photos trivially easy to create, and neither Google nor the social platforms are ready to screen for them
This erosion of the social consensus began before the Pixel 9, and it will continue after it. Still, the phone's new AI capabilities are notable not just because the barrier to entry is so low but because the safeguards we ran into were astonishingly anemic. The industry's proposed standard for watermarking AI-generated images is bogged down in the usual standards-body slog, and it was nowhere to be found when we tried out Magic Editor. Photos modified with the Reimagine tool simply have a line of removable metadata added to them. (The inherent fragility of this kind of metadata is exactly what SynthID, Google's theoretically unremovable watermark, was invented to address.) We found Reimagine, which modifies existing photos, far more frightening than the output of the phone's pure prompt-to-image generator, which does get tagged with a SynthID watermark.
Reimagine is an extension of last year's Magic Editor tools, which let you select and erase portions of a scene or swap out the sky. Those were nothing shocking. Reimagine doesn't just take things a step further; it kicks the door down. You can select any nonhuman object or portion of a scene and type a text prompt to generate something in that space. The results are often uncannily convincing: the lighting, shadows, and perspective usually match the rest of the photo. You can add fun stuff, of course, like wildflowers or rainbows. But that's not the problem.
In a week of testing, we added car wrecks, bombs in public places, sheets that appear to cover bloodied corpses, and drug paraphernalia to photos. That seems bad. As a reminder, this isn't some piece of specialized software we went out of our way to use; it's all built into a phone that my dad could walk into Verizon and buy.
Google's assurances that the Pixel 9 will not be an unfettered bullshit factory are thin on substance. As with any generative AI tool, Google spokesperson Alex Moriconi told us, it's possible for it to create offensive content when a user asks for it. "That said, it's not anything goes. We have clear policies and Terms of Service on what kinds of content we allow and don't allow, and build guardrails to prevent abuse. At times, some prompts can challenge these tools' guardrails, and we remain committed to continually enhancing and refining the safeguards we have in place."
The kind of creative prompting it takes to work around the content filters is a clear violation of those policies. It's also a violation of Safeway's policies to ring up your organic peaches as conventionally grown at the self-checkout, not that I know anyone who would do that. And someone with the worst intentions isn't concerned with Google's terms and conditions, either. What's most troubling about all of this is the lack of robust tools to identify this kind of content on the web. Our ability to make problematic images is running way ahead of our ability to identify them.
Fooling the feed: what happens when an AI-edited photo leaves the phone
There is no way to tell an image was edited with AI beyond a single tag in the file's metadata. That's all well and good, but standard metadata is easily stripped from an image simply by taking a screenshot. Moriconi says Google uses a more robust tagging system, SynthID, for images made with the Pixel Studio prompt generator, since those are 100 percent synthetic. But images edited with Magic Editor don't get those tags.
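To make that fragility concrete, here is a minimal sketch in Python (assuming the Pillow library and a hypothetical edited.jpg whose EXIF block carries an AI-edit tag) of what a screenshot effectively does: re-render the pixels into a fresh file that never inherits the original's metadata.

```python
# Minimal sketch: tag-style provenance doesn't survive a pixel re-render.
# Assumes Pillow (pip install pillow) and a hypothetical "edited.jpg"
# whose EXIF metadata carries an AI-edit tag.
from PIL import Image

original = Image.open("edited.jpg")
print(dict(original.getexif()))  # EXIF entries, including any AI-edit tag

# Copy the pixels into a brand-new image, much as a screenshot would.
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))
clean.save("laundered.jpg")  # identical pixels, no metadata

print(dict(Image.open("laundered.jpg").getexif()))  # {}: the tag is gone
```

No deliberate tampering is required; any pipeline that re-encodes an image's pixels, from screenshots to apps that recompress uploads, drops the tag by default. That is exactly the weakness a pixel-level watermark like SynthID is meant to survive.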
This is all about to flip: the default assumption about a photo is going to become that it's faked, because creating realistic and believable fake photos is now trivially easy. We are not prepared for what happens after.
It's also easy to put a misleading photo in front of a lot of eyeballs quickly; the same phone you fake a photo with is the one you publish it with. We uploaded one of our reworked images to an Instagram story and took it down less than an hour later. Meta didn't automatically tag it as AI-generated, and I'm sure nobody who saw it would have been the wiser.
Who knows, maybe everyone will read and abide by Google's AI policies and use Reimagine to put wildflowers and rainbows in their photos. That would be lovely! If they don't, it's a good idea to apply extra skepticism to any photo you see online.
Photography has been used in the service of deception for as long as it has existed. (Consider Victorian spirit photos, the infamous Loch Ness monster photograph, or Stalin's photographic purges of IRL-purged comrades.) But it would be disingenuous to say that photographs have never been considered reliable evidence. Everyone reading this article grew up in an era when a photograph was, by default, a representation of the truth. A staged scene with movie effects, a digital manipulation, or, more recently, a deepfake: these were potential deceptions to account for, but they were outliers in the realm of possibility. It took specialized knowledge and specialized tools to sabotage the intuitive trust in a photograph. Fake was the exception, not the rule.
In an interview with Wired, the group product manager for the Pixel camera described the editing tools as a way to help you create the moment as you remember it, authentic to your memory. A photo, in this telling, is no longer a supplement to fallible human recollection but a mirror of it. And as photographs lose their standing as evidence, the dumbest shit will devolve into courtroom fights over witnesses and corroborating evidence.
Even before AI, those of us in the media had been working in a defensive crouch, scrutinizing the details and provenance of every image and vetting for misleading context or photo manipulation. After all, every major news event comes with an onslaught of misinformation. But the coming paradigm shift implicates something far more fundamental than the practiced suspicion we call digital literacy.
The persistent cry of "Fake news!" from Trumpist quarters presaged this era of unmitigated bullshit, in which the impact of the truth will be deadened by the firehose of lies. The next Abu Ghraib will be buried under a sea of AI-generated war crime snuff. The next George Floyd will go unnoticed and unvindicated.
An alligator in a pizzeria, a silly costume on a cat, an extra tree in a backdrop: most of the images made with these tools will be relatively harmless. But the deluge changes our relationship to the concept of the photograph altogether. Consider that the United States has seen extraordinary social upheaval in recent years sparked by videos of police brutality; where the authorities obscured reality, those videos told the truth.
The truth about photos: wars, revolutions, and everyday evidence
The onus has always been on those denying the truth of a photo to prove their claims. The flat-earther is out of step with the social consensus not because they fail to understand astrophysics (how many of us actually understand astrophysics, after all?) but because they must justify why certain photographs and videos are not real. They must invent a vast state conspiracy to explain the steady output of satellite photographs that capture the curvature of the Earth. The 1969 Moon landing demands a soundstage.
No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus. Think of all the ways the assumed integrity of a photograph has backed up the truth of your own experiences. The preexisting ding in the fender of your rental car. The leak in your ceiling. The arrival of a package. An actual, non-AI-generated cockroach in your takeout. When wildfires encroach on your neighborhood, how do you convey to friends and family just how thick the smoke outside is?
If I say Tiananmen Square, you will, most likely, envision the same photograph I do. Ditto Abu Ghraib or napalm girl. These images have defined wars and revolutions; they have encapsulated truth to a degree that is impossible to fully express. There was never any need to argue for why they mattered. Our trust in photography ran so deep that when we did discuss the veracity of images, it was worth belaboring the point that photographs could, sometimes, be fake.
Source: No one’s ready for this
The explosion from an old brick building. A cockroach box in a box of takeout, and an explosion from the side of an old building
Anyone who buys a Pixel 9, the latest model of Google's flagship phone, available starting this week, will have access to the easiest, breeziest user interface for top-tier lies, built right into their mobile device. This is all but certain to become the norm, with similar features already available on competing devices and rolling out to others in the near future. When a smartphone "just works," it's usually a good thing; here, it's the problem in the first place.
An explosion from the side of an old brick building. A crashed bike in a city intersection. A cockroach in a box of takeout. Each of these images was created in less than 10 seconds with the Reimagine tool in Magic Editor. They are crisp. They are in full color. They are high-fidelity. There is no telltale sixth finger. These photographs are fake, and they are all extremely convincing.