It’s well established that deepfake images of people are problematic, but it’s now clearer that bogus satellite imagery could also represent a threat. The Verge reports that University of Washington-led researchers have developed a way to generate deepfake satellite photography as part of an effort to detect manipulated images.
The team used an AI algorithm to generate the deepfakes, applying visual characteristics learned from satellite images of one city onto base maps of another. They could take Tacoma’s roads and building footprints, for example (at top right in the picture below), but superimpose Beijing’s taller buildings (bottom right) or Seattle’s low-rises (bottom left). They could apply greenery, too. While the execution isn’t flawless, it’s close enough that the scientists believe viewers might blame any oddities on low image quality.
Lead author Bo Zhao was quick to note there could be positive uses for deepfaked satellite snapshots. You could simulate how locations looked in the past to help understand climate change, study urban sprawl, or predict how a region will evolve by filling in the blanks.
However, there’s little doubt the AI-created fakes could be used for misinformation. A hostile country could send falsified images to mislead military strategists, who might not notice that a missing building or bridge could be a valuable target. Fakes could also be used for political aims, such as hiding evidence of atrocities or suppressing climate science.
Researchers hope this work will help develop a system to catch satellite deepfakes, much as early tools already exist to spot fakes of people. However, it might be a race against time: it didn’t take long for early deepfake tech to escape from academia into the real world, and that could well happen again.