New service will rate the authenticity of digital images
By the time an image makes its way online, it could have been opened and processed in any number of applications, passed through various hands, and been remixed and manipulated.
Today a new image hosting service, Izitru, is launching to give people new ways to certify the authenticity of a digital image. It’s also a tool that journalists can use to help verify images.
The Izitru website and iOS app can “distinguish an original JPEG file captured with a digital camera from subsequent derivations of that file that may have been changed in some way,” according to the company.
It mixes forensic image analysis with elements of crowdsourcing and human oversight. Izitru also has an API that will enable other services to integrate its technology.
The service is a new offering from Fourandsix Technologies, Inc., a company I previously wrote about. It’s founded by Kevin Connor, a former vice president of product management for Photoshop, and Dr. Hany Farid, a leading image forensics expert. Their initial product, FourMatch, was a verification extension for Adobe Photoshop.
Anyone can use Izitru as a place to host their images and to have their photos subjected to a series of six forensic tests that result in a publicly visible “trust rating.” The Izitru iOS app can also take photos and upload them directly to the site. Watch it in action:
One important note about the six tests the site performs: they are geared toward “proving that a file is the original from a camera, rather than trying to prove it has been manipulated,” Connor said. It’s not about determining whether something has been Photoshopped.
These automated tests help with one important element of photo verification: provenance. You want to know who took the image and whether that image came directly from a digital camera. Shooting with the Izitru app ensures the photo is an original from the phone’s camera. Journalists can also use the Izitru website to upload and test a photo.
“From a journalism standpoint, one of the challenges … is that once files get distributed on social media sites, they automatically get re-compressed and modified to the point that we can't verify them any more,” Connor told me in an email.
Images are also often scraped and altered, making it incredibly difficult to determine the original creator.
Others have recognized this problem. Scoopshot, a crowdsourced photography service, last year launched a photography app with an authenticity rating system. Vice journalist Tim Pool recently launched Taggly, an app that watermarks and attributes images before they get shared online.
A ‘trust rating’ for images
Connor said that with Izitru they want to “encourage people to verify their important photos before they're distributed. This uses an evolution of the same technology that is in our first product, FourMatch, but with the addition of five additional forensic tests.”
Beyond the automated tests that produce the trust rating, anyone viewing an image can press a “challenge” button to signal that they believe the image may not be authentic. Enough challenges will prompt Connor’s team to perform additional analysis. If they determine the image has been manipulated, they will apply a No Trust rating, which can only be assigned after human analysis.
Their ratings from high to low are: High Trust, Medium Trust, Undetermined File History, Potential File Modification and No Trust.
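Because the five ratings form an ordered scale, an integrator might model them as a comparable enum. A minimal sketch: the rating names come from the article, but the numeric ordinal values are my own illustrative choice.

```python
from enum import IntEnum

class TrustRating(IntEnum):
    """Izitru's five trust ratings; higher value means more trust.

    The names are Izitru's; the numeric values are illustrative only.
    """
    NO_TRUST = 0
    POTENTIAL_FILE_MODIFICATION = 1
    UNDETERMINED_FILE_HISTORY = 2
    MEDIUM_TRUST = 3
    HIGH_TRUST = 4

# Ordering lets a host site sort or filter for the most trustworthy images
assert TrustRating.HIGH_TRUST > TrustRating.MEDIUM_TRUST
```

Using `IntEnum` rather than a plain `Enum` makes the ratings directly comparable and sortable, which is what a “badge the most trustworthy images” workflow would need.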
“Though we can't commit to looking at every challenged file, we'll certainly look at any file that gets a significant number of challenges,” Connor said.
“At that point, we can apply some of our other tests, such as clone detection, lighting analysis, etc. If we see a reason to adjust our rating, then we'll do so and add a note to this effect on the page. If we see clear evidence the image content has been manipulated, then we'll apply a No Trust rating. The Challenge button is a community feedback mechanism for us that will allow us to continue to refine our automated testing approach as well.”
It’s only by challenging an image and getting the Izitru team to perform additional tests and analysis that possible manipulation can be detected.
“Unfortunately, the tests that detect specific signs of manipulation can be more open to interpretation, so they don't currently lend themselves to automated usage by people who aren't trained analysts,” Connor told me.
Connor acknowledged that the world of photo apps and upload sites is very competitive. People will need to first know Izitru exists, and then feel inclined to use it in the moment when they’re snapping that important or newsworthy image.
That’s why his team also built an Izitru API to enable other applications to connect to the service and take advantage of its analysis capabilities.
“With [the API], sites that are already getting volumes of images uploaded for sharing could integrate our tests and badge the most trustworthy images as they come in,” he said.
Since the product has only just launched, there aren’t yet any API integrations to share.
He did, however, say that “a stealth citizen journalism startup” has expressed interest in an integration.
It will be interesting to watch whether they can forge partnerships that begin to spread their trust rating, or if partners don't see this as enough of a value add. Social networks and apps, for example, prefer to verify users rather than play any part in rating or verifying content.
Those aren't the only possible partners, of course — but they are where images are shared and engaged with on a huge scale. Will any of them see an advantage in building in an additional trust layer?