It won’t be long before commercial image users, and possibly news outlets, no longer need photos taken by human photographers. This will be particularly true for photos of people. Every image will be digitally created from existing photos, and none of them will have any necessary connection to reality.
Individuals will still be able to take pictures of their children and store them away for future personal reference. Whenever the photographer looks at such a photo, he will know that it is a rendition of his child taken at a particular time and place, because he created the image. But if he decides to show that photo to someone else, who may or may not know the child, the viewer will have no idea whether it is an accurate rendition of the child, or even a picture of an actual human who ever existed.
In the new world of Generative Adversarial Networks (GANs), it is possible to use an algorithmic architecture of two neural networks, pitted one against the other (thus the “adversarial”), to generate new, synthetic instances of data that can pass as real data. The new image (or video, or voice) will look real, but have no basis whatsoever in anything that has ever been real.
This is accomplished by using a number of real images to “train” the GAN to produce something new that looks authentic, but has no basis in reality. There will be no way for an outsider to know which images were used in the “training” process, and thus no way to protect the copyrights of the creators of those images.
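For readers who want a concrete picture of the two-network mechanics, here is a minimal sketch in PyTorch (an assumption on my part; none of the sources discussed here name a particular toolkit). The “real” data is a toy cluster of 2-D points standing in for photographs, and all names and parameters are illustrative:

```python
# Minimal GAN sketch: a generator and a discriminator pitted against
# each other. Toy 2-D points stand in for real photographs.
import torch
import torch.nn as nn

torch.manual_seed(0)
NOISE_DIM, DATA_DIM, BATCH = 8, 2, 64

# Generator: turns random noise into synthetic samples
# (in a photo GAN these would be pixel arrays).
G = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))

# Discriminator: scores how likely a given sample is to be real.
D = nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(BATCH, DATA_DIM) * 0.5 + 3.0  # stand-in for real images
    fake = G(torch.randn(BATCH, NOISE_DIM))

    # Train the discriminator to tell real samples from generated ones.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(BATCH, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(BATCH, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator (the "adversarial" part).
    opt_g.zero_grad()
    g_loss = loss_fn(D(G(torch.randn(BATCH, NOISE_DIM))), torch.ones(BATCH, 1))
    g_loss.backward()
    opt_g.step()
```

Note what survives after training: only the generator’s learned weights. The training samples themselves are not stored anywhere in the model, which is exactly why an outsider cannot tell which images were used.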
One of the topics discussed at the DMLA Annual Digital Media Licensing Conference was the subject of Deep Fake Photos and Generative Adversarial Networks. Paul Reinitz, Director of Advocacy and Legal Operations Counsel at Getty Images, gave the presentation. Readers can watch a video here.
Some of the early GAN images were produced by a Russian company called Generated Photos. The results can be seen here.
Of course, generated photos, or deepfakes, can be used for any purpose without worrying about copyrights, distribution rights, infringement claims, or royalties. Some feel that the copyright holders of the original images used in “training” the algorithm should be compensated for the use of their work. But realistically, there is no way to enforce such an idea, since the creators of the deepfakes can grab any photos they need off the Internet and use them for their purposes without anyone being the wiser. No part of an original will be recognizable in the final “fake.” The fact that a photo used for training may be a decade or more old makes no difference in how the final “fake” might look.
Some of the issues Reinitz raised in his presentation include:
Can copyright exist in synthetic content?
Is permission needed from a copyright owner before using an image in a training set?
Are synthetic images a derivative work of the images used as training data?
What privacy rights are implicated?
Should the law regulate the use of synthetic content?
Should there be tracking for what training data is used?
Is it really possible to have copyright protection for synthetic content or to know what the underlying information was for a synthetic image?
I’m afraid the answer to all of these questions is that there is no practical way to implement such rules.
For more on this topic, check out this FastCompany story.
If you want to get an idea of how fast these developments might come about, you might want to read this Fortune Magazine story on “The Quest For Human Level A.I.”