We are living in a time when Artificial Intelligence is dramatically changing the way visual content is created. It may not be long before stock photographers are no longer needed to produce photos for commercial use. Photographers will still create photos for personal use and their own entertainment, but visual content for commercial purposes will be created by graphic artists using AI to tell the story and produce the effects their customer needs.
There will still be a need for some news photographers to photograph real people at real events, but there may be so much fake imagery in circulation that no one will be able to distinguish between what is fake and what is real. There will also continue to be some need for photographers to record weddings and family events.
Check out Generated Photos. Using AI, they have created a collection of 2,687,848 (and growing) fake head shots of people of all ages, sexes and ethnic groups that most viewers would certainly believe are photos of real people. Check out this link.
To build these photos, Generated Photos started with 100,000 “training photos” of diverse people of every age, sex and ethnic group. Many of these were created in their studio. Some were acquired from Mocha Stock, PICHA, and Nappy.co.
Then they use Generative Adversarial Networks (GANs) to produce new images that combine characteristics found in the “training photos” but are totally different from any of them. There are basically two networks. The first is a generative network that creates new photos by mixing and matching elements from the training photos.
The second is the discriminative network. It is also trained on the original data, but it is designed to evaluate whether a given image is real or fake. Over and over, the generative network makes new and different images from new combinations of the 100,000 training photos, and the discriminative network evaluates each one. Both networks use backpropagation to learn from their mistakes and successes: the generative network gets better at creating realistic fakes, while the discriminative network gets better at spotting them. As both improve, the quality of the output gets better and better.
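For readers who want to see the mechanics, here is a minimal sketch of that adversarial loop in PyTorch. The toy image size, network shapes and learning rates are illustrative assumptions; Generated Photos' actual models are far larger and more sophisticated.

```python
# Minimal GAN training loop: a generator learns to fool a discriminator,
# while the discriminator learns to tell real training photos from fakes.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64 * 3   # flattened toy image size (assumption)
NOISE_DIM = 128         # size of the random input to the generator

# Generator: maps random noise to a fake image.
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 1024), nn.ReLU(),
    nn.Linear(1024, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores an image as real (1) or fake (0).
D = nn.Sequential(
    nn.Linear(IMG_DIM, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor):
    batch = real_batch.size(0)
    fake = G(torch.randn(batch, NOISE_DIM))

    # 1) Train the discriminator: real images -> 1, generated images -> 0.
    d_loss = loss_fn(D(real_batch), torch.ones(batch, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: try to make the discriminator output 1 on fakes.
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage: feed batches of real training photos, flattened to IMG_DIM.
# d_loss, g_loss = train_step(torch.randn(32, IMG_DIM))  # placeholder batch
```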
It is important to note that none of the generated images look like any individual training photo, so the people whose actual images were used in training can never be identified. (Occasionally, combining characteristics from several photos produces a new image that happens to resemble one of the photos in the training collection. New images are constantly checked for this possibility and destroyed when it occurs.)
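As an illustration only, a look-alike check of that kind might compare a face embedding of each new image against embeddings of the training photos and reject anything that comes too close. The embedding step and the 0.9 threshold below are assumptions for the sketch, not Generated Photos' actual method.

```python
# Illustrative look-alike check: discard a generated face if it is too
# similar to any training-photo embedding (embeddings would come from some
# face-recognition model; that step is assumed, not shown).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_too_similar(new_embedding: np.ndarray,
                   training_embeddings: list,
                   threshold: float = 0.9) -> bool:
    """Return True if the generated face should be destroyed."""
    return any(cosine_similarity(new_embedding, t) >= threshold
               for t in training_embeddings)
```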
GANs are a relatively new technique in the AI world, one that was only fully demonstrated in 2017. As the technique matures it will only get better.
These images offer a couple of big advantages for end users. No model releases are required, and since the images are not of real people there is never any risk of being sued for unauthorized use. They are also cheaper than real photos.
Check out this link.
Video
When it comes to video, check out Synthesia. They can create a realistic video of any individual saying anything, in up to 38 different languages. All you have to do is select one of their AI presenters and type in whatever you want said; in a few minutes you will have a finished video of the presenter saying what was typed. In some cases the patterns of stress and intonation in a particular language are not perfect, but many large corporations are already using this technique to produce training and marketing videos when they need to present their message visually to a worldwide audience.
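To give a feel for how such a text-to-video workflow could be automated, here is a hypothetical sketch of submitting a script over HTTP and polling for the finished video. The endpoint, field names and presenter ID are placeholders invented for illustration, not Synthesia's documented API.

```python
# Hypothetical client for a text-to-video service of this kind:
# submit a script for an AI presenter, then poll until the render is done.
import time
import requests

API_URL = "https://api.example-video-service.com/v1/videos"  # placeholder URL
API_KEY = "YOUR_API_KEY"  # placeholder credential

def create_video(script: str, presenter: str = "presenter_01") -> str:
    """Submit a script and return the ID of the video being rendered."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": API_KEY},
        json={"presenter": presenter, "script": script, "language": "en-US"},
    )
    resp.raise_for_status()
    return resp.json()["id"]

def wait_for_video(video_id: str, poll_seconds: int = 30) -> str:
    """Poll until rendering completes and return the download URL."""
    while True:
        resp = requests.get(f"{API_URL}/{video_id}",
                            headers={"Authorization": API_KEY})
        resp.raise_for_status()
        data = resp.json()
        if data["status"] == "complete":
            return data["download_url"]
        time.sleep(poll_seconds)
```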
The technology is fast and cheap when compared to a traditional video production. See this Wired Magazine story for more information. Customers can also use themselves or staff members to create video content with a familiar face.
One user commented, “In a world with ever increasing demand for new content, Synthesia enables us not only to natively localize content but also to create and update existing videos for brands to engage and communicate more effectively with global consumers.”
At the recent DMLA conference, Victor Riparbelli of Synthesia showed what the company can do in the way of creating very realistic automobile commercials in dramatic locations. They do this without the need for an actual vehicle (they use models) or being anywhere near the location. Such commercials are created in much less time and at a fraction of what it would cost to do the same with a real vehicle. And they can create the marketing video before the first physical car ever rolls off the assembly line.
Currently, they have an archive of 450,000 locations to use as backgrounds. Anything can be placed in front of those backgrounds, and the angle of the shoot and the lighting can be digitally altered.
They also showed a product video of a shoe in which they adjusted the camera angle from low to high during the shoot, rotated the product 360 degrees, and changed the color and style of the shoe. They can also easily change the background to anything the customer wants, all for a fraction of the cost of a traditional photo shoot. For more information, go to “Synthesia for Agencies” under Solution and Agency at the Synthesia website.
All this technology is just getting started, but it will develop and improve rapidly and soon eliminate a lot of jobs that have traditionally been done by videographers and photographers.