In the Academy Award-winning film “Ex Machina,” audiences are drawn into a trusting relationship between Ava, an artificial intelligence (AI)-powered robot, her creator Nathan, and Caleb, the programmer brought in to test her. That is, until Ava murders Nathan and leaves Caleb trapped in the laboratory while she flies away in a waiting helicopter to enjoy a life outside her walled confines.
These days, what we might assume is a far-fetched science fiction plot may have actually happened. In 2016, a Russian-built AI robot prototype called Promobot IR77 escaped the lab in which it was being developed. Although it was recaptured, it made further attempts to leave. These examples raise the question: Can AI backfire on humans?
It’s a question that has troubled us since the days of Greek mythology, when legends of mechanical servants and intelligent robots first surfaced. Now, in the 21st century, AI is everywhere — in smart toys, digital assistants, medical devices, vehicles, even your refrigerator. AI is the new “big thing,” and companies across industries are leaping in feet first. Adopting new technology comes with question marks, though. The technology industry and its customers are grappling with what boundaries to set for AI as they determine whether these advancements will become friend or foe.
“Companies are trying to figure out how best to implement AI and whether they should embrace it or fear it,” says Daya Nadamuni, senior manager, corporate strategy and competitive intelligence at Adobe. “First, they have to understand what it is, then they have to learn how to use it. We are in the early stages of AI. As the understanding and usage evolve, in another five years, I expect AI to be mainstream. For technology companies, AI is expressing itself in new software products, features, and services. Adobe is a great example of that.” Read our Primer on Artificial Intelligence.
Working in sync
Certainly, AI-driven digital assistants such as Amazon’s Alexa, Apple’s Siri, and the Google Assistant are increasingly commonplace as they work their way into our daily lives and work, helping to spur greater productivity and creativity.
“There are companies that are looking at AI from a general AI perspective. There are other companies very focused on data — on understanding it, and even understanding and influencing consumer behavior,” says Lars Trieloff, principal scientist at Adobe. “Only Adobe is bringing together the whole gamut of experiences, which means understanding content, understanding creativity, and enhancing creativity through AI.” This approach to AI, which is powered by Adobe Sensei, not only facilitates creating new experiences, but also supports what Lars calls enlightening experiences. “Experiences that are not just good looking, but experiences that educate, that entertain, or that persuade depending on what you need,” he says.
Everything AI-related — self-driving cars, quantum computing, a travel concierge, medical progress, chatbots, farming, Amazon.com, etc. — boils down to data. “How do you take large amounts of data and make sense of it so that you can better optimize what customers see and do?” asks Anil Kamath, fellow and VP, technology at Adobe. “That’s clearly something that humans are not able to do with large amounts of data.”
Anil uses the example of sending an email or loading a webpage, tasks that happen far too fast for humans to be directly involved. “Machines help, and what I’ve been pushing for is this idea of AI being an intelligence augmentation or intelligence unification. Human intelligence will always be number one, but with machine learning and artificial intelligence, you can amplify it. You can do much better at certain tasks by combining those two things. We don’t see it as AI — we see it as IA, which is intelligence amplification,” Anil says.
Looking through the telescope
Gavin Miller, fellow and vice president of Adobe Research, offers a sneak peek into where AI and machine-learning frameworks might be headed for Adobe Sensei and photo editing.
“Imagine replacing an entire building on one side of a street scene. Sensei will assess the images and come up with options that would fit best into that shape, and then warp and adjust it so that it looks like it blends in,” he says. “It’s just gestural input, and feels like traditional editing tools, but under the hood, it’s doing a lot of smart reasoning.”
Gavin points out that if an algorithm recognizes mistakes and corrections, it gets smarter every time someone uses it, improving over time with more data. Adding to the conversation, Anil describes generative adversarial networks (GANs) — a setup in which one AI algorithm trains against another without human intervention — that result in a smarter machine. “GANs are truly one of the biggest breakthroughs of the last few years in AI,” Anil says.
First introduced in 2014, GANs pit two neural networks — a generator and a discriminator — against, or adversarial to, each other. In other words, it’s AI that makes itself smarter. A common example is using GANs to create realistic photos from text descriptions. In this case, the generator produces photos that look real, while the discriminator attempts to distinguish those samples from real data. In essence, the generator is trying to trick the discriminator and drive up its error rate. The expected outcome is that the generative network produces better photos, and the discriminative network gets better at weeding out the fakes.
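For readers who want to see the adversarial idea in code, here is a minimal sketch of that generator-versus-discriminator loop, written in PyTorch on toy one-dimensional data. The network shapes, learning rates, and the Gaussian target are illustrative assumptions for this sketch, not a description of how Adobe Sensei or any production GAN is built.

    import torch
    import torch.nn as nn

    # Generator: maps random noise to fake samples.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    # Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        # "Real" data: a Gaussian distribution the generator must learn to mimic.
        real = torch.randn(64, 1) * 0.5 + 3.0
        fake = G(torch.randn(64, 8))

        # Train the discriminator to tell real samples from fakes.
        opt_d.zero_grad()
        d_loss = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        d_loss.backward()
        opt_d.step()

        # Train the generator to fool the discriminator (drive up its error rate).
        opt_g.zero_grad()
        g_loss = bce(D(fake), torch.ones(64, 1))
        g_loss.backward()
        opt_g.step()

    # After training, generated samples should cluster near the "real" mean of 3.0.
    print(G(torch.randn(5, 8)).detach().squeeze())

Each pass tightens the contest: the discriminator's feedback is the only training signal the generator receives, which is why no human labeling is needed once the loop is running.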
And, while Ava used her intelligence for bad, AI can use its superpowers for good. “We believe AI is going to be incredibly powerful, and as Spider-Man’s Uncle Ben said, ‘With great power comes great responsibility,’” Lars says. “We all have to take this responsibility seriously — of trying to do the right thing, and of having a culture that encourages openness and encourages honesty.”
Read more about our future with artificial intelligence in our Human & Machine collection.