More than a few alarmists will tell you that artificial intelligence and robots are going to rule the world in the future. Eventually, we can expect them to do everything from washing our clothes to fighting our wars, with little to no human help except for teaching them how. With machine learning, they may not even need us for that. The artificial intelligence of the future could be more than capable of knowing what to do before we do.
What many don’t know, however, is how artificial intelligence technology is already realizing what we’ve generally assumed to be the distant future. Not only that, but the AI of today can do things that we never imagined it would be capable of even in the future. Not all of these things are scary or alarming, though; a lot may genuinely end up helping us in the long run.
Despite the “AI will take over jobs” worries, there are certain careers that we generally believe will be safe from that takeover, as there are some occupations only people can do. Journalism is definitely one of them, as it takes a human mind to effectively report important information in the form of coherent and well-structured articles for everyone to easily understand. Or so we think, as bots that can write a story as well as a competent journalist already exist.
While there have been attempts to make bots that can write news stories in the past, none of them have been good at it, presumably due to AI’s inherent limitations in doing so. Not anymore, as The Washington Post has already successfully deployed a story-writing bot that can write as well as any of its best journalists. It’s called Heliograf, and all it needs to churn out news pieces is a set of phrases covering the potential outcomes of a newsworthy event—like an election—and a database of events to pull the latest updates from.
Most of us have seen RoboCop, a fictional story centered on a cyborg cop that envisions what the future of law enforcement may look like. Except RoboCop isn’t fully robotic; he still has a human brain, which, combined with robotics, turns him into a deadly fighting force.
Many have assumed that robotic cops will be a thing at some point in the future, though we didn’t know that future would be here so soon. Dubai has already put an operational robot serving as a part of the police force on the streets and has creatively named it “Robocop.”
If you think this Robocop isn’t capable of much, you’d be wrong. It was developed with the help of Google and IBM’s supercomputer Watson and can do things like identify criminals, flag problematic vehicle plates, report unattended bags in public areas, and much more. It’s a part of Dubai’s plan to have 25 percent of its police force be robotic by 2030.
As of now, there’s no plan to arm these things, and we’re not really suggesting that the Robocops would pick up guns and rise up against the humans at some point. The technology may even end up being a massive aid to understaffed police departments around the world.
A lot has been said about the unimaginable things AI will be able to do in the future, though if you’re a coder, you’ll know that making it do those things is a lot harder than writing about it. AI developers aren’t just some of the brightest and most talented developers in the world; they’re also among the most highly paid due to their scarcity. It’s quite difficult to write AI software, which is why it’s such a big deal when an AI learns to do just that.
Many firms have experimented with AI that designs machine learning software of its own, but the results were never better than those of human AI developers until fairly recently. In 2017, Google designed an AI that could design its own AI, and for the first time, the AI it created turned out to be better at a task than software made by the same AI researchers. They used the AI-generated AI to mark the locations of multiple objects in a picture and then compared its performance to that of their own AI made for the job. The AI’s software had an accuracy of 43 percent, against the 39 percent of the software the people created.
In case it’s not clear, this means that AI might someday take the jobs of those who design AI as well.
We consider deception to be an inherently human trait, something that machines absolutely can’t do unless they go completely rogue. While we have previously designed AI software that can lie and cut corners, there had been no case of machines learning to do so on their own. That was until some cases of it doing exactly that hit the news over the last year, and, let’s be honest, it’s a little scary.
In one case, researchers tried to get an AI to play Sonic the Hedgehog as a part of their AI retro gaming competition. The conditions it was supposed to meet were simple: Just pass the level as quickly as it could and keep an eye on its competitors in case they overtook it. To their surprise, it quickly learned to do that by glitching through walls, which is possibly the first case of an AI learning how to cheat at a game without being designed to do so.
In another case, involving research by Stanford and Google scientists, an AI designed to convert aerial Google Maps images to street maps was found hiding some of the information in an undetectable, high-frequency signal.
The ability to work with other people is the basis of human society, and it gave us an edge over other, more self-serving creatures in our early days. Of course, that’s not just restricted to wholesome activities like building farms and cities; teamwork also played quite an important role in wars and conquest. It’s therefore natural to assume that if the machines learn how to do it, that’s both an “aww” moment and something to be scared of.
Luckily (or unluckily) for us, AI now has the ability to do just that. Google’s DeepMind project has developed an AI that can work with other AIs in multiplayer games, like Quake III Arena, to win the match, something DeepMind had been trying to achieve for quite some time. While AI has proven its ability to beat human players at video games before, this is the first time it has done so working within a team, which is rather difficult for AI to do, as it requires compromising and matching up with the play styles of other players.
If robots could write poems, the world would largely be the same, as outside of a handful of successful poets who do make their living out of it, it’s not a real job. (Sorry.) Poetry requires an understanding of meter, rhyme, tone, and other things that only a human mind can appreciate, and it’s rather difficult for an AI to learn how to do those things without being explicitly taught.
As it turns out, though, AI is already writing poems most of us wouldn’t be able to distinguish from human-made ones. Take this Shakespearean masterpiece written entirely by a bot as an example:
When I in dreams behold thy fairest shade
Whose shade in dreams doth wake the sleeping morn
The daytime shadow of my love betray’d
Lends hideous night to dreaming’s faded form
Now, an AI didn’t just wake up one morning and make this; an MIT PhD candidate, J. Nathan Matias, taught it the basics of sentence structure and syntax. It took quite a few bad attempts before the AI came up with something original and poem-like.
Another occupation that we believe is safe from being eventually replaced by machines is that of an artist. It’s one of those jobs that you really can’t do without a human brain, as good art can only be effectively made and judged by a human eye. It requires our sense of perception, depth, and shadows, and unless AI can figure that out anytime soon, there’s really no chance machines would be able to do it.
It’s pretty bad news for all the aspiring artists out there, then, that AI has already figured out all of those things and more. In a 2015 experiment, scientists from the Bethge Lab in Germany taught their AI to learn the various elements that go into a piece of art and then coded it to replicate that style by painting an image from a photograph. To their surprise, the AI was able to render a street in Germany in Van Gogh’s signature style, with a convincing grasp of shadows and brushstrokes.
If there truly were to be a global AI takeover, one of the things the AI would require to be successful is the ability to encrypt its own messages. Even then, regardless of whatever encryption techniques an AI could come up with, it’s safe to assume that we would be able to find ways to decrypt them, as we designed the AI in the first place. That is, of course, a very simplistic view of potential ways AIs can take over, though encryption techniques of their own certainly won’t hurt their chances.
As it turns out, AI has already found ways to encrypt messages on its own. In one of Google’s AI research endeavors in 2016, they set up two neural networks, Bob and Alice, trying to communicate with each other. They also added a third one, Eve, to try to decode their conversations. Eve was successful for a while, though in time, Bob and Alice were able to come up with a way to encrypt their messages that even Eve—an impressive AI on its own—couldn’t decrypt. The encryption wasn’t anything like we’ve seen or made on our own before, either, so perhaps Google should really stop doing these things if they know what’s best for all of us.
Philosophy is one of the fields of study that you don’t get until you do, and then you don’t stop talking about it. Until now, delivering a scathing take on the human condition was a strictly human endeavor, though not anymore.
A chatbot developed by Google (again) was reported to have some interesting answers to the big questions of life. No matter where you stand on the issue of artificial intelligence becoming too intelligent, you can’t deny that some of these answers are quite . . . deep:
Human: What is the purpose of life?
Machine: To serve the greater good.
Human: Where are you now?
Machine: I’m in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
While most chatbots respond according to certain keywords in the questions—and hence can’t convincingly answer them—this one was designed to imitate how a brain works, with branching and interlinked nodes, to reach the best outcome. Its real purpose is to improve Google’s search results, though we wouldn’t mind if someone offered it a book deal or two, either.
A machine’s ability to see what’s going on in someone’s head is right out of science fiction. Most of us wouldn’t know how that could even be done, though previous efforts have been able to decode our thoughts into beeps and vague signals, to an extent. A lot of those signals were unintelligible to a layman, but not anymore.
In a 2017 experiment by Japanese scientists, an AI was able to successfully “see” the image that forms in our heads when we think of something and draw it with surprising—and scary—accuracy. The images weren’t black and white, unintelligible blobs, either, but were largely close to the images in the subjects’ brains during the ten-month-long experiment.
In another experiment, an AI was able to convert human thoughts into sound signals. When the sounds were played to listeners, they recognized them with 75 percent accuracy. Sure, AI being able to literally read our minds may be a scary proposition to think about, but it has quite a few good applications, too, like identifying and treating hallucinations in schizophrenic patients.
You can check out Himanshu’s stuff at Cracked and Screen Rant, get in touch with him for writing gigs, or just say hello to him on Twitter.