I have some friends who are studying computer science in college, and a lot of them have become really interested in artificial intelligence. I’m interested, too, and though I really don’t have the same background (I’m an English major, myself), we’ve been having some really interesting conversations about where artificial intelligence is going in the future. Among other things, we’ve been discussing the dangers of A.I. My friends tend to think it’s pretty safe stuff, but I’m a little wary of how much power we’re willing to give machines. I know that “computers take over” is some really sci-fi stuff, but is it nonsense? Am I crazy for thinking that there’s a dangerous side to A.I.?
Artificial intelligence is changing the way we do business and the way we lead our lives. Machine learning and predictive technologies have given us computers that “think” in ways that are almost human-like, and the implications for the future of A.I. are staggering. Experts at one company leading the way in automation, robotics, and deep-learning A.I. for supply chain logistics say there’s no reason to expect things to slow down anytime soon: A.I. technologies build on themselves, and advances arrive more quickly every year.
Artificial intelligence is everywhere now. The auto experts at WheelArea have had to update their maintenance-tips page for car owners with each advance in the auto industry, and over the years they say they’ve witnessed an incredible increase in vehicles’ reliance on computers, including, most recently, the arrival of A.I. assistants in onboard computers and GPS systems. Next up: self-driving cars, which several tech companies are already developing. Imagine the rigor involved in becoming a certified technician for self-driving systems; according to one program advisor, it’s hard enough these days to become a certified automotive specialist, and that will likely become even more difficult as on-board systems grow more complex. The technological dependence of most modern vehicles demands so much more training and specialization, even for simple routine jobs, that more technicians are going back to school than ever before.
A.I. is in our phones, too, with Apple’s Siri living on iPhones and competitors offering virtual assistants of their own. Amazon’s Alexa runs on multiple platforms, from tablets to Fire TV streaming devices. There’s nothing unusual about using an A.I. assistant these days; it’s rapidly becoming the norm. A.I. is taking over customer service tasks, too: by one widely cited industry estimate, 85% of customer service interactions will not involve a human employee by 2020.
But just because your fear is pretty much the premise of Terminator doesn’t mean it isn’t valid. No less a mind than Stephen Hawking openly questioned the safety of artificial intelligence, saying we don’t yet know whether we will be “infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it,” and cautioning developers to be careful. How possible is any of this? Well, that’s the thing: Hawking and others are saying, quite simply, that we can’t know. You’d be hard-pressed to find a better expert to ask, and he argued quite clearly that the nature of the danger is that it is unknown. Entrepreneur Elon Musk has made similar arguments and has even invested $10 million in projects designed to keep A.I. from dominating us. And it’s not just about the apocalypse, either: what about the economy? Will A.I. create jobs, as 80% of executives believe? Or will it cost us 7% of our jobs, as some experts claim? The experts’ answer, quite simply, is that they don’t know, so we should be careful.
The A.I. we have today is safe, but future development absolutely comes with enormous responsibilities on the part of computer experts like your friends. Used and developed properly, though, A.I. may have the power to make the world a much better place.
“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” — Eliezer Yudkowsky