Artificial intelligence is a complex ecosystem, one in which complexity is built into the very systems that are supposed to be autonomous. It's built into the fact that machines can teach themselves rather than simply follow instructions the way today's robots do. We've seen this with machine translations that don't always observe polite niceties, but which are accurate enough to be a useful substitute for human translation.
There's this unfortunate idea that AI is inherently flawed, or that humans should prepare for machines to outsmart them: when things go wrong, the machines correct themselves and then get smarter.
Tough to argue with that. And yet, on the other side of that polarizing view, there are those of us who, despite our hopes and wishful thinking, believe that AI is as flawed as humans, as complex as our own blunders, and as messy as the human condition. It would be naive to conclude that machine learning and AI will ever be free of their proverbial pitfalls, or that those pitfalls will be exact copies of human failings. In the era of AI, copying ourselves isn't the goal, but it's not trivial to disentangle the black-and-white goodness from the gray areas that don't quite make sense to the naked eye.
Artificial intelligence is often painted as a bogeyman by those for whom the abstract and esoteric hold the potential to drive them mad. It's seen as the end of everything, a slippery slope to complete robot control.
But it's not clear that we're really headed toward true robot supremacy, or, for that matter, toward the end of humanity.
Fake News courtesy of Grover