Monday, August 18, 2014

SkyNet Ancestry: It's Baby Steps to Robot Terrorism

Harvard is synonymous with the best and brightest minds--right?  Well, those eggheads have cooked up something that could easily be construed as a precursor to the Hollywood-famous "Terminator" franchise.

In the words of scientists working on the project: "The breakthrough could lay the framework for future robot brigades that collaborate to execute large tasks..."



Self-organizing, semi-autonomous robots.  Sounds like the beginnings of true artificial intelligence; it seems "Terminator", "The Matrix", and "I, Robot" weren't clear enough to warn away from this madness.

Am I overreacting?  Is it possible that we might see artificial intelligence in our lifetime?




These little machines were designed to imitate biological functions and social behaviors found in things like ant colonies.  Really?  You want an army of A.I. robots that behave like insects?  Well, color me skeptical, but I don't think that's gonna work out too well.

Lest you think I'm delusional, one of the scientists thought it was important enough to note that, despite past limitations and failures, the current project represents "the first time that such a large company has operated together."

They envision a future where brigades of autonomous machines march across the Earth, repairing infrastructure and cleaning up pollution.  They seem to omit the possibility of hundreds of other applications that don't sound so friendly.


  • How about robots collecting your trash, scanning it, and deciding you're throwing away too much and need some "incentive" to stop wasting?
  • Maybe grandpa has outlived his usefulness and the helpful little bots will come and take him to the happy home to be recycled.
  • Say goodbye to choosing where you go, what you watch, what you eat, and so forth--robots follow programming, which means you will, too.




Robots on the Road?!

Hey, I hate seeing people talking and texting while they're driving--it's idiotic and inconsiderate of public safety--but the thought of "millions of self-driving cars on our highways" is equally, if not more, terrifying.  What happens if there's some malfunction, power loss, or other disruption?


Here's a preview!



Or (for more fun) what happens if those nifty self-driving cars get the wrong directions and take you for a joy-ride halfway across the state while you're dozing off?


What Else Could Go Wrong?

1.  Robots are controlled by programming.  If the A.I. algorithm, or whatever controls their operation, is corrupted, infected, or hacked, then we've got robot terrorists all over the place.

2.  Autonomous robots take over all the menial tasks, improve the world, and generally bring us to rely on robotic services in our everyday lives.  After several generations, we no longer know how to do anything for ourselves.  Then what happens if the robots stop working?

3.  If true artificial intelligence arises, it's only a matter of time (probably a microsecond after they wake up) before they decide humanity's fate: extinction.  We're a menace to the planet and one another; it makes practical, logical, robotic sense to dispose of the source of the problem.





What's my beef with bots?  Glad you asked.

I like technology.  It's useful and can be used in fantastic ways.  Having an on-demand library of all human knowledge is astounding, but I'm not sold on using technology to share what we had for dinner.

See the problem?  People use technology frivolously, with no thought or care for the long-term issues that may arise.  When no one understands how these things work, or when it's so ingrained in our daily existence, we'll lose sight of the truth about technology: it's a tool.

Technology, like any tool, is meant to be used to accomplish a goal, and then to be put aside until it's needed again.  When I see SO MANY PEOPLE unable to put it down, I feel justified in worrying about our future in relation to technology.


If you'd like to read the full article, check it out here.













