Do you think IBM’s Watson, the super-genius computer that kicked Ken Jennings’ butt on Jeopardy!, should be allowed to sue if someone tries to copy the code behind its artificial intelligence? What about your family Roomba: should it be illegal to kick it around? And what about Data from Star Trek: The Next Generation? Do you think that he, though technically an android, should have the right to file legal claims?
MIT researcher Kate Darling recently presented a study on this very subject. She explored whether or not social robots should be granted some of the same legal protections as humans. Now, you may be thinking, “Huh?” But before your mind starts racing in one direction or another, let’s examine this high-tech-law issue from both sides.
The Fundamental Question: What’s A Robot’s Purpose?
In her work, Darling argues that the intended societal purpose of social robots is the most important factor to consider in the “should social robots be protected by law” debate. Today, most social robots are created and programmed to elicit an emotional response from humans. So, with that in mind, ask yourself: Is it necessary or proper to extend rights to a social robot?
The “No Legal Rights for Social Robots” Argument
Since when do inanimate objects need protection under the law?
No matter how cute, intelligent, or logical they are, robots are inanimate objects. Bottom line: Inanimate objects don’t have feelings or the ability to empathize or reason without being programmed to. Granted, a person can technically abuse a social robot by going all Nookie Thompson on it, and children who witness that may get the wrong idea about how to treat ostensibly semi-sentient entities, and other people besides. Regardless, parents should be responsible enough to teach their children that it’s not okay to hit, kick, or dismember their social robot.
The “Yes Legal Rights for Social Robots” Argument
On the other hand, a case can be made for giving social robots specific legal rights. Just as there are laws on the books that protect animals, many believe there should also be some sort of law to punish anyone who abuses or engages in cruelty toward social robots. After all, social robots are meant to bring about positive emotions in the person who owns the “android.” A child watching an adult kick, hit, or smash a social robot may not understand the difference between hurting a robot and hurting another person, just as a child may not understand the difference between hurting a dog or cat and hurting another person.
To illustrate, think about the behavioral and interactive differences of an autistic child. One highly successful day-to-day coping tactic, endorsed by researchers and parents alike, is the canine companion. Others on the autism spectrum may find other methods more helpful; each case is different. But if a social robot turns out to be a therapeutic companion for your child, wouldn’t it make sense to ensure that robot was protected under law?
Social robots probably won’t receive legal rights for at least another few decades. But if they do, expect the first legal battle to be over whether those protections should come from state or federal statute.