San Francisco is a wonderful place – I ended up having an interesting discussion last night with a self-styled “geek-at-large” about robots, ethics and Unix while hanging out at a not-for-profit that is improving the world one Wi-Fi network at a time.
The robot maker’s argument was that robots don’t have to be super-intelligent to carry out some pretty useful robotic tasks. He suggested there’s a blind alley in robotics research premised on being able to map and track everything in your environment in order to move and interact effectively. Instead, he argued, you just need to be able to identify a goal to move towards and to avoid collisions along the way. That is a much simpler problem than understanding your whole environment.
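His claim can be made concrete with a toy sketch (the grid world, the obstacle set and the one-step memory below are all my own invention, not anything he described): the robot never builds a map, it just heads greedily toward the goal while refusing to step into anything or immediately backtrack.

```python
# Toy version of "seek the goal, avoid collisions" with no world model:
# the robot only looks at its four neighbouring cells each step.

def step(pos, prev, goal, obstacles):
    """Move to the neighbour closest to the goal, skipping collisions
    and the cell we just came from (so we don't oscillate)."""
    x, y = pos
    moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    safe = [m for m in moves if m not in obstacles and m != prev]
    if not safe:
        return pos  # boxed in: stay put
    return min(safe, key=lambda m: abs(m[0] - goal[0]) + abs(m[1] - goal[1]))

pos, prev, goal = (0, 0), None, (0, 2)
obstacles = {(0, 1)}  # one object directly in the way
path = [pos]
while pos != goal and len(path) < 20:
    pos, prev = step(pos, prev, goal, obstacles), pos
    path.append(pos)

print(path)  # the robot routes around the obstacle without ever mapping it
```

No map, no tracking, just local sensing and a destination; of course this gets stuck in harder mazes, which is exactly the trade-off he was willing to accept.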
This leads to a different attitude to creating robots: have multiple robots, each good at doing one thing, and then find a way to co-ordinate them. This is very much the Unix way as opposed to the Windows way.
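A minimal sketch of that architecture, assuming nothing about any real robot: each “robot” is a small specialist, like a Unix tool that does one thing well, and a thin coordinator routes work to whichever specialist handles it (the robot classes and task list here are invented for illustration).

```python
# Many simple specialists plus a coordinator, instead of one generalist.

class Vacuum:
    handles = "dust"
    def run(self, task):
        return f"vacuumed {task}"

class LaundryFolder:
    handles = "laundry"
    def run(self, task):
        return f"folded {task}"

def coordinate(robots, tasks):
    """Dispatch each (kind, task) pair to the one robot built for it."""
    results = []
    for kind, task in tasks:
        robot = next(r for r in robots if r.handles == kind)
        results.append(robot.run(task))
    return results

done = coordinate([Vacuum(), LaundryFolder()],
                  [("laundry", "two shirts"), ("dust", "the hallway")])
print(done)
```

The coordinator is dumb on purpose: all the intelligence lives in small, replaceable parts, which is the pipe-and-filter spirit of the Unix analogy.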
From a feasibility and an affordability point of view, this simple approach is much more achievable than the all-singing, all-dancing über-computer often depicted in futuristic films. But his favored (and more philosophical) argument for it was that it would verge on the unethical to build a quasi-sentient being and then just ask it to fold your laundry every day. This is the Marvin-esque argument that the more human-like you make a robot, the more you have to start affording it human-like rights.
Despite being interesting, this line of argument instinctively felt wrong to most of us. Someone suggested the comparison with dogs, but it soon became clear that we feel animals have some level of rights too, whereas you don’t expect your computer or phone to complain when you keep asking it to check your email every few minutes. While there is clearly a quantitative difference between the rights of a human and those of a dog, there seems to be a qualitative difference between the rights of an animal and the rights we’d afford to a robot (however intelligent).
So I interjected and asked the simple question:
Does my hand have rights?
I ask my hand and my feet to do things all day (just like my computer) and I don’t expect them to complain or tell me they’re busy. This seemed to clarify the distinction for me that I’d been struggling to find.
The difference in my mind is that humans and other animals have their own goals, which they are responsible for selecting (within some nature- and nurture-constrained boundaries). Robots (and hands and feet), on the other hand (API), are mere tools and don’t have separate goals or aspirations. This lack of goals and aspirations is the key differentiating factor that makes me feel comfortable not giving a robot freedom of speech / assembly / religion etc. I retain the absolute right to reformat my computer’s hard drive or swap its RAM without feeling any pangs of guilt.