Emotionally, I feel like I wouldn't care either way, but I guess it would be best to give them rights; otherwise they could start demonstrating against us, and I'd rather it not turn out like one of those movies where robots take over the world, heh. If there were many of them, it would probably become a pretty big deal that people would debate over, and there are more important issues to focus on in the world. There also seem to be more cons than pros to them not having the same human rights -- or at least *some* type of rights. In order for them to be the best they can be and fulfill whatever purpose they have, it would help if they were treated with respect (otherwise I could imagine them becoming resistant, for example), so I guess I'd be for them having the same rights as us humans.
Hello there, @quove. I'm one of those people who gives the no. 2 response, which is to say that they would have no autonomous will apart from their human creators, making them not much different from any other thing we manufacture. While it is true that we are in large part programmed ourselves, the 'who'/'what' that did our programming is certainly not another human! These robots would essentially be some advanced TV or phone or fridge that someone somewhere on this planet makes. Each of their choices could be boiled down to a decision of their programmer. So in what sense would they really be like us? None of us can say that we control what any other human decides in any direct sense. But that is exactly what we would have with the robots: our own toys. That we'll have made them to look, sound, and act like us, rather than to drive/fly/cook for us, etc., will not really alter the fact that we would still have made them.
You might say that we ourselves lack an autonomous will... perhaps. We don't really know that, and there is no science that definitively establishes that our will is absolutely pre-established by DNA. But even granting that, whoever/whatever programmed us is certainly beyond us, so that even if we say we are not ultimately, absolutely free, we can still say that we are free vis-a-vis each other. Essentially, none of us is another's master, and we have made this the foundation of our entire individual-rights infrastructure: that we can claim to be equals. To add our creations to this structure would mean creating a new basis for rights. That may sound easy, but I don't believe it is, or that it would be desirable. Humans have shown a proclivity for a whole lot of arbitrariness when it comes to establishing bases for equality and rights. Maybe it is possible to build a good structure with something other than our naturally parallel, horizontal relationships as its foundation, but I have my doubts!
Now if you were to discuss the rights of aliens, on the other hand, I would have a significantly different response!
Lol, I guess for me it is just a lack of belief in the idea that humans can actually create consciousness. I'm just wondering how we would determine the existence of consciousness, as opposed to a highly sophisticated program that someone made. I do agree that if it is indeed possible for us to create beings that actually have the same awareness and capacity/potential for self-determination that we have, we would have to recognise them as possessing fundamentally the same rights. This is why I would support the rights of aliens who display the same capacities we do. But if not, it is no oversimplification to call a highly sophisticated program an advanced human technology. The point being that, as sophisticated as it may be, it is never truly autonomous.
Is consciousness something that has simply evolved out of some programming? I personally do not think that is anything more than an assumption. But even if consciousness is prior, I suppose there would be nothing technically impossible about such a thing merging with a robot in the same manner as it does with human bodies, though.
I don't think tinkering with the basis for rights is something that can be answered with "let's just try and see." For example, a severely intellectually disabled child may not show much capacity for abstraction, or for any number of the things we may use to determine high intelligence or, especially, the awareness of ourselves as self-determining agents. This child is nevertheless protected on the simple basis that he is human. His potential and capacity for personhood, indeed the fact that he is a person, is taken for granted. If we were to change this basis, and decide that human nature, and our presumption of personhood as an innate quality of every instance of it, is not the basis for rights, I don't see how one is then able to argue against eugenics or some forms of slavery for the "simpler" among us, be they so from birth, by accident, or by age. I suppose one could have more than just one basis for rights, say human nature and proof of consciousness. But then again, if we are truly equal, why must some of us have to prove it while for others it is presumed a priori? If, in the scenario you are thinking of, a race of reproducing robots independently arose out of our initial programming, that would not be a problem. We would just have our nature and theirs as the bases. In that case, I would see them the same way I see a race of aliens we discover somewhere else, or who come to earth.
Assuming these robots are like humans in virtually every sense, other than the composition of their bodies? No.
Let me be clear. They ought to be given some basic rights that biological humans have (such as the right to life or a set of property rights), since to give them no rights at all would open our societies up to political revolt. Even if the machines didn't self-organize, it'd be too much to hope that some ambitious human wouldn't seek to agitate them for his own purposes, or even simply because he'd believe in the cause of robot rights. Slave societies rightly feared the revolt of their laborers, and a permanent caste of intelligent, self-aware machine laborers isolated for the mere fact of their birth and bought and sold by human masters would effectively be the same thing, even if one argues that the two situations aren't morally equivalent. We could attempt to solve this issue by isolating these robots from human society entirely, preventing them from accessing human philosophical literature detailing the concept of rights, or from seeing the freedoms granted to humans but denied them, but it seems to be too much effort to police the machines and keep them separate from us through these and other means (population control being one), with too little attached gain (after all, we can simply create dumb machines to use as slave labor, which we're doing now). I think it'd be simpler to give them a basic set of guarantees approximating natural rights and call it a day. Whether they actually have natural rights is a very different question from whether they ought to be treated as if they do from a practical standpoint. I'm also not sure that the fact that their circuitry is artificial means that they are subject to a different set of rights than we are; what would this mean for cyborgs?
However, some privileges should be assigned with caution. For instance, androids should not simply be given voting rights by virtue of possessing adult human bodies, since they will not have the intellectual capacity to cast an informed vote from birth (unless we preinstall particular bits of information in them, but this leads us into questions of propaganda and the inordinate power that would give robot manufacturers over the political process). Even though they would have the bodies of biological adults, they obviously wouldn't have minds capable of exercising that privilege in ways that wouldn't be unduly harmful to the wider society. The same would go for gun rights, or anything else which we typically associate with an adult level of responsibility.
Legally speaking, androids ought to be treated as children until they've acquired the wisdom necessary to navigate society as conscious adults. How this would be carried out is difficult to say. Should androids be placed in socializing programs with regular humans until they learn how to deal with us? It seems dangerous to leave them with children, who would be vastly physically weaker than androids built to emulate the physical capacities of adult humans, and who often lack the judgment to refrain from provocative behavior that could cause an android to retaliate. What happens if, for instance, a group of children taunts an android, who then lashes out and strangles one of them or crushes their skull? Will the instructor necessarily have the strength to stop a raging android? How about two, three, or more? Would we need to fill elementary schools with armed men capable of doing so?

Of course, placing them with adults leads to a different set of problems. These sessions would probably be separate from the ordinary adult world (imagine an ignorant android in your average workplace!). Would they adequately prepare the machine for the challenges of adult life? Does a sandbox with particular adults selected for the task of socialization have what it takes to prepare machines for the real world? Keep in mind that the more we separate these machines from humans of their mental age, the more difficult it will be for them to adapt when let out on their own. Worse, what happens if the humans tasked with socializing the robots decide to manipulate them for their own purposes? For instance, an institution could take money from particular organizations to indoctrinate these machines with ideas suited to their ends, ranging from the relatively benign (purchasing a product) to the more sinister (voting for a particular candidate or party in an election) to the outright destructive (killing all members of a particular ethnicity). If their only exposure to humans is within these organizations, that opens up dangerous possibilities for brainwashing and control (though they can be mitigated somewhat).

Should we then place them with human families tasked with raising them? It's not clear that the machine would be able to develop a sense of familial loyalty, and if they're simply isolated in autonomous family units without mixing with other children, this weakens our ability to socialize them. However, having them mix with human children brings us back to the problem with placing them in elementary schools. But this is a digression; the logistical problem of integration is not the same as the philosophical problem of rights or the political problem of privilege. It is important to consider, though, since it will affect which privileges we can meaningfully grant them, and the consequences likely to emerge when the framework finally becomes operative.
The point is that, aside from basic rights whose denial would likely lead to revolt, privileges should be distributed on the principle of responsibility. An android with the physical and emotional capacity of an adult human and the intellectual sophistication of a newborn ought not to be treated as an adult human by the wider society. Treating it as one would be dangerous both to biological humans and to the androids themselves.
What about artificial aging? Robots, unlike humans, can have modular and easily disassembled bodies, which opens up the possibility of creating soft, plushy "baby bot" bodies where the Consciousness Chip (tm) is placed until it can be trusted with a strong and pointy adult body.
I think any physical issues can be worked around; human engineers are pretty brilliant. I would probably say that creating sapient AI is unethical in the first place, not because the creation of life is inherently evil or anything, but because humans will never be capable of the processing required to make a new code-based lifeform while doing the resulting creature or person justice. Imagine if every thought, process, emotion, nerve, and bone of your body had to be consciously created by a human. Our human perceptions and consciousnesses are so limited that we would invariably make mistake after mistake, with a single typo rendering entire portions of the android vegetative, or possibly excruciatingly painful.
Even if we managed to build a coding device that would autonomously check these things without creating painful mistake-life in the first place, are humans capable of creating and managing an entirely new race that will have its own needs, desires, and sense of ethics? I really don't think so. And even if we could, how would it benefit us to manufacture a creature that will invariably turn around and expect resources and rights from us? Why would we create a machine that could feel emotional pain, if we would then have to invest more of our resources and time in making it happy?
I think a robot that would benefit us as a society and warrant creating would inherently have a different consciousness from ours. If you could program its self-actualization to be centered around completing a task, why wouldn't you? Why even program aspirations and dreams and the potential for disappointment in the first place? And if we did, why program it to be unable to withstand the pain, and to take an unproductive course of action to prevent it, as humans do?
Theoretically, a fully "human" android, once achieved, would deserve rights and a system dedicated to its well-being. If for whatever reason we end up in that position, sure. We would need to figure out a new protocol for handling them, possibly very similar to Mystery's suggestion. But I get tripped up on the ethics of the processes that got us to that point, and the ethics of taking resources away from humans that already exist to provide for artificial humans that didn't need to be programmed with those needs. Not to mention the dubious rationality behind creating a creature entirely human except for its powerful, dangerous body and its inability to feel torturous negative emotions like regret.
To sum up, I think that the only reason to create androids that are completely human is just because we can -- and if it's unethical to breed a dog or have a child for that reason, I don't imagine this would be any different.