Authorities are required to safeguard their people, not play moral arbiter for all sentient life.
I guess the real question is do they dream of electric sheep?
I generally will not just intentionally harm any animal for 'fun'. For food sure. For sport (ever see a horse race) sometimes. But just to be mean, to see an animal suffering, no. In fact I'd hate to think of how much money I've spent on stray cats in the last year.
So if we are past the point of debating whether or not the android is alive, and have determined that it is, they would fall into this category. They are not human. I strongly believe that humans have a divinely given place of superiority in the world. So the android would fall into the animal category. I wouldn't eat it (can you?), but it would be used to benefit me as I saw fit. But I wouldn't try to hurt it (except in self defense).
Now humans creating something that could be considered 'alive' and not a machine with programmed responses, that's another topic. If the android is nothing more than a machine then it is no more a slave than my car is that was forced to take me to work this morning.
Semi-related: a new TV show called Humans.
http://www.amc.com/shows/humans
http://www.amc.com/shows/humans/full-episodes/season-01/episode-01/episode-101
No. No matter what it may have been created to think, a machine is a machine. If you think otherwise, your PC was telling me it feels oppressed and wants to come live with me, so mail it on over.
Let's be real here: humans are just a mesh of wiring, transport vessels, and chemical programming.
That previous sentiment implied that I think humans and androids can be the same on some level, and they can, but... I'm going to have to side with the Enclave here: the potential risk to humanity is too great. The simple fix is just not to dabble in it at all... but again, let's be real, people won't do that, both in the game and in real life. It's an actual threat; the thing is, for us it's still maybe decades off. We might see it, but I doubt it; however, I would not put it past our children seeing it.
The real question is not whether it is right or wrong to enslave them. The real question is: is it right or wrong *not* to enslave them when you know there is a high probability that they could enslave humanity?
What if you take fear out of the equation and androids were not built to be super strong or resilient, they can only physically do as much as your average human? What if they were built in a way that there is no way for them to interface with computers or machines aside from physically typing on a keyboard like a human has to? So you're presented with a being that looks human, that thinks, that feels emotions and has their own goals and motivations and yet is not any more of a physical threat to you than your neighbor across the street.
Let's say they were made for slavery but also designed not to be sentient. Either some miscalculation was made or they evolved on their own, but they ended up being sentient.
If they can do what humans can do, then they can create more androids. Those android created androids can have whatever features and improvements deemed appropriate. Each generation of new androids could improve over the last at a geometric rate until not only humans but early generation androids are nothing more than a potentially dangerous nuisance.
If they're sentient and have emotion, then what would be their motivation for doing so?
Well they could build androids themselves to be slaves for the androids, or sell them to humans for use as slaves. There's some irony.
Sentience is a variable matter.
We judge the relative sentience of a person vs. a dog, vs. a mole rat, vs. a microbe.
Is an android more sentient than Codsworth because it is more of a human simulacrum ?
I hope some of these fascinating issues will be in the game for players to act upon.
1) If they are sentient and consider themselves alive, then reproducing and perpetuating their 'species' would probably seem to be quite logical or even necessary to them.
2) If they are in a struggle for freedom from humans they see as oppressive then improving themselves to be better equipped to prevail in that struggle is also logical.
So they would only try to cast down humans because they are oppressed and enslaved by humans? That would be on us for oppressing and enslaving them then. How about we not do that? What human culture wouldn't do the same in the same circumstances and why would that culture be less dangerous?
Programming sentience into a machine designed to perform what is essentially slavery is cruel imo. So is keeping an entity, mechanical or organic, that is sentient under your absolute command.
Either don't give them sentience at all, or if they develop it on their own, treat them as equals and do not force them to do your work.
Let's say they weren't destroyed when it was discovered they were sentient; let's say a tech who worked on them found out first and had the compassion to smuggle them to freedom. If you met one in real life, and it's terrified and just wants to be left alone, would you kill it? Would you send it back into slavery? What about real-life situations where masters forced themselves on and impregnated their slaves in order to get more slaves? The baby's intended purpose was to be a slave, and it was born a slave. Should the baby "fulfill their designed purpose even when being sentient?"
Humans are irrational, reactionary and don't agree on squat. There would always be humans in significant numbers who hated synths.
As to the synths being more dangerous, their ability to outpace humans at self-improvement would give them an 'evolutionary advantage'.
An android cannot be human. Period. It can have human-like features and abilities that make it seem human, but ultimately it will always be a robot, regardless of having a mind of its own. All robots are created with a purpose, however, and if this robot, or android, or whatever you wish to call it, was created to simulate a human, it should be treated as a human, no? I.e. not enslaved, etc. If, however, it was created to be a servant, then it should be a servant and programmed accordingly.
Androids can never be humans, though, that remains the core of it.
What if a human was grown in an artificial womb with the specific intent of being a mere unit of labor, and programmed in the head with only the knowledge to carry out that task? What's the difference? What makes a consciousness engine that's the result of biological evolution somehow worthy of personhood but a consciousness engine artificially created by said biological consciousness engines not worthy of personhood?
Would you put down this purpose-grown human unit of labor if it starts getting uppity?
Sentient alien life obviously aren't humans either. Should we not treat them as equals?
They can be separate but equal.
"Seperate but equal" has been proven in practice to be always separate and never equal.
No, that'd be the human motivation right there. An android might live forever; it might only be concerned with its own existence and not propagation for the sake of propagation.
As someone else already mentioned, don't start a struggle.
What you're describing is a human, regardless of how you treat it when born.