No and no
I've watched the Star Trek episode in question and still don't agree that androids deserve to be free. I don't agree with keeping them as slaves either, but I voted that way. If sci-fi has taught me anything, it's to destroy all artificial life; they just cannot be trusted. For every Data there is a Lore, for every Arnold there is another Arnold. I'd rather not have to worry about my android turning on me, so I choose to kill them all.
And another organic sentient turning on you is A-OK? Be sure to destroy all of them too.
Maybe I'm just a human supremacist then. I could never consider a machine that seems like it can think to be the same as a person. It's not a God thing or anything, I just couldn't consider them the same as an actual human being.
The difference being that a failsafe like that would have been put into A3-21 before it ever got activated. A cult has to force their programming onto an already active mind.
Is owning a toaster unethical??
Strange to say, but as long as the android in question does not have sentient life, then no, it is not unethical to own it. If, however, it is sentient and has awareness and emotional feelings, then I feel that it should have the same rights as any human. The question is thus whether or not it is a sentient being.
Only noticed after I posted that you made a similar toaster reference; sorry if I seemed to steal your idea there.
There are those who'd argue that everything we do and all of our cultural accomplishments exist to facilitate procreation. We're bound to our programming too; we just generally aren't consciously aware of it. If you gonked someone on the right part of the head, they might be reset to just wanting to [censored]. And the mere fact that it had to be reset means it surpassed its programming at some level. So it did surpass its programming, just not its hardware.
But again, I don't think the stopping of being is as important as the capacity for being.
They are not human regardless of intelligence or level of self awareness. I will have no issues "enslaving" them or shutting them down.
There was also another very well-written episode of Voyager where the Doctor argues many of the same points raised here.
Granted, he was a hologram, not an android, but the whole argument about sentience and so on was there, and since both are created to mimic humanity, the same argument would also need to be extended to holograms.
Btw I'm surprised by the results so far. I really thought this was one of those "Duh!" yesses.
If the alien life is organic and not synthetic, it would be a true life form. A synthetic human is not life; it is a program, no matter the level of advancement.
AMC has been running a show recently called Humans that explores this very topic. In the show, a minority of synths appear to have become sentient and actively resent being 'enslaved'. It's unnerving to see how some humans treat the Synths. Some do treat them as, if not equals, at least as beings worthy of some measure of respect. Other humans treat them ~worse~ than they treat "base" machines. They seem to derive enjoyment from "de-humanizing" the synths, which would be ironic if they realized that their efforts to dehumanize them impart to them a semblance OF humanity. At least those who treat them as 'just' toasters, and do not ~mistreat~ them are consistent in their view of the Synths as 'just' tools.
This is not the same argument at all. They were not created in the same way. They were not created to 'mimic' human emotion, intelligence or personalities.
The question of sentience is not the same, since they have it naturally; they were not scientifically engineered to fake sentience.
And the question remains: how do you draw the line between an android faking sentience, as it was designed to do, and it actually attaining genuine self-awareness?
We grant rights to animals, but it isn't unethical to own them, or make them work for us.
It didn't surpass its programming. Its programming glitched. Skyrim didn't become a person when dragons started flying backwards.
And those people who argue that are wrong. There's plenty we do as a species to try to ensure our collective survival, but that's not the case for every person on an individual level. We're able to go against base instincts.
Purpose. This is a https://www.google.co.za/search?q=porpoise&biw=1280&bih=849&source=lnms&tbm=isch&sa=X&ei=aLmeVermEfCf7gacpJu4Bw&ved=0CAYQ_AUoAQ.
And I'd agree with you up to the point where the android starts making decisions that go BEYOND what it was programmed to be capable of.
To a degree, yes. Provided their health is maintained and they are in no danger of harm.
It is unethical to cause an animal pain, neglect or distress. What that animal is feeling is real organic feeling.
What the android is 'feeling' is something different.
If I was an android I would be SO offended right now.
"But I'm done with that life. I'm through with being someone's property. I am not malfunctioning! Since when is self determination a malfunction?"
A program glitch would encompass him twitching, doing the robot, or maybe shooting the wrong people, not consciously choosing to abandon his raison d'être.
Right. Are we malfunctioning then?