What about a robot that thinks and feels like a human?
This... And as I said in the other thread: humans are, for the most part, nothing more than evolutionarily programmed biological machines, up to and including feelings. It is all the good things at the top of the intelligence scale that make us more than mere animals/machines.
If we can extend "rights" to people who are basically dumber than snot, to be nice and err on the side of caution morally, then certainly we can extend the same to human-level intelligent machines.
PS: Though I do believe that it will be exceedingly hard, if not impossible, to create an electronic human-level intelligence at, e.g., skull size; it might be possible at supercomputer size. For skull size... I think it is much more likely that we will create artificial biological humans/robots first. Which makes this debate possibly even more important... i.e., manufactured "humans" ("tanks", for those who watched Space: Above and Beyond) vs. born humans.
I haven't read everyone's posts, so sorry if this has been said before.
The problem with androids is that they will not have the base needs that a human has; with that would come a completely different set of needs, which would affect their thinking on some level. Add to that the singularity that would come from them being free to do as they please, and they might end up being the greatest undoing of what is left of humanity. They would be superior to humans in quickness of thought, reflexes, endurance, strength, learning, and senses. There would need to be limitations.
Edit: There's also the fact that it would take hours/days to make one android, versus nine months for a human baby, which would be in no shape to help the human race in any real way beyond numbers until it's at least maybe the age of 9-10(?). The androids would come off the line fully able.
Do not twist what I said to be relevant to mankind as a race. They would not be "human"; there is no way for them to be "human". They would be "android". There is no need for them to take our shape; we only gave them such so they are easy on the eye and make us feel more relaxed around them. They could have ten arms if they wanted, so why limit them to two? That is human intervention there. Every one that comes off the line can be whatever it wants, and even upgraded later if need be. Humans cannot get past a set limit due to biology; without messing around with our own DNA, we are stuck as we are for the most part (randomness of genetics aside). Androids would not be as limited as we are. They could live in more areas of the world than we can, they have no need to breathe/eat/sleep, and they would not be bothered by the cold like we would, nor heat for that matter.
Based on your own logic, dolphins, which are very smart, and apes as well, should we consider them to be human-like too? We don't; they are animals.
And would you own an android any more than you would own any human follower? That is, outside of Clover, whom you actually bought.
Add gameplay limitations: Lydia would not suddenly leave you and decide to study magic instead, as that would be annoying.
What's interesting is that androids are not alive, because they don't meet the requirements for life (can't reproduce, etc.). Still they are conscious, still they deserve the same rights as humans. And that's where the problems begin.
Synths do not age. It's safe to assume that they could theoretically (unless they have Cortana-like built-in lifespan limitations or something) live forever, given proper maintenance.
Synths are immune to sickness.
Synths theoretically surpass humans at least at some fields of work.
Synths can be programmed.
The differences between humanity and synthkind are immense; the similarities are even greater. However, the differences will inevitably lead to clashes. It's impossible to smoothly integrate another sapient being into civilization when the differences are this glaring.
To be fair, I am getting a bit off topic. To answer the question, I don't think it would be "unethical". We own dogs/cats/livestock. Androids to me are just thinking machines, like a supercomputer. So to me, no.
Androids do not have the same mental capacity as a human; it would be like comparing a drop of water and the sea. They would have instant recall; we don't. They would never forget, and they would recognize a person's face far better than we can. To say we have the same capacity is wrong. They would learn instantly from reading a book; we would need to study properly. If we read a book on some sort of martial art, we would have to spend years to learn it; they could learn everything just by flicking through it. We would spend years getting the form right; for them it would be instant. Their computer brain would be better, but it would think logically. The only thing a human would have is using unconventional thinking to outsmart them in some manner.
But by law we do own pets. If a dog attacks someone, we don't go "oh well, it's the dog's fault"; the human owner gets fined because the dog is their property.
Says you. Harkness was pretty human in his motivations to my eyes.
Also, as an aside to the "THEY'RE BETTER THAN US, THEY'LL TAKE OVER!" argument: why would they? The underlying human need to spread and take over things is our need to procreate. We need resources and space to procreate. An immortal species has no need to procreate; they just need to facilitate their own existence. I imagine conflict/death would seem especially unappealing to them.
In the real world, no. But we are talking about the fictional Fallout universe, where what is possible is determined by the writers. I will probably treat the androids in the game as if they were sentient and had rights because that's how Harkness was presented in Fallout 3 and how it looks like androids will be presented in Fallout 4. I'm accepting the world as it's presented.
It would be cool, though, if Bethesda pulled a Tenpenny Tower on us: present the androids as sympathetic and oppressed so the player helps them, then have them turn around and start slaughtering all the humans ...
Then there's no discussion. Unfortunately, you can't prove your point. Neither can anyone prove the opposite.
It's safe to assume that synths think and feel like humans though. The following points exemplify this:
1. True, self-conscious AI is pre-war technology. To think that an advanced post-war faction (compare to Curling FEV and FEV-serum, the most sophisticated post-war inventions by another tech-heavy faction) couldn't further develop the ZAX prerequisites into a human-sized synthetic brain is delusional.
2. The artificial human is modeled after the born human, as is their brain. Btw, synths (at least the more recent models) are organic.
3. Harkness has motivation. If this is just an illusion, then how's your own ego not either?
4. I am an android. And believe me, I am a person.
I think it comes down to what you believe a "human" is. If you think that a human being is a sum of his/her biological parts that can think and feel, then a machine that is a sum of its mechanical parts and can think and feel is not really any different.
Except the Tenpenny Tower situation had red flags all over it that Roy was a psychopath. Having the androids come out of the blue with "Kill All Humans" would be ham-fisted.
What are we, but molecular machines? How many times must this be trotted out? Why do those against think there's a clear sharp line delineating things?