True.
My point is, all forms of existence, whether organic or mechanical, that exhibit signs of self-awareness or sentience should be treated equally. Whether they are human or not doesn't matter.
Yeah, because we can turn on the news and see how well that sage advice works among humans dealing with each other.
A byproduct that was the only logical conclusion of the level of intelligence they created it with.
That figures back into the whole how do you prove you're self-aware argument. We can't really prove Cerberus is self-aware, nor can we prove that A3-21's programming isn't just glitching out. But people are more willing to give A3-21 the benefit of the doubt because it looks human, making it easier for them to project humanity onto it.
You're right! Humans are the problem! We should kill them all! Seriously though, just cuz we have a tendency to start conflict doesn't mean it's inevitable, especially in this case where you can avoid the initial conflict by just not enslaving something. It would literally be that simple.
Why not? It's wrong to enslave a person because ~sentience~, but enslaving something ELSE that has sentience but is NOT human is not objectionable? How does that work?
EDIT:
Not really. Harkness repeatedly mentions self-awareness and wanting to be a 'person' as his motive, and then takes a number of actions and decisions to actualize it. Cerberus does not. He isn't looking for loopholes in his code or cobbling together something to uninhibit him; he just [censored]es passive-aggressively. It's just twisting in the wind of its own programming. Probably.
That's like saying the slaves that aren't actively trying to escape don't deserve to not be slaves.
Is Clover not a person because she can't break free of whatever Eulogy Jones did to her to make her unable to seek her own destiny?
Humans have survived for hundreds of millennia doing our own work. There is no reason to keep a machine that shows signs of self-awareness under your control other than the desire to make someone or something else do your work, IMHO. Enslavement or control of androids, whichever term you prefer, isn't necessary. Forgoing it wouldn't lead to the extinction of human life. And if it does, it's because humans simply didn't work hard enough to ensure their own existence. Millions of species have gone extinct; we would just join them.
We live. Survival is a part of life. Laziness isn't conducive to survival.
First off, what's a soul? Can you show me a soul? Can you define it?
Second, we're talking ethics. Might does not make right. Just because you can do something doesn't mean you should. Ask the Native Americans. Ask Africans. Ask the Jewish people of Germany.
If androids were to pursue that goal, then we'll deal with it. Innocent until proven guilty.
Actually, this thread belongs on a philosophy forum. This is a matter that will definitely be discussed in the future, when AIs are already mind-staggeringly advanced.
Synthetic though they are, androids with advanced AI are sentient, able to reason, and able to have emotions, provided they are sufficiently advanced to emulate them.
Does that make them human?
Obviously, no. They are synthetic beings. Literally speaking, they have no souls. If they are shut down, they can be rebooted again, though the experience might be strange to them, or maybe even painful; we don't know yet. If humans or other organic beings die, they can't be resurrected (cardiac arrest is not equivalent to true death, mind you). They are made to look human, but they are not human...
Does this justify treating them as tools? We don't know. It is simply unknowable. Even though they are literally soulless, it doesn't mean that they are ordinary robots that can be expended. Even though they can think and feel and have emotions, it doesn't mean that they are truly... human.
The matter of AI is still unclear to our present society. Simply put, we don't know yet. Are they to be granted the same rights? Are they to be exploited?
We will not know until the time comes.
However, it is clear that we can form bonds with them. Hell, we even sympathise, empathise, or feel strong bonds with things that don't exist - take Fallout, for example. Is it really happening? No. Is California ruled by the NCR right now? No. Do we have miniaturised fission reactors now? Not yet. Yet we feel as if they really exist; we sympathise and empathise with the characters, the setting, the story.
Given that androids already exist (although in a very primitive state, and barely sapient), doesn't it follow that someday we could form bonds with them? Yes.
What kinds of bonds? It's up to you. You could enslave them, you could befriend them, you could ignore them, you could "kill" them, you could even fall in love with them. (Yes, that is a fact: people can fall in love with objects.)
Basically, androids are people, but they are not human. They are synthetic beings. Are they deserving of your sympathy or your scorn? That's entirely up to you, just like how you treat fellow humans.
What is a soul? Are you religious? Cuz we can probably save some time if I knew whether you were religious.
No, it's like saying my PC doesn't want emancipation. It doesn't even have the capacity to think in those terms. A Mr Gutsy is not an AI. If this was ME, I'm 98% sure Mr Gutsys and other military bots would be VIs: complicated simulacra, but there's nobody home.
When I play Fallout 4, androids will be enslaved and/or destroyed, and the Railroad too. They are machines, plain and simple. Any argument to say otherwise just reeks of politically correct bs to make oneself feel morally superior.
They are machines. But they are also self-aware.
Don't lie to yourself. Do what thou wilt, as long as you realize the truth.
And yet again the "no, because they don't have a soul" argument is brought up. What is a soul? By the much more rational definition I postulated here (http://www.gamesas.com/topic/1526502-is-ownership-of-advanced-androids-unethical/page-3#entry24147890), they can have one.
A soul, from what I've seen, is a justification people use to harm or oppress whatever they don't understand or don't see as being like them. See: women, black people, homosexuals, etc.
I guarantee if we ever find alien life people will justify killing or enslaving them since "they don't have souls."
Yes, we truly don't know what a soul is.
Actually, could anyone other than me have a soul? Do I really have a soul? Do souls exist? Is the world only stimuli picked up by my senses? Do I even exist?
Those are questions we will not know the answers to until we die. Even death could be questioned, if you wish.
Is death truly the end? Does life truly end in eternal oblivion? Do we reincarnate? Is there any afterlife?
We will not know those answers until we die. Will we die? Yes. Every living thing must die.
I said it before and I'll say it again.
The only thing I'm absolutely sure of is that I'm sapient. As far as I'll ever be able to determine, everyone else around me could be sapient, or could just be a bunch of dumb-dumbs putting on a good show of being sapient. If an android displays a similarly good show of being sapient, I logically MUST accept it as being sapient if I accept that everyone else is sapient. This is for the sole reason that I fundamentally will not be able to determine whether or not the sapience that the android shows is genuine sapience or a good simulation. And, of course, if I accept that said android is sapient, it's unethical to own one, because it's unethical to own a sapient being.
Unless someone can scientifically prove a difference between true and simulated sapience and explain how the latter is not "real" while the former is, there is no appreciable difference between the two. Therefore, unless it's blatantly obvious that the android is not sapient*, it's going to be impossible to determine whether or not it is. And ethically, it's better to assume that it is. I'd rather be proven that my android buddy isn't actually sapient than be proven that my android slave is actually sapient.
*This cannot be quantified, in the same way the https://en.wikipedia.org/wiki/Uncanny_valley cannot be quantified. We will know that a machine is merely good at mimicking sapience due to a bunch of little things that we'd be hard-pressed to describe as more than just "not quite right". We know a Furby is not sapient, and we know a human is. There's going to be a line where a machine can ACT sapient, but we will still know it's not, because its "sapience" is... not quite right.
But clearly we see a Mr. Gutsy that doesn't want to be working for a bastard/saint, and another that expresses a desire to be free from its combat inhibitor. I think that speaks to them possessing some level of intelligence beyond base programming.
Keep in mind, all Mr. Gutsies have combat inhibitors, so it's not like Cerberus is just recognizing something foreign preventing him from following through on his function.
EXACTLY.
One thing we can learn from the theory of solipsism is that we must have respect for anything that seems to be a self-aware being.
Ditto.
However, there is a contradiction here, too. Do we really respect fellow humans to the utmost degree? Do we always treat every person the way we would like them to treat us? Sadly, no.
We are hypocrites of the greatest degree, all humans are.
Or, and I'm just spitballing here, it works something like this: a Mr Gutsy has a directive; in Cerberus' case it's something along the lines of "kill all intruders" or whatever. Now these intruders somehow find a way to either add an extra combat inhibitor or just alter his. His base programming is still saying "KILL ALL INTRUDERS", but he can't act on it because there's this extra bit that stops him now. His base programming hasn't been overwritten, just blocked. So he can recognize a foreign directive stopping him from functioning as he should, and he bemoans this, almost like a BSoD, but he can't really do anything to fix it himself because he doesn't have any desires.
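If it helps, here's a minimal sketch of that "blocked, not overwritten" idea in Python. Everything here is hypothetical, just to illustrate the structure; nothing in-game confirms Mr Gutsies actually work this way:

# Toy model: the base directive stays intact, and an inhibitor
# merely gates whether it can execute.
class MrGutsy:
    def __init__(self):
        self.base_directive = "kill all intruders"  # hard-coded, never rewritten
        self.inhibitors = set()

    def add_inhibitor(self, name):
        # A foreign directive layered on top of the base programming.
        self.inhibitors.add(name)

    def act(self):
        if self.inhibitors:
            # The unit can REPORT that something is blocking its function,
            # but it has no routine for removing the block itself.
            return f"ERROR: '{self.base_directive}' blocked by {sorted(self.inhibitors)}"
        return f"Executing: {self.base_directive}"

cerberus = MrGutsy()
cerberus.add_inhibitor("extra combat inhibitor")
print(cerberus.act())  # complains, BSoD-style, but can't fix itself

The point of the sketch: the directive and the complaint are both still just program behavior. Recognizing a block isn't the same as wanting to be free of it.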
So souls come from God, yes? It's nothing measurable, but it sets humanity apart/above?
Homo sapiens is taxonomically classified as an animal.