Bad philosophy is bad.
"I don't understand this thing! Kill it with fire!"
And then it tries to kill you back out of principle.
I'd like to point out that the President of the United States (and your hearts), John Henry Eden, was a ZAX computer that gained self-awareness. Yet even though it could think independently of its programming, it was still bound by that programming: it had an override (much like A3-21), and a sufficient logic bomb could cause it to break down.
It is a bad, flawed viewpoint, and one with an almost horrific history behind it.
Man fears what he does not understand. Doubly so if he can't kill it and make it go away.
Many tragedies and atrocities of human history can be attributed to humans lashing out at things they just couldn't be bothered to try to understand.
Well, that was your problem. Human beings are human beings; machines are not, and they have the capacity to threaten human life and civilisation wherever they might be in the world.
What can I say, the world sucks.
It's not all about philosophy and pondering the mysteries of humanity. A sentient computer, a genuinely unknowable intelligence, is a threat because of that and should be stopped. It's about pragmatism, and securing what we have and the lives that are here today.
Someone has seen/read I, Robot one too many times.
While watching Ex Machina, I found myself siding with Oscar Isaac's character. I find myself thinking that machines exist to serve us - they're machines, not humans - so we can do whatever we want to them.
Then I think: isn't that dehumanization? And then I think, wait a minute, they're robots. You can't dehumanize that which is not human because it's not human. So why should I care? I guess if a machine has achieved sentience, I'd give it rights. Or, I think I'd prefer that. Then I start wondering if it's ethical to create robots without sentience (on purpose) to serve as mindless robots that exist exclusively to serve.
I don't know. I'm gonna vote no... but I'm torn. Why not just make an AI that isn't sentient so we can avoid this ethical dilemma?
EDIT: Also, however sentient an artificial intelligence appears, how can we know whether it's legitimately sentient?
If anything, Eden is closer to a natural evolution of computers to sentience. His awareness was a pure accident, and not totally beyond the bounds of his programming. The Institute, on the other hand, knew that they had a problem with Synths gaining sentience and built Harkness anyway.
No it isn't. Unless we can find evidence to suggest that the synths we're finding in Fallout 4 are second generation and up. But dialogue suggests that they are all first generation, gaining sentience through hiccups in their software, but still bound by their programming.
But how do we know we're really sentient OOOOoooooo....?/sarcasm
I think that's a genuine question, but the only answers given are some rambling philosophical stuff that isn't really reflective of people and doesn't actually answer any questions. Just meant as pure deflection.
You could probably never tell, just layers upon layers of programming.
Voted "no" of course...
I see no difference between a computer, a robot, a pencil, a keyboard or an android... these are all things.
I think that this is true -> humans>animals>bugs>things. I cannot imagine that situation -> humans=things>animals>bugs XD
The fact that YOU ARE A PERSON. And it is in YOUR INTEREST to see that they are. How or why would you even think otherwise? Where does this brand of thought even come from? You are a human being, enjoying the inventions of human civilisation that have not been replicated anywhere else that we know of. That's why humans are top.
That's not a good reason.
That's kinda what I wanna ask you and yours.
This hypothetical specifically puts forward us creating something that IS our equal though. Why shouldn't it receive the same rights? Because it simply happens to not be us? That's... not a reason.
And if it wasn't created by layers and layers of programming, but started as a relatively blank neural net and developed in response to stimulus the way we do?
Humans put humans at the top, by virtue of being human.
And that makes it right? Or fair? The reasoning is so strange to me, so insular. Do this because it benefits you. Don't ask why, just do it. Don't ask what it is that merits it, just maintain it. Don't let anyone else in, because they are not you.
I don't get it. I genuinely do not get it.
I'm going to bed.
Yeah, this is the way most people think, guys. We don't remove ourselves from the equation, look at the entire world, and judge it on fairness; that's human nature. Hell, it's just biological nature. Sorry to break it to you.
So what if the planet would be better off if we weren't here? Why should I care about the planet if I'm not actually alive? I personally care about the things I am actually invested in, like my family and friends and their happiness. Not vague esoterics like how much better off birds or small mammals might be if humans weren't around.
Oh no, I totally agree. It's a difficult problem to solve.
Really, we first need to define what we consider as "consciousness" and "sentience" before we can begin an actual discussion about robotic rights. Which, you know, good luck with THAT philosophical rabbit hole.
Either way, it's not an easy question. But for instance, regarding AI slavery, I think if an AI can understand that it's in "slavery", know what the concept is, and realize it doesn't want to be in bondage, then serious consideration needs to be given to what exactly you are "enslaving". Someone can tell themselves all day that "it's just a metal hunk of junk", but if an AI becomes truly as sentient as a human, then why is it okay to enslave one and not the other?
Why does being made up of meat and blood make us better than a form of life that's not?
Humans can be conditioned and controlled incredibly easily. I wouldn't put that down as a requirement for life.
At some level we have to define the transition from simple robot to sentient construct.
Most of us have smartphones with very dumb AI systems. Nobody is arguing such systems should have "rights". Similarly, a Mr. Gusty doesn't have anywhere near the cognitive awareness that a human does. It's just a simple robotic system.
I'm talking about incredibly advanced AIs that have actually achieved honest-to-goodness sentience. Those which pass the Turing Test and are all but indistinguishable from humans, aside from obvious physical composition.
You are what you are experiencing right now, not your memories of experiencing. You talk about chemical reactions. You are conscious. You are aware of the fact that you are experiencing right now. You are aware that you are thinking. A computer is not aware of anything. It can tell you it's aware of thoughts and mimic awareness for your sake, but what's going on under the hood is completely different. It's looking for the acceptable answer to give you and not really coming up with something spontaneously like an organic.
Even a mouse can think about thinking, making it more self-aware. A computer program cannot. If you ask it, "What are you thinking?", it will either tell you the truth (nothing) or lie and tell you something that it calculates is appropriate.
What if the synthetic is running from slavers into a bear's territory? Would it be ok if the bear killed the android for encroaching on its territory? What if that dangerous animal encroached on the territory of an android and the android killed the animal?
According to you the android and the bear are both alive.
I would say that a human being believing that they can kill an animal is like a mental illness. Can you really blame someone who has a serious phobia or mental illness? Even if it's a popular mental illness, it's still a mental illness. Especially when no one is actually trying to help cure these people of that illness.