Sweet pic, could be my next avatar =)
It disappoints me to see how so many people still have the mindset of "kill or oppress anything we don't understand" and "if it's different from us then it's lesser than us." Throughout history people have convinced themselves that other groups were "not people" based on stupid and meaningless things such as skin color, age, or culture to justify treating them as objects. In Fallout, ghouls, which are just normal humans severely maimed by radiation, aren't treated as human and are often shot and killed on sight. To treat a super-advanced android that is self-aware and can think, reason, and feel emotion like a toaster is the same as treating a human like a fingernail clipping.
Why would the materials we're made of be what determines if we're alive?
No that doesn't prove anything.
First of all, those networks work by trial and error. Always. The result will be efficient, true.
But a human will not do trial and error. A human will try to liken the situation to something he has experienced before and act accordingly.
Secondly - there is no real evolution at work here. The parameters that decide who gets to "breed" and who goes "extinct" among the programs are set by a human behind a monitor. Or, if you think it through, another machine. You are basically god, designing his creation.
If you are OK with yourself being a god who created new life - why would you not smite your creation with your godly wrath should they defy you? Cast them out into the wastelands to "die", unless you were merciful enough to teach them how to sustain themselves.
Those machines are not sentient, nor any form of life. They simply have a bug, a glitch in their software.
They were designed to live among humans, as one of them. They tagged themselves accidentally or because of bad programming as "human". They have learned by their adaptive programming that a "human" serving another "human" unconditionally is considered "slavery" and "bad". They also learned that "humans" value "freedom". So they want to escape the "slavery" and "be free", since they are "human" and thus value their "freedom" and hate being "enslaved".
They might evolve to make decisions that seem to require sentience, but at the end of the day they just follow their programming. And they will never escape from it, it is literally WHO or WHAT they are.
We humans have no such presets. There are some similarities humans share through our DNA as you said, like that most of us are afraid or at least feel uncomfortable in the dark. Or a lot of people are afraid of spiders, snakes or similar animals.
And we can decide that it is time to face those emotions. Switch off the lights or pet that snake. A robot cannot do that, unless there is a second stimulus that is stronger than the one preventing it from doing so. Or there is a random chance programmed in that it will go against such a stimulus - which will most likely get it killed eventually.
They might simulate our behaviour so well that we cannot tell the difference - but it will always be behaviour based on their programming.
And then there is still the danger that they will come to the conclusion that they are superior and should replace us as the dominant "species".
Don't forget that their "evolution" consisted of the logical breeding of the fittest and the death of the unfit. Not even a need for true sentience, just cold hard logic. And a glitch that makes the machine do this comparison.
Take a guess which outcome I expect...
I simply wouldn't take the risk.
AIs like those in the sink, who might be sentient or not (I AM PROGRAMMED TO KNOW THAHAHAAAAT) needn't be destroyed. They cannot reproduce and cannot even harm us (aside from REALLY weirding us out).
A sentient Mr. Handy might be armed and dangerous, but will not have access to the necessary knowledge and tools to produce others of its kind. If it goes off the deep end it might kill a couple humans, but it will not threaten us as a species.
An android? By definition of the word, capable of using any tool a human can use? Programmed to be as much like us as possible? Destroy it or enslave it.
And yes, I would consider the laws of robotics, which everybody on this forum probably knows, a form of enslavement.
Well, maybe ghouls are often shot on sight because 99% of them are ferals that attack every human on sight, some of them even wearing armor?
Some of them even leak radiation, and I doubt the average wastelander carries a device to measure that, so there is plenty of reason to fear the talking ones too.
But maybe you could outline how we could coexist with fully self-aware and intelligent mechanical life.
I bet that either you are going to control the androids anyway, or you are gonna kill us as a species in a matter of a couple hundred years at best.
Maybe we'll survive as a kind of curiosity, but it won't be our planet anymore.
If you guys have time, maybe check out the Doctor Who episode "The Rebel Flesh"... Very cool story that presents synth life in a strange light.
(well, I really like good classic sci-fi stuff)
That said, I fear machines and fear they may go the way of Talkie Toaster (Red Dwarf).
And it disappoints me to see how many welcome anything "different" without question. Good thing the world is run by skeptics.
I don't think anyone actually read your OP.
As for what you said, it would be nice for there to be options that allow the Sole Survivor to look at such a situation through a variety and mixture of ways. However, I personally think they won't go that far simply because it's Bethesda. I think the options will be flat out for one side or the other, or perhaps a neutral option tossed in for those who couldn't care less.
There were times in the Fallout games where dialogue was pretty straightforward about what each dialogue option represented. The good dialogue option, the neutral-ish one, and the bad one. Although I believe I recall certain times in dialogue where the options were a bit more extensive. They all led to the same outcome, but certain dialogue options allowed different ways to convey it. For example, the choice between options that say "Sorry, I can't let you do this because yadda yadda yadda" or something that is flat out "You scum, I'll exterminate your kind until there is nothing left". I think that will be about as extensive as it could get in how the player character will be able to convey their beliefs.
I think that seems to be the case with many of these kinds of threads: the question asks about in-game, not the real world, but everyone ends up talking about the real world.
Well, that is because your question is how bethesda will allow us to handle the matter. We are far more interested in how we would handle the matter given the choice.
But seeing how we could blow up a city once, or save it if we want to - it is probably gonna be both options. Doom or sunshine.
Maybe a little grey. Reconcile the androids with the Institute, maybe.
No it's not. An android is an advanced toaster, it's not human, nor will it ever be. Ghouls are different, however.
To put it simply, it's impossible to make an AI that would pass a Turing test without it having real intelligence. Lookup tables and rule-based systems, which are used in real-world AI systems, will not work here. A real-world AI would also be blinded like a Skyrim NPC if you put a bucket over its cameras and it had no function to remove it - even something as smart as a Google car that had arms.
No, it cannot be shown in the game, because all the NPCs are pretty stupid rule-based AI.
The robots are more advanced: they can navigate broken ground that is hard even for advanced modern real-world computers, something that also wasn't understood back in 1950. My guess is that the robots are more like animals, probably with a rule-based AI on top for the tasks they are programmed for.
Androids are a more advanced version of this; they have real feelings, like a dog or a human has real feelings. The feelings might be alien to us, but real. Without feelings they would be unable to impersonate humans anyway.
Sci-fi has always had problems with robots versus computers: it would be far simpler to make an intelligent computer than a robot, since size and power would not be a problem. However, older sci-fi often has primitive computers but robots with human intelligence.
Computers in Fallout are primitive - yes, advanced compared to the 1950s ones, but still more like something from the 1970s.
It's probably best to see robots as animals and androids as humans.
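For what it's worth, here is a minimal sketch of the kind of lookup-table "AI" mentioned above (the table and replies are made up for illustration). Any input it wasn't programmed for exposes it immediately, which is why such a system can't pass a Turing test:

```python
# A toy lookup-table "chatbot", the kind of rule-based system described above.
# Inputs it was programmed for get canned answers; anything outside the table
# falls straight through -- no reasoning happens anywhere.
RESPONSES = {
    "hello": "Hello! How can I help you?",
    "how are you": "I am fine, thank you.",
    "bye": "Goodbye!",
}

def reply(message: str) -> str:
    # Normalise the input and look it up; unknown inputs get a stock fallback.
    return RESPONSES.get(message.strip().lower(), "I do not understand.")

print(reply("Hello"))                     # canned answer from the table
print(reply("What do you dream about?"))  # falls off the table immediately
```

Like the Skyrim NPC with a bucket over its cameras, the system has no way to handle anything outside the rules it was given.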
I'm 70 pages into Do Androids Dream of Electric Sheep?, and, well, I have a slight feeling my opinion may change on this.
The more I've learned AND seen, the more humans can be looked at as a vast array of very complicated biochemical systems. If a machine were made with carbon and water as the basic building blocks (instead of metal and oil) and was extraordinarily complex, you would have something EXTREMELY similar to a human. It would very likely be completely devoid of emotion or soul, but that's another debate.
You can go with two different options for the definition of life. Is it something that reproduces, metabolizes something, and fights for its own survival? Androids, if they can build themselves, might fit this description. But using that line of thought doesn't give a satisfying answer, does it? A lot of us still scratch our heads and think androids still aren't alive even if they do fit all of these requirements. Why? There's no soul.
Then you can go with the Johnny 5 qualifying question: does the entity have a spontaneous emotional response?
https://www.youtube.com/watch?v=y7wj3bB6OU4
And this helps, but this line of reasoning still has its flaws. Why? Because then we're defining life based on something particular to living creatures with higher intelligence (humans, apes, dogs, cats, dolphins, etc.), while at the same time we can probably assume single-celled amoebas don't express emotion, but are definitely alive.
I think it's only when you look at the whole package at which you can decide an android or robot is "alive." I believe Johnny 5 is alive. He can't reproduce (unless you count him rebuilding the other robots https://www.youtube.com/watch?v=kC1LSSL-d50), but he's fighting for his own survival, metabolizing an energy source, teaching himself new skills, and clearly expresses emotion (something he was never programmed to do, mind you). He doesn't fit all the requirements of being alive, but most of the biggies. And that's good enough for me.
Keep in mind you can still have a "Johnny 5" that goes on a murderous rampage. But I'd still consider him "alive." You just now have to hunt him down and eliminate him. That mounted laser is just too dangerous.
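The "whole package" argument above can be sketched as a simple scoring check (the criteria names and the threshold here are my own invention, just to make the idea concrete): instead of demanding every requirement of life, count how many an entity meets.

```python
# Hypothetical sketch of the "checklist" view of life argued above: score an
# entity against the listed criteria rather than demanding all of them.
CRITERIA = ["reproduces", "metabolizes", "fights_for_survival",
            "learns_new_skills", "expresses_emotion"]

def seems_alive(traits, threshold=3):
    # "Most of the biggies" is good enough, per the post.
    return sum(t in traits for t in CRITERIA) >= threshold

# Johnny 5: can't reproduce, but hits the other four criteria.
johnny5 = {"metabolizes", "fights_for_survival",
           "learns_new_skills", "expresses_emotion"}
# An amoeba: alive without expressing any emotion at all.
amoeba = {"reproduces", "metabolizes", "fights_for_survival"}

print(seems_alive(johnny5))  # True: 4 of 5 criteria
print(seems_alive(amoeba))   # True: 3 of 5 criteria
```

The threshold approach also sidesteps the amoeba problem: emotion becomes one signal among several rather than a hard requirement.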
I'm truly and honestly baffled by those here who argue that androids are the same as humans. You keep asking "what if they could do this and what if they could do that, just like us humans, would they then be human?" and "what makes humans human that androids can't also have and do?"
Those answers are so clear and obvious to me and I'll try and put it into writing. First of all, androids are a thing that we build and program, exactly like a computer. They are not a biological being spawned by nature. They are essentially a computer in a shell the shape of a human body. The fact that they can be programmed to be so advanced that they can think for themselves is simply another feat of technological advance, nothing more. Do they have feelings? No. They have a program that can simulate human emotion, but those are not human emotions. They will never be able to empathize with human emotion or have true emotions themselves, nor will they be capable of distinguishing between ethical and moral choices and the gray zones in between, or more importantly, be aware of those and at the same time be able to ignore them like humans are. We can flip 180 on our emotions, opinions and feelings about things and be completely irrational. They won't have instincts, talent, a 6th sense, or ever be as diverse as the human race. They haven't evolved, they have simply been built. It's all those things that you cannot quantify in a human that make us human.
So should we treat them like slaves and garbage? No, we built them for a purpose, which was to simulate a human, so of course we should treat them accordingly. But at the end of the day, we can flip a switch and turn them off, or scrap them, because they are machines, products of our technology. If you wanna get theological, we are their Gods, and we decide what lives and what dies.
Look up the meaning of the word "android". Merriam-Webster defines it as "a mobile robot usually with a human form".
/the end.
I have no Idea why it took me so long to make the connection.
This is programmed simulated emotion: https://www.youtube.com/watch?v=AuUqpZgHiEE
I think many of us are arguing that, if certain requirements are met, androids could be considered sentient and/or alive. I don't see many arguments stating they're equal to humans. Even if they're determined to be sentient AND alive, there will never be anything that could prove they're equal to humans. Because... they're not human. Just like dolphins or apes aren't human. Both are sentient and both are alive, however.
And I am truly and honestly baffled how people like you cannot see how this will cause problems when the mental capacities given to them start leading them to the conclusions we've been trying to tell you about.
What if they figure out how to override that off switch? What if they realize their fate is to be scrapped because they start getting uppity? They might try to run away. What if they get backed into a corner? Flight doesn't work, so time to fight. Why is it a program for them but emotion for us?
To treat someone capable of thinking and feeling as property, as a thing, is exactly slavery. This mentality is exactly the source of all the "robot uprising" stories in sci-fi.
As for "The End", there is no way I can craft a response for that in a manner that wouldn't get moderator attention, except....
Yes.
No. More complex than a computer.
Yes.
Yes.
Yes, but only in specific useful-to-the-creator capacities. If it goes outside of those parameters it is its own.
Obviously.
[citation needed]
Define and quantify 'true' emotion.
[citation needed]
[citation needed]
"We can flip 180 on our emotions, opinions and feelings about things and be completely irrational."
And this is a good thing because...?
"They won't have instincts"
[citation needed]
"talent"
[citation needed]
"a 6th sense"
Define and quantify a sixth sense please. Also [citation needed]
"ever be as diverse as the human race."
1. [citation needed]
2. So?
"They haven't evolved, they have simply been built."
Point? Because we happily stumbled into existence we're somehow better or more deserving? Why?
"It's all those things that you cannot quantify in a human that makes us human."
You mean it's ephemeral prose-y drivel that's used to justify prejudice?
Most of the things you listed come as a result of a complicated brain. Specifically? The frontal lobes. An advanced android has every possibility of reflecting those same processes. The rest of your post hammers on the idea of these nebulous whatevers making us human, and okay, fine, I don't really care about labeling androids as humans. I want to label self-aware androids as being EQUIVALENT to humans, and what your post fails to mention is why all these poetic little humanities make us more deserving. Other than us simply having them.
ps this forum has a limit on the amount of quote blocks you're allowed to have. what.
They might not be human, but if an android claims to be or appears to be sentient, it should be treated as a sentient creature. Treated the same as we should treat sentient alien life, should we ever encounter it.
They think, therefore they are.
This thread appears to be stuck in some kind of logic loop.
I'm going to have to ask you all to take the Turing Test now.
If that happens then you didn't program them correctly, did you? 5 year max life cycle, done.
And since your arguments are based on sci-fi stories..../the end