Oh snap!
You do realize that in, for example, New Vegas - a game that is not a "BGS Fallout" - you had gangs styled after Elvis, old women with rolling pins coming after you, not to mention five disembodied brains lobotomizing you, among other things? And that was in a series where the US government invested tons of money into fallout shelters to conduct huge social experiments so that they could successfully recolonize the wastelands, since they had cancelled their trip to space?
Fallout as a whole isn't really serious and was always a bit, as you say, "silly" - and how can you accuse BGS of incoherent writing when you literally have nothing substantial on the plot of Fallout 4 yet?
This thread was - as far as I understood it - about how big of a deal android slavery is, or whether it is any deal at all.
So maybe try to contribute to the conversation, instead of howling at the moon about how bad Bethesda is at Fallout.
You do raise an interesting point. I think that sometimes AI supporters fail to grasp how truly alien an AI would be, perhaps because they're often depicted as human or human-like. If the AIs were housed in giant, evil-looking spiders then fewer people would make the mistake of anthropomorphising them.
On second thought, androids are programs. Robots. They're not flesh and blood; they don't deserve rights. They're man-made machines meant to perform tasks. Program them however you want, for whatever you want - they're not necessarily alive.
That may be true, but if you create a machine, make it self-aware (accidentally or by design), give it the ability to think (accidentally or by design), give it emotions (accidentally or by design), give it the ability to learn (accidentally or by design), and then tell it that it is property and has no rights, don't be surprised if it decides you would look better with an 11 mm diameter hole in your head. And don't be surprised if a human who does enjoy the right of self-determination agrees with it and even assists in the endeavor of making you look beautiful.
We interact with AIs all the time in our day-to-day lives now. It's a question of sentience, not of appearance.
What is it about being flesh and blood that automatically demands we be given rights above that of a sentient construct of equal intelligence and cognitive ability?
Just because we're squishy?
I agree on TLW - what was the point of letting Raven Rock blow up? That was a perfectly good base... same for the mobile base crawler. Hell, I'd even say that the Enclave basically has a good cause; the problem is how they go about it!
I'd like to know more about the androids first, though. I mean, if they are even close to what the Cylons in BSG are, then I will consider this - after all, those were a mix of organic and electronic, and almost indistinguishable from a natural human. If they are more like the Terminator (fake skin over an exoskeleton), then no, they are still machines. I don't think that something that thinks in 1s and 0s can have feelings... if it no longer does that? Then yes!
greetings LAX
ps: Indeed, slavers first... after all, they even treat their slaves badly - which The Institute doesn't (with the exception of denying them freedom... they don't beat them, starve them, etc. - at least as far as we know!)
The difference is that southern Africans are humans and androids are not. Androids might be alive, but they can never be human. They aren't even organic.
I consider the creation of androids to be slaves a very silly concept. For an A.I. to even exist, I would think they would have to be made as spies; otherwise, the range of tasks they can usefully perform is limited by their human-like appearance. There is absolutely no reason to create a self-aware A.I. that looks like a toaster. That is wasted effort as far as making machines that do tasks for you is concerned. Artificial skin, faces, etc. are very out of place on something you made to perform tasks for you. Giving it what one might call self-awareness would be useless. There is certainly an application in making sure the robot has some way to identify and go around obstacles, but that doesn't make it conscious.
The thing I'm more interested in, as far as a real A.I. is concerned, is the possibility that it's just a clever fake - a facsimile that is supposed to fool people, with the Railroad being an organization that fell for that trap. Defining consciousness might not be in Bethesda's wheelhouse, though.
Ok, let's stop looking at it from a Human Master vs. Android Slave perspective. What if androids were slaves to other androids? Should sentient rights apply then?
"Androids are people too" fits perfectly within academia and if Bethesda modeled "The Institute" after a big school then I would expect that line of thinking to be amplified and exaggerated to make a story.
Also consider that at least some of the robots in the Fallout universe appear to have human brains inside. That itself is wildly disturbing and I would be very surprised if there weren't factions pre-war that opposed such technology.
I think "Blade Runner" (1982) provides the ethical basis for the entire "Replicated Man" quest, even as it expands into Boston.
And to be clear, I fully intend to track those little suckers down and slay them one by one, Blade Runner style.
Them and their stupid four-year life span.
No, because they're not real life-forms. They're just what R.E.M. sang about: an imitation of life. The fact that they seem to be real doesn't change that.
Well that's a tricky part isn't it? What is it to be alive?
If you go by one of the simplest definitions, it's the ability to reproduce and propagate your genetic information into another generation.
But by that definition, any infertile person would be borderline "not alive", not to mention any super mutants, ghouls, etc., who cannot naturally reproduce.
If you go by the more complex definition based on fulfilling certain characteristics - such as growth, reaction, reproduction, functional activity, and continual change preceding death - then you end up noticing that self-aware androids would fulfill most, if not all, of these criteria (reproduction being a case apart, because does that really have to involve an exchange of genetic information? Asexual reproduction seems to indicate otherwise).
The main point being "change preceding death", because the definition of that is a whole other can of worms (the definition of clinical death being a point of heated debate to this day).
So can an android die? I would think yes: even if you could recreate an android as an exact copy of a previous version, artificial memories and what-not included, since it would be fully sentient, it could under certain circumstances start to follow an entirely different path than the model it was based on. So in my book, that makes it an individual, whose "death" would entail the irreplaceable loss of said individual. That is, of course, if you do not believe in a construct like the soul (which I believe doesn't per se exist, but what would I know about that?), or assume that a built construct was capable of developing one.
What I find regrettable is that people who don't see the use of fully self-aware androids as slavery basically always stop their argument at: no, it's not enslavement, because they're machines, full stop.
It would be nice to see some more thought put into questions like: even if they are machines, how is that different from humans being organic tissue controlled by complex chemical reactions and a neural gateway?
Because they were created with a purpose? Then what is the purpose of humankind? Most would say simply surviving long enough to procreate and then dying to make room for the next generation.
Then what if you don't want to have children? Does that mean that you go against your purpose? Should you then have your behavioral "deficiency" corrected so that you will fulfill that purpose?
So why do you believe that the simple fact that they are machines, and built, makes them irrevocably unworthy of acquiring any semblance of rights and/or desire for rights? Why do you believe they are only a facsimile of life, and even if it is only imitation, why would that matter? Why can't a "fake" being have its own morals, its own culture, even its own religion?
Because I'm really getting tired of "LOL, toaster liberation front" vs "Friggin' Nazi Slaver Confederation" arguments.
To have a body powered by a real soul.
A machine that responds and thinks is something you can interact with just like with a human, but it is still not a real life-form; it is still not powered by a real soul.
If my computer suddenly talked back to me, it would be in the form of an A.I., and I would have no trouble smashing it to pieces. I would have trouble doing the same to my cat, though, because it is a real life-form and not anything artificial.
Ok, maybe we should try this from the other side.
100% pure logic, just like a machine would.
Humans, just like androids, have no inherent rights at all.
The "rights" you are speaking of are the result of the human race recognizing that working together, protecting each other and "weakening" the right of the strongest in favour of better teamwork is more favourable for survival than an entirely egocentric approach.
We simply used "words" to put them down and turn them into "rules" that regulate how we live together, because written law is more reliable than unspoken mutual understanding.
They are not the result of humans being inherently "good" or "benevolent" or "moral" beings; they are simply an approach to the challenges of life that enhances our chances of survival. Just like wolves, we flock together to increase our chances, both as individuals and as a species.
Androids are not humans. Thus this does not include them.
Having a soul has nothing to do with religion, although that word is heavily suggestive of course. I used it for lack of a better word. Lifeforce, maybe. What we all are, really.
Debatable, really. We have some laws that make little sense from a purely survivalist perspective. Plus, the reason nothing else is included in this strictly survivalist framework is because none of them match us - none of them can give us any kind of benefit. Androids could. Mind you, I don't agree with the purely survivalist perspective in the first place. And yes, our morality is subjective, but that doesn't really lessen our adherence to it; you can continue down that road until it becomes absurdism, where there's no reason to do anything ever, and it would be valid. A boring conclusion, but not an invalid one.
That's so vague as to be meaningless. Also doesn't seem to be a very valid way to justify discrimination. "We have lifeforce!" "What's that? How do you measure it? How does it make you more worthy of existence?" "Err... LIFEFORCE!" "..."