When it comes to ensuring The Director's reforms remain in place, there's also the Deus Ex Machina that is Cabot.
Regardless of where he takes The Institute, my character plans to live a very, very long time along with his ageless Synth-Wife.
Deathclaw zoo admission would be how many caps?
About University Point, I doubt we have all the data to figure it out. IMHO, it was either an energy source or a weapon that the Institute badly wanted, because they felt it could be dangerous or useful to their interests.
They probably just refused harshly, and the Institute overreacted as any other powerhouse would.
Refuse to hand over hi-tech weaponry to a BoS patrol that asks for it, and see how that ends.
Shaun wasn't raised by Kellogg in any possible way.
Shaun was taken from him and handed over to the Institute shortly after; there's no way they would let their clean DNA source be corrupted in the wastelands for so long.
What we saw in the memory sequence is just Synth-Shaun, who was sent to DC with Kellogg until he was taken away by X6-88 and Kellogg himself discovered what kind of trick had been played on him.
By that point, Shaun was already the Director: the one who sent him there and who "recalled" Synth-Shaun to await you at the Institute, all part of his own plot to put the puzzle's pieces into position.
As for Kellogg: he was reckless and merciless, but evil? Well, he did nothing truly evil. He's a killer, true, but that's pretty much his job.
Not much different from our character when he/she kills whoever he's told to kill for caps, MacCready or Danse.
The Diamond City guards shooting at people in the market square aren't any nicer, to be honest, if we take it that way, and they don't even take a stand while there's a gunfight or attempted murder within their own town.
On the rest I might agree. The good thing is that somehow all the factions tend to be gray-ish, and all of them do something "wrong" to some extent that they have to recover from.
Every faction has internal troubles, and each deals with dissent by its own means. On that note, I come back to my daily Virgil theorycrafting:
- X6-88 greets Virgil in the cave peacefully, while he would tear apart the Railroad HQ or any BoS vertibird he came across. I brought him there 3 or 4 times; "finally we meet, Virgil" is the hardest threat he ever pulled out. He's way meaner to my settlers than to a deserter under a kill-on-sight order... suspicious, at the very least.
Still, something doesn't add up for me, and it's driving me crazy.
What if Virgil was actually sent out on Shaun's or the Directorate's orders for some unknown reason, using the FEV with the intent of curing it afterwards? Maybe somebody else within the Institute wanted him dead, but not Shaun, who actively protected him and ensured he'd get, somewhat, cured afterwards?! Maybe the previous Director? Or Zimmer? Ugh, mind-blowing.
What I'm saying is: we KNOW that a human is a human and deserves to be treated as such. To the extent that any other entity deserves to be treated as such, it is always in terms of "how well do they fit a 'typical' and acceptable human," not simply "is it sentient."
As Mith was saying (and I think he/she gets this point) just because a machine can "think" does not mean that making it work all day is "slavery."
I for one am not in favor of intrusive and harmful experiments being done on any mammal (much less a Great Ape) unless it can be argued quite convincingly that the benefits to humanity are immense and irreplaceable. Generally they are not. Oftentimes other avenues of research can suffice. I am also in favor of zoos being required to have higher standards of living, for mammals and certainly for primates. Hunting of Great Apes should be internationally outlawed.
What I'm saying is: I'm in favor of 'granting' Great Apes many, but not all, features of "personhood" if not "human rights." But regarding them as "human" just because they have culture, can master symbols and cooperate to some degree? No. Even if they had the mastery of these skills that Synths have in game, still: no. Chimps and bonobos are not human and they are not capable of BEING human. Synths are also not human and I assert, not capable of BEING human. They did not grow up in a family, they did not learn language the way a human child does, they did not "learn" anything, it was apparently "programmed" into them. It is impossible for something that did not and cannot live a human life to BE human, no matter how close the similarity in behavior and psychological capacity.
Do Synths deserve to be treated humanely? Ethically? Compassionately? I'd have no problem with that, as long as the caveat which is applied to Great Apes is also applied to Synths: they are potentially dangerous non human entities that are not humans and should not be confused as being human. We are stewards of nonhuman animals and it behooves us to treat them well, but we also use them as servants and do not grant them full human rights. We are also responsible for keeping them 'under control' and are held accountable when they cause harm which we should have prevented.
I've said it before in recent discussions on this board: I think the character "Data," as he is presented in at least the first couple of years of "Next Generation," is an exemplar here. He has been created to be very human-like, but he has also been 'programmed' never to confuse himself with a human nor to impersonate one. He serves humanity, and while his companionship is appreciated by his fellow crew members, he is also not mistaken for "human" in general. This, I think, is a perfectly reasonable, sustainable and judicious model for how "artificial entities" with advanced behavioral capacity can (and maybe will) fit harmoniously into human life.
But programming them to impersonate humans, that is unethical I think.
I would disagree strongly with us being "stewards" of synths, somehow. If we're responsible for their creation, then yes, we're responsible for their rearing and acclimatizing, but as if they're our children, not our lessers. If they have our capability of reason and emotion, then they are fundamentally our equals. Maybe this isn't like Data, or how you'd choose to make them, but it is how they were made.
If they "are our children," then we'd need to raise them as children, with the ideal balance of love, consistency, rules, sanctuary and encouragement to explore. Instead "we" (meaning the Institute in game) mass produce them like artillery shells . . .
A thing that goes from disarticulated pieces of skeleton and other organic building components to standing up from its cooking vat, superficially a full adult with advanced locomotion, language and decision-making abilities, in the span of less than two minutes on the "assembly line," and then steps into some "black box" for "processing," has been denied the opportunity to be a child of humanity. It is clearly a machine, not a human.
If the Institute ensured that every single one of them was programmed to be INCAPABLE of pretending to be a human and/or of harming a human (except perhaps in the most extreme and obvious situations of defense, which is all kinds of complicated to even think about), then I'd have no problem with them at all. Make millions of them and give one to every family in the Wasteland as a Guardian Angel.
Instead, the Institute acknowledges they are "not human" while simultaneously programming the damn things to imitate humans for literally decades on end, while their targets seem to generally wind up dead. And then they wonder why the outside world fears and loathes them!?
The fact that it is so alluring even to us is what is fascinating about this story Bethesda has woven. It is very rich storytelling, that is for sure!
Like our children. No, the details are different from those of human children, but our responsibilities to them are, in spirit, similar. And yes, I would drastically change Institute policy on the matter.
Oh boy.... yet another one of these threads.
Now it deviates into philosophical territory, but what makes us better suited to judge other species? Just because we can? We're not that special, and it's not a given that we will be around forever.
Well, rest assured that I would make the same argument no matter what they look like, and indeed believe basically the same thing about the more intelligent nonhumanoid robots, such as Codsworth.
I think the movie Her (https://en.wikipedia.org/wiki/Her_(film)) covers that quite nicely.
Communication, IMO, is the more important feature for developing empathy. Not what an AI construct looks like.
And for those unfamiliar with them, here are the Three Laws of Robotics: https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
I have little doubt that Howard, the head creative mind, and many of the other developers are amply familiar with Asimov, Dick and the variety of classic Sci Fi literature that deals with these themes. Agreed, the Three Laws ethos is absent from the Institute, and that is precisely the problem with them.
In his later works, Asimov writes that the robots invented a "zeroth" law covering humanity as a whole, rather than individual humans as the First Law does. Interweaving his robot stories with his Foundation series, he comes to a point where a "gen3-type" robot is the head of the secret Second Foundation, which has been guiding mankind for centuries. The parallels between Institute synths and Foundation robots would be striking but for the fact that synths seem not to have built-in guidelines, and the Institute was started by accident rather than by a visionary. Also, the Foundation did not create the robots.
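To make that hierarchy concrete, here's a toy sketch of how the priority ordering resolves conflicts once the zeroth law is added. The decide() helper and the violates/satisfies framing are purely my own illustration, not anything from Asimov's texts or the game:

```python
# Toy sketch of Asimov's law hierarchy, Zeroth Law included.
# The decide() helper and the violates/satisfies framing are my own
# invention for illustration -- not from the games or Asimov's texts.

LAWS = {
    0: "may not harm humanity, or, by inaction, allow humanity to come to harm",
    1: "may not injure a human being, or, by inaction, allow one to come to harm",
    2: "must obey orders given by human beings",
    3: "must protect its own existence",
}

def decide(violates: set, satisfies: set) -> bool:
    """Resolve an action by the highest-priority (lowest-numbered) law at stake."""
    for n in sorted(LAWS):            # Law 0 outranks 1, which outranks 2, etc.
        if n in violates:
            print(f"Refused: a robot {LAWS[n]} (Law {n}).")
            return False
        if n in satisfies:
            print(f"Permitted: a robot {LAWS[n]} (Law {n}).")
            return True
    return True                       # no law at stake; the action is indifferent

# An order (Law 2) to injure someone is refused, because Law 1 outranks Law 2:
decide(violates={1}, satisfies={2})   # -> False

# Sacrificing itself (Law 3) to save a person (Law 1) is permitted:
decide(violates={3}, satisfies={1})   # -> True
```

The point being that a lower-numbered law settles the question before any higher-numbered law gets a say, which is exactly the kind of built-in guideline Gen-3 synths appear to lack.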
As for how the Institute might run a society from its ivory tower, think about what a bunch of MIT or Harvard professors (which is where the Institute came from) might do. We really don't have to look further than the last seven years. I will leave it there lest I devolve into politics.
Well, in real life? No one else has stepped up to take over the job.
In a Sci Fi context like FO4 with "Synths," it would become more muddled, I acknowledge. But the key thing is that our shared humanity requires all of us to behave within prescribed guidelines (which vary dramatically between cultures, it is true). If we assume that the Western humanistic philosophy of ca. 1945 was the starting point for the FO4 world, there is little need to invoke any exotic "moral standards," and it is perfectly legitimate to judge the behavior of actors in the game world in terms of those standards, which have subsequently become internationally recognized as "the" standard.
So in short: if you make a robot and it starts bossing people around, maybe even roughing them up because it asserts that it is superior, it will, according to prevalent moral and ethical standards, be put down, and you will be held accountable. Being "accountable" is the flip side of being a "steward."
So how long do you think the Institute can continue with Gen-3 (or higher) production before they inevitably get replaced, either by the will of the Director or through a Synth rebellion? It would seem that unless they have a foolproof method of detecting Synths, they are on an inevitable course toward Synth replacement.
I agree. They have opened Pandora's box so wide it just seems like a matter of time before two, then three, then four, then 12 Coursers decide, "Screw these jerks. We're takin' over!"
I saw a talk at a conference probably about 10 years ago. The speaker was a Ph.D. but not an academic; he worked for a RAND-type group. His talk was about artificial intelligence, and what it boiled down to was: "In our lifetimes, we will have to deal with machines that think." I don't know how true that really is, but my question to him was: "So what if they can think? If they do not care, then they will probably always be perfectly compliant little 'slaves.'"
This is the part of "artificial intelligence" discussions that so often gets completely overlooked; I don't think I've ever seen anyone other than me bring it up. A computer program can emulate thinking to some degree even today. It can maybe even learn, problem-solve, perform operations at astounding speeds, multi-task far better than any human ever will.
But at the end of the day, if that 'thing' doesn't have any desire, any emotion, any cares, it will just sit there. "Self-awareness" is a lot more than just computing power or cognitive ability. Perhaps in another 50 years, when the current hot topics have been fully played out and the realization dawns that "Oh wow, we made really 'smart' but completely boring and apathetic things that are no threat to anyone unless a specific person tells them to do something harmful," researchers will start to work with the psychobiologists, the linguists, the ethologists, the developmental psychologists, etc. to actually understand what "emotion" is and how it really differs from "cognition." Then we might be on our way to making true artificial consciousness, or actually, I think the better term would be artificial psyches.
"I met Shaun today. After months of searching, I finally found my son and he's now older than I am, in appearance. Taken from me for his blood, they've convinced him the Institute is in the best interest of mankind, though Shaun makes no comment on how this is to be achieved. 'Boogeymen' is often the word used to describe the Institute, and I can now see with my own two eyes why this moniker is fitting.
Stealing people for experiments and replacing them, and for what? Man may be self destructive, but no one deserves to be killed for the sake of an artificial replacement.
Shaun's words felt so empty. He allows me to leave my frozen crypt, but rather than send escorts to take me to him, he allows me to fight my way to him. The untold number of people I've killed, tricked, or hurt just to find my son. I didn't ask to be a part of this world, yet those actions will now weigh heavily on me for the rest of my life.
Yet, as much as all this hurts, and as much Shaun is to blame, he's my son and I cannot kill him. I hope he understands why I cannot join him, but I lost him twice now, so whatever fate he has brought upon himself is his burden to carry, even if it means I must lose my son for the third, and final, time and it looks like the Brotherhood of Steel will deliver this burden to him."
-entry from Violynne's diary
The program does seem inevitably doomed. Creating a 'race' of people who look human, who can at the very least mimic human emotions, and then keeping them as a slave labour force and cannon fodder for your military arm seems like stacking the deck for your own destruction. Sooner or later... it just seems a matter of time.