If you have doubts about supporting the Institute

Post » Tue Jan 12, 2016 5:34 am

When it comes to ensuring The Director's reforms remain in place: there's also the Deus Ex Machina that is Cabot. :ph34r:




Regardless of where he takes The Institute, my character plans to live a very, very long time along with his ageless Synth-Wife.

FoReVeR_Me_N
 
Posts: 3556
Joined: Wed Sep 05, 2007 8:25 pm

Post » Tue Jan 12, 2016 10:36 am

Deathclaw zoo admission would be how many caps?

Crystal Clarke
 
Posts: 3410
Joined: Mon Dec 11, 2006 5:55 am

Post » Tue Jan 12, 2016 5:10 am

About University Point, I doubt we have all the data to figure it out. IMHO it was either an energy source or a weapon that the Institute wanted badly, because they felt it could be dangerous or useful to their interests.


Probably the people there just refused harshly, and the Institute overreacted as any other powerhouse would.


Refuse to hand over hi-tech weaponry to a BoS patrol that asks for it, and see how that ends :P



Shaun wasn't raised by Kellogg in any possible way.


Shaun was just taken from him and handed over to the Institute shortly after; there's no way they would have let their clean DNA source be corrupted out in the wastelands for so long.


What we see in the memory sequence is just Synth-Shaun, who was sent to DC to live with him until he was taken away by X6-88 and Kellogg himself discovered what kind of trick had been played on him.


By that point, Shaun is already the Director: the one who sent Synth-Shaun there in the first place, and who "recalled" him to await you at the Institute as part of his own plot, trying to put all the puzzle pieces into position.


And as for Kellogg: he was reckless and merciless, but evil? Well, he did nothing truly evil. He's a killer, true, but that's pretty much his job.


Not much different from our character when he or she kills whoever they're told to kill, whether for caps, for MacCready, or Danse himself.


The Diamond City guards shooting at people in the market square aren't any nicer, to be honest, if we take it that way, and they don't even take a stand when there's a gunfight or attempted murder within their own town.



On the rest I might agree. The good thing is that all the factions tend to be grayish, and all of them do something "wrong" to some extent that they have to recover from.


Every faction has internal troubles, and each deals with dissent by its own means. And on that note, I come back to my daily Virgil theorycrafting:



- X6-88 greets Virgil in the cave peacefully, while he would tear apart the Railroad HQ or any BoS vertibird he came across. I brought him there 3 or 4 times, and "finally we meet, Virgil" was the hardest threat he ever pulled out; he's way meaner to my settlers than to a supposedly kill-on-sight deserter... suspicious, at the very least.


Still, something doesn't add up for me, and it's driving me crazy.


What if Virgil was actually sent out on Shaun's or the Directorate's orders for some unknown reason, using the FEV with the intent of curing it afterwards? Maybe somebody else within the Institute wanted him dead, but it wasn't Shaun, who actively protected him and ensured he'd get, somewhat, cured afterwards?! Maybe the previous Director? Or Zimmer? Ugh, mind-blowing. :nope:

Kitana Lucas
 
Posts: 3421
Joined: Sat Aug 12, 2006 1:24 pm

Post » Tue Jan 12, 2016 11:15 am


What I'm saying is: we KNOW that a human is a human and deserves to be treated as such. To the extent that any other entity deserves to be treated as such, it is always in terms of "how well do they fit a 'typical' and acceptable human," not simply "is it sentient."



As Mith was saying (and I think he/she gets this point), just because a machine can "think" does not mean that making it work all day is "slavery."



I for one am not in favor of intrusive and harmful experiments being done on any mammal (much less a Great Ape) unless it can be argued quite convincingly that the benefits to humanity are immense and irreplaceable. Generally they are not; oftentimes other avenues of research can suffice. I am also in favor of zoos being required to have higher standards of living, for mammals and certainly for primates. Hunting of Great Apes should be internationally outlawed.



What I'm saying is: I'm in favor of 'granting' Great Apes many, but not all, features of "personhood," if not "human rights." But regarding them as "human" just because they have culture, can master symbols, and cooperate to some degree? No. Even if they had the mastery of these skills that Synths have in-game, still: no. Chimps and bonobos are not human and they are not capable of BEING human. Synths are also not human and, I assert, not capable of BEING human. They did not grow up in a family, they did not learn language the way a human child does; they did not "learn" anything, it was apparently "programmed" into them. It is impossible for something that did not and cannot live a human life to BE human, no matter how close the similarity in behavior and psychological capacity.



Do Synths deserve to be treated humanely? Ethically? Compassionately? I'd have no problem with that, as long as the caveat which is applied to Great Apes is also applied to Synths: they are potentially dangerous nonhuman entities that should not be mistaken for human. We are stewards of nonhuman animals and it behooves us to treat them well, but we also use them as servants and do not grant them full human rights. We are also responsible for keeping them 'under control' and are held accountable when they cause harm which we should have prevented.



I've said it before in recent discussions on this board: I think the character "Data," as he is presented in at least the first couple of years of "Next Generation," is an exemplar here. He was created to be very humanlike, but he has also been 'programmed' never to confuse himself with a human nor to impersonate one. He serves humanity, and while his companionship is appreciated by his fellow crew members, he is not generally mistaken for "human" either. This, I think, is a perfectly reasonable, sustainable, and judicious model for how "artificial entities" with advanced behavioral capacity can (and maybe will) fit harmoniously into human life.



But programming them to impersonate humans, that is unethical I think.

Dagan Wilkin
 
Posts: 3352
Joined: Fri Apr 27, 2007 4:20 am

Post » Tue Jan 12, 2016 3:01 pm

I would disagree strongly with us being "stewards" of synths, though. If we're responsible for their creation, then yes, we're responsible for their rearing and acclimatizing, but as if they're our children, not our lessers. If they have our capacity for reason and emotion, then they are fundamentally our equals. Maybe this isn't like Data, or how you'd choose to make them, but it is how they were made.

Hazel Sian ogden
 
Posts: 3425
Joined: Tue Jul 04, 2006 7:10 am

Post » Tue Jan 12, 2016 4:15 pm

If they "are our children," then we'd need to raise them as children, with the ideal balance of love, consistency, rules, sanctuary and encouragement to explore. Instead "we" (meaning the Institute in game) mass produce them like artillery shells . . .



A thing that goes from disarticulated pieces of skeleton and other organic building components to standing up from its cooking vat, superficially a full adult with advanced locomotion, language, and decision-making abilities, in the span of less than two minutes on the "assembly line," and then steps into some "black box" for "processing," has been denied the opportunity to be a child of humanity. It is clearly a machine, not a human.



If the Institute ensured that every single one of them was programmed to be INCAPABLE of pretending to be a human and/or of harming a human (except perhaps in the most extreme and obvious situations of defense, which is all kinds of complicated to even think about), then I'd have no problem with them at all. Make millions of them and give one to every family in the Wasteland as a Guardian Angel.



Instead, the Institute acknowledges they are "not human" while simultaneously programming the damn things to imitate humans for literally decades on end, while their targets seem to generally wind up dead. And then they wonder why the outside world fears and loathes them!?



The fact that it is so alluring, even to us, is what is fascinating about the story Bethesda has woven. It is very rich storytelling, that's for sure!

Valerie Marie
 
Posts: 3451
Joined: Wed Aug 15, 2007 10:29 am

Post » Tue Jan 12, 2016 8:23 am

Like our children. No, the details are different from those of human children, but our responsibilities to them are, in spirit, similar. And yes, I would drastically change Institute policy on the matter.

Leilene Nessel
 
Posts: 3428
Joined: Sun Apr 15, 2007 2:11 am

Post » Tue Jan 12, 2016 2:19 pm

Oh boy.... yet another one of these threads. <_<

Sasha Brown
 
Posts: 3426
Joined: Sat Jan 20, 2007 4:46 pm

Post » Tue Jan 12, 2016 7:05 am

I think that there is a big problem when it comes to discussing synths versus, say, an ape. The fact that synths look and act human only serves to cloud the issue. Because they look and act like us it makes it very easy to empathize with them. It becomes very easy to project our own thoughts, feelings and desires onto them, even if they don't actually share them. It's very easy to get caught up in emotions surrounding them.
Dean Ashcroft
 
Posts: 3566
Joined: Wed Jul 25, 2007 1:20 am

Post » Tue Jan 12, 2016 5:24 pm


Now this deviates into philosophical territory, but what makes us better suited to judge other species? Just because we can? We're not that special, and it's not a given that we will be around forever.

MISS KEEP UR
 
Posts: 3384
Joined: Sat Aug 26, 2006 6:26 am

Post » Tue Jan 12, 2016 2:31 pm


Have you seen the Skyrim forums a month after release, with all the civil war threads? Yeah, expect this to be waaaaaaaay bigger and more plentiful. We'll be talking about this stuff until 2020.
lolli
 
Posts: 3485
Joined: Mon Jan 01, 2007 10:42 am

Post » Tue Jan 12, 2016 4:23 am

Well, rest assured that I would make the same argument no matter what they look like, and indeed believe basically the same thing about the more intelligent nonhumanoid robots, such as Codsworth.

Isabel Ruiz
 
Posts: 3447
Joined: Sat Nov 04, 2006 4:39 am

Post » Tue Jan 12, 2016 7:27 am


I think the movie Her (https://en.wikipedia.org/wiki/Her_(film)) covers that quite nicely.



Communication, IMO, is the more important feature for developing empathy. Not what an AI construct looks like.

Christine Pane
 
Posts: 3306
Joined: Mon Apr 23, 2007 2:14 am

Post » Tue Jan 12, 2016 5:37 pm

For every Terminator/Skynet there is a Bicentennial Man, AI, or Her giving an example of a synthetic being that doesn't end up destroying the world.


I'm guessing that there was no Asimov in the Fallout universe to create the three laws.
Travis
 
Posts: 3456
Joined: Wed Oct 24, 2007 1:57 am

Post » Tue Jan 12, 2016 7:16 pm

And for those unfamiliar with the Three Laws of Robotics: https://en.wikipedia.org/wiki/Three_Laws_of_Robotics





I have little doubt that Howard, the head creative mind, and many of the other developers are amply familiar with Asimov, Dick, and the variety of classic sci-fi literature that deals with these themes. Agreed, the Three Laws ethos is absent from the Institute, and that is precisely the problem with them.

Farrah Barry
 
Posts: 3523
Joined: Mon Dec 04, 2006 4:00 pm

Post » Tue Jan 12, 2016 6:49 pm



Interesting that you should mention Asimov and the Three Laws of Robotics. Unless I'm misreading your statement, you also seem to be falling into the same trap as many before you. Many people tout the three laws as an ideal to base robots on, but completely miss a major point of the stories. The three laws are actually NOT good; the robots' understanding of the laws evolved. The first law states that a robot may not harm a human or, through inaction, allow harm to come to a human. Many robots come to the conclusion that the only way to fulfill the first law is to control humanity: in order to protect humanity, humanity must be prevented from harming each other. Oh, and a few casualties along the way are acceptable as long as it ensures the majority are protected.


In other words, if there had been an Asimov to come up with the three laws, we'd be right back to robots taking over the world.


Although, there is a note on a terminal in the General Atomics Galleria written by an Isaac.
Horse gal smithe
 
Posts: 3302
Joined: Wed Jul 05, 2006 9:23 pm

Post » Tue Jan 12, 2016 12:01 pm

Well, the whole three laws thing is nothing but a concept; it's not like it's a law of physics or anything. Another point to make is that humans break laws all the time, so what's stopping a synth or robot from breaking its programming, or having its programming broken by a glitch or an outside source like a hacker, and forgetting said laws?
Prisca Lacour
 
Posts: 3375
Joined: Thu Mar 15, 2007 9:25 am

Post » Tue Jan 12, 2016 7:33 pm



Of course it's just a concept, and that's the problem. The robots' understanding of that concept evolves. Robots are supposed to protect humanity. The only way to protect humanity is to protect humans from themselves. The only way to protect humans from themselves is to conquer and control humanity. It's a very reasonable and logical progression of thought; no hacking or breaking of programming is even needed. Despite the fact that they are now conquering and killing humans, they are still fulfilling the three laws. Their understanding of the concept of the three laws, and of how to fulfill them, has simply changed.
Your Mum
 
Posts: 3434
Joined: Sun Jun 25, 2006 6:23 pm

Post » Tue Jan 12, 2016 3:33 pm

The three laws were purposely flawed because Asimov was trying to write an interesting story. If the three laws were perfect, there would have been no conflict in the stories, at least as far as the robots were concerned.


I am sure the people at Bethesda are aware of the three laws, and that was pretty much the point of the RobCo Robot Emporium.


The FEV tests started well over 100 years prior to the start of the game and were supposedly integral to creating synthetic organs; this is based on the years given in the research holotapes in the lab. It makes sense, seeing as how FEV makes the carrier immune to further radiation damage.


The Warwick notes say that proof of the experiment has to be destroyed, but they don't specifically mention the family. That could be as benign as destroying any remaining test seeds and plants when Roger is recalled.


As I mentioned before, if synths have free will, then they are a threat, but we might also consider them intelligent beings worthy of consideration and of being treated as unique individuals. If they don't have free will, then they're not nearly the threat that Maxson makes them out to be, because they are slaves to their programming and represent the same level of threat as the wandering sentry bots.
Leah
 
Posts: 3358
Joined: Wed Nov 01, 2006 3:11 pm

Post » Tue Jan 12, 2016 7:07 pm

In his later works, Asimov writes that the robots invented a "zeroth" law covering humanity as a whole rather than individual humans, as the first law does. Interweaving his robot stories with his Foundation series, he comes to a point where a "Gen-3-type" robot is the head of the secret Second Foundation, which has been guiding mankind for centuries. The parallels between the Institute's synths and the Foundation's robots would be striking but for the fact that synths seem not to have built-in guidelines, and the Institute was started by accident rather than by a visionary. Also, the Foundation did not create the robots.



As for how the Institute might run a society from its ivory tower, think about what a bunch of MIT or Harvard professors (which is where the Institute came from) might do. We really don't have to look farther than the last seven years. I will leave it there, lest I devolve into politics.

Darlene DIllow
 
Posts: 3403
Joined: Fri Oct 26, 2007 5:34 am

Post » Tue Jan 12, 2016 9:34 am


Well, in real life? No one else has stepped up to take over the job ;)



In a sci-fi context like FO4, with "Synths," it would become more muddled, I acknowledge. But the key thing is that our shared humanity necessitates that all of us behave within prescribed guidelines (which vary dramatically between cultures, it is true). If we assume that the Western humanistic philosophy of ca. 1945 was the starting point for the FO4 world, there is little need to invoke any exotic "moral standards," and it is perfectly legitimate to judge the behavior of actors in the game world in terms of those standards, which have subsequently become internationally recognized as "the" standard.



So in short: if you make a robot and it starts bossing people around, maybe even roughing them up because it asserts that it is superior, it will, according to prevalent moral and ethical standards, be put down, and you will be held accountable. Being "accountable" is the flip side of being a "steward."

Monique Cameron
 
Posts: 3430
Joined: Fri Jun 23, 2006 6:30 am

Post » Tue Jan 12, 2016 3:05 pm

So how long do you think the Institute can continue Gen-3 (or higher) production before they inevitably get replaced, either by the will of the Director or through a Synth rebellion? It would seem that unless they have a foolproof method of detecting Synths, they are on an inevitable course toward Synth replacement.

Marilú
 
Posts: 3449
Joined: Sat Oct 07, 2006 7:17 am

Post » Tue Jan 12, 2016 8:28 am


I agree. They have opened Pandora's box so wide that it just seems like a matter of time before two, then three, then four, then 12 coursers decide, "Screw these jerks. We're takin' over!"



I saw a talk at a conference probably about 10 years ago. The guy was a Ph.D. but not an academic; he worked for a RAND-type group. His talk was about artificial intelligence, and what it boiled down to was: "In our lifetimes, we will have to deal with machines that think." I don't know how true that really is, but my question to him was: "So what if they can think? If they do not care, then they will probably always be perfectly compliant little 'slaves.'"



This is the part of "artificial intelligence" discussions that so often gets completely overlooked; I don't think I've ever seen anyone other than me bring it up. A computer program can emulate thinking to some degree even today. It can maybe even learn, problem-solve, perform operations at astounding speeds, and multitask far better than any human ever will.



But at the end of the day, if that 'thing' doesn't have any desire, any emotion, any cares, it will just sit there. "Self-awareness" is a lot more than just computing power or cognitive ability. Perhaps in another 50 years, when the current hot topics have been fully played out and the realization dawns that "oh wow, we made really 'smart' but completely boring and apathetic things that are no threat to anyone unless a specific person tells them to do something harmful," they will start to focus on helping the psychobiologists, the linguists, the ethologists, the developmental psychologists, etc. to actually understand what "emotion" is and how it really differs from "cognition." Then we might be on our way to making true artificial consciousness, or actually I think the better word would be artificial psyches.

DAVId Bryant
 
Posts: 3366
Joined: Wed Nov 14, 2007 11:41 pm

Post » Tue Jan 12, 2016 6:31 am

"I met Shaun today. After months of searching, I finally found my son and he's now older than I am, in appearance. Taken from me for his blood, they've convinced him the Institute is in the best interest of mankind, though Shaun makes no comment on how this is to be achieved. 'Boogeymen' is often the word used to describe the Institute, and I can now see with my own two eyes why this moniker is fitting.



Stealing people for experiments and replacing them, and for what? Man may be self-destructive, but no one deserves to be killed for the sake of an artificial replacement.



Shaun's words felt so empty. He allowed me to leave my frozen crypt, but rather than send escorts to bring me to him, he let me fight my way to him. The untold number of people I've killed, tricked, or hurt, just to find my son. I didn't ask to be a part of this world, yet those actions will now weigh heavily on me for the rest of my life.



Yet, as much as all this hurts, and as much as Shaun is to blame, he's my son and I cannot kill him. I hope he understands why I cannot join him. I have lost him twice now, so whatever fate he has brought upon himself is his burden to carry, even if it means I must lose my son for the third, and final, time. And it looks like the Brotherhood of Steel will deliver this burden to him."


-entry from Violynne's diary

Euan
 
Posts: 3376
Joined: Mon May 14, 2007 3:34 pm

Post » Tue Jan 12, 2016 12:52 pm


The program does seem inevitably doomed: creating a 'race' of people who look human, who can at the very least mimic human emotions, and then keeping them as a slave labour force and cannon fodder for your military arm seems like loading the deck for your own destruction... sooner or later, it just seems a matter of time.

Jason Rice
 
Posts: 3445
Joined: Thu Aug 16, 2007 3:42 pm

