Gen 3 Synth Self-awareness discussion

Post » Tue Mar 08, 2016 8:21 pm

We know that some Gen 3 Synths appear to have self-awareness, while other Gen 3s appear to lack it.



Do we think self-awareness is innate in some Synths and not others? Are some Synths physically predisposed to become self-aware because of random variance in their manufacture?



Or do we think self-awareness is emergent with lived experience? Does a Synth typically attain self-awareness after living a certain amount of time, and if so, are we talking wetware age or software age? If every Gen 3 Synth in service were factory reset monthly, would self-awareness continue to emerge or would it cease?



As a follow-up question: would a Synth kept in isolation with no social exposure attain self-awareness, or does the phenomenon require social contact? These are the kinds of questions that Institute research should answer, but they have been left to us to theorize about.



What say you forum dwellers?

User avatar
Brandon Bernardi
 
Posts: 3481
Joined: Tue Sep 25, 2007 9:06 am

Post » Wed Mar 09, 2016 2:20 am

I'm sure this thread will end in thought-provoking, well-written discussion

User avatar
Jeneene Hunte
 
Posts: 3478
Joined: Mon Sep 11, 2006 3:18 pm

Post » Wed Mar 09, 2016 2:08 am

where did you see any gen3's that lack it?



and for your other questions, we can assume awareness goes along with (not saying it necessarily depends on) hardware configuration, since we have plenty of evidence that humans, all born with pretty much the same hardware configuration, will eventually become aware once they're "turned on".


i don't see why this should be different for synths. if you have hardware designed so it can become aware and you turn it on, it will, no matter if that's inside a shoe box, in outer space or in tijuana. if synths operate anything like humans anyway, then in your follow-up example they'd become depressive, insane, and sooner or later actually die. this has been shown to happen in experiments conducted by some people in the 19th century iirc, where newborns were raised in total isolation because they wanted to see if they'd develop some "inborn, natural human language"


the far harder question to answer, in my eyes, is what "aware" actually means. it's kind of a silver screen term, you know; everybody kind of reads into it pretty much whatever they please :-)

User avatar
Tracy Byworth
 
Posts: 3403
Joined: Sun Jul 02, 2006 10:09 pm

Post » Tue Mar 08, 2016 2:58 pm

I'm sorry did you say something?







I was too busy destroying toasters.

User avatar
herrade
 
Posts: 3469
Joined: Thu Apr 05, 2007 1:09 pm

Post » Wed Mar 09, 2016 2:55 am

With respect to Gen 3s lacking self-awareness, I refer to the majority of Gen 3s who appear cheerful and mindlessly obedient, lacking independent thought and drive, vs. those that start thinking about their own personal wants and desires.



The comparison to humans becomes muddied when we factor in the ability to perform mind-wipes and resets. If you could re-image a human baby's mind back to its day-one state, would it ever become aware?

User avatar
Add Me
 
Posts: 3486
Joined: Thu Jul 05, 2007 8:21 am

Post » Wed Mar 09, 2016 12:56 am

yeah, just like with humans, and you'd not deny THEM awareness ,-)


no, seriously, you can talk to a couple of them at the institute, and they pretty much explain that they _want_ to remain at the institute, while it's only the human scientists who deny their awareness. (can't 100% rule out that i got something wrong there in my 1st game though, and haven't yet entered the institute in my second to check)



the problem, from the institute's view, i think isn't synths growing "aware" (which i think for them is just an academic question not directly related to synths wanting to escape), but wanting to be independent.


you'll get institute scientists that tell you synths a) are not aware and b) it's the rr that kind of "programs" the escape wish into them (dunno). it's this wanting to escape that bugs them, not so much WHY they'd want that



btw, one of my best laughs was one bos mr gutsy onboard the prydwen:


"machines with free will? my programming says that's an abomination!" :-)))


(even though he meant it as an insult towards curie - if he knew what type of MIND operates this particular synth body...! :-)===)




why, there have been countless examples of human mind wipes and resets (keyword cold war a.o.)




sorry, can't seem to parse the question.


if you're asking whether, if i built a "newborn" from scratch, it'd grow conscious: hard to tell. development of the brain doesn't start (or end) with birth but pretty much with conception, so i guess if you could bring that, uh, organism to actually *live*, it'd likely grow conscious, but not necessarily at an adequate stage of development. baby brains are pretty flexible though, and a newborn's not yet fully self-aware anyway, so it might as well do fine.

User avatar
Betsy Humpledink
 
Posts: 3443
Joined: Wed Jun 28, 2006 11:56 am

Post » Wed Mar 09, 2016 3:11 am

I'm sure it will end with people talking about toasters and toilet paper having rights like every other time.

User avatar
stacy hamilton
 
Posts: 3354
Joined: Fri Aug 25, 2006 10:03 am

Post » Tue Mar 08, 2016 1:08 pm

What I was getting at was: if you could revert an infant's mind to its day-one state (as occurs when a Synth is reset), would the infant be able to become self-aware? Of course that question is predicated on the idea that a Synth on day one is not yet self-aware and acting only according to its programming.



When does a loaded ROM of pre-programmed behaviour stop being that and start being a person?

User avatar
Rachell Katherine
 
Posts: 3380
Joined: Wed Oct 11, 2006 5:21 pm

Post » Tue Mar 08, 2016 2:09 pm



Smart man.
User avatar
LittleMiss
 
Posts: 3412
Joined: Wed Nov 29, 2006 6:22 am

Post » Tue Mar 08, 2016 2:11 pm

you got that wrong about the mind wipes.


they're not "resets".


they're a kind of brainwash, overriding their memories with artificial ones to reduce their risk of getting caught (and they're optional; the rr doesn't force them onto their clients, but most decide to have one. this is clearly stated on multiple occasions by both rr and synth clients)


anyway, they never _stop_ being aware during the process (not in any more severe way than, like, sleeping anyway); they sit in the holochair thingie and basically watch a vid.


so there never is anything like a tabula rasa or empty mind or zero state or something in the first place


and yes, i say your infant would become self-aware (if you can get it to be technically fully operable anyway :-). simply because its whole setup is made to host consciousness. it's in our genes, our soul, or wherever your respective beliefs like to put it, so if the host is operable, it'll inevitably awake.




one thing i actually never was able to find out or even get a clue about:


what hardware does a synth mind run on?


brains?


computers?


biocomputers?


?


any clues anybody?



....and one more thing: there's one thing that always struck me as totally odd about the institute and their whole internal "are they aware" debate. if they're such top-notch science cracks, capable of creating life from scratch and wires, how on earth can it be that in all the time they must've worked on all this, they didn't manage to come up with something like a friggin' TURING TEST????? i mean come on, really... :-)

User avatar
Rusty Billiot
 
Posts: 3431
Joined: Sat Sep 22, 2007 10:22 pm

Post » Wed Mar 09, 2016 3:34 am


People often misunderstand this, but passing a Turing Test doesn't mean a robot is fully sentient and self-aware. It's just that a human can be fooled into thinking they are in a conversation with a human when they actually aren't.



The easy counter to a machine passing the Turing Test being considered "fully intelligent" is the Chinese Room thought experiment. The Chinese Room is a hypothetical wherein an individual is placed in a locked room with access to a Chinese/English dictionary. If someone slips a paper under the door and asks the person in the room to translate a word from Chinese to English, and the person uses the dictionary and slides the correctly translated word back under the door, then the question follows: does the person inside the room understand Chinese, or is he just spitting out the translation without actually "understanding" anything? To the person on the other side of the door, the question is basically impossible to answer, because for all he knows, the person inside is actually fluent in Chinese.
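The "spitting out output without understanding" idea is easy to sketch in code. In this toy Python "room" (word pairs are simplified pinyin, purely illustrative), a plain lookup table stands in for the person with the dictionary and returns correct answers while understanding nothing:

```python
# A toy "Chinese Room": a lookup table plays the person with the
# dictionary. It produces correct-looking answers with zero
# understanding of either language.
DICTIONARY = {
    "ni hao": "hello",
    "xie xie": "thank you",
    "zai jian": "goodbye",
}

def room(note: str) -> str:
    """Slide a note under the door; a translated note comes back out."""
    return DICTIONARY.get(note.lower().strip(), "(no entry in the dictionary)")

print(room("ni hao"))  # hello
```

From the outside, the replies are indistinguishable from a fluent speaker's, which is exactly the point of the thought experiment.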

User avatar
lucile
 
Posts: 3371
Joined: Thu Mar 22, 2007 4:37 pm

Post » Tue Mar 08, 2016 2:56 pm

Gen 1+2 = Terminators


Gen 3 = Arnold Terminator learning to smile :)



lol :D

User avatar
Nathan Barker
 
Posts: 3554
Joined: Sun Jun 10, 2007 5:55 am

Post » Tue Mar 08, 2016 4:32 pm

Not to nitpick, but it does seem like there is a tabula rasa empty state during the process, as seen with the Synth that would later come to house Curie: the mind erasure worked but the mind upload failed, leaving a vegetative body.



As for what hardware a Synth mind runs on, I think it's apparent that they have grey matter with electro-mechanical "implants", otherwise known as Synth components. Basically, Gen 3 Synths appear to be 3D-printed, cybernetically enhanced humans. Their flesh is human, their DNA is indistinguishably human (the Brotherhood would have noticed when taking DNA samples if one was not human), and the only non-organic part that appears to be found in them is the Synth component.

User avatar
clelia vega
 
Posts: 3433
Joined: Wed Mar 21, 2007 6:04 pm

Post » Wed Mar 09, 2016 2:08 am

The Institute scientists are in denial because they don't want to stop using the Synths as slaves: if Synths were self-aware and sentient, there would be moral and other tough implications, so it's better just to say that they are only machines running on programming that makes them "seem" self-aware. There are Institute scientists who think Synths are sentient beings, but they are in the minority, and from the conversations you overhear, they are told to drop the subject.



The human brain can also be wiped, as with amnesia, but we don't yet know how to recreate amnesia without causing brain damage. Most of the higher functions of the brain are learned, but babies have innate knowledge of certain things like depth and height. A baby crawling near a ledge somehow knows that falling off is dangerous.



Basically, I don't see much difference (other than hardware) between Gen 3 Synths and humans. They are both sentient, intelligent, and self-aware beings. The fact that no one can tell a Synth from a human is just further proof. Synths are programmed, but so are humans. What do you think school is? And even beyond school, norms and behaviors are learned so that humans know how to behave in society. The spitting experiment shows this: even though saliva is supposedly sterile, humans don't like drinking a beverage or eating food that someone else has spit on. This is learned behavior.

User avatar
Steve Fallon
 
Posts: 3503
Joined: Thu Aug 23, 2007 12:29 am

Post » Tue Mar 08, 2016 9:33 pm

Reading an entry about a certain "synth plant" in the SRB, it seems that they become more self-aware while out and about or on "mission". It was noted that said synth wanted to become a courser when finished with its mission, but in indicating that it wanted this position it was judged to be becoming too self-aware, and it was denied the courser job, along with another reason that was a bit comical.

User avatar
Robert Jr
 
Posts: 3447
Joined: Fri Nov 23, 2007 7:49 pm

Post » Tue Mar 08, 2016 10:53 pm

hm ok, valid. so, do we actually have anything BETTER than a turing test?


i mean, we don't even have anything like a commonly agreed definition of what "awareness" or "consciousness" actually _is_, or do we? (none i'd heard of anyway)




ok now though, between "being sentient and self-aware" and "impossible to say if it's sentient and self-aware" - is there really any difference? or _where_ is the actual difference?


i mean, honestly, do we have any real means to tell for OURSELVES whether we are actually sentient and self-aware, and not "programmed" to think we are? (which doesn't even need any "creational design" approaches, should you think that's where i'm going (it's not); thought could just as well be a circumstantial reflex like pain or whatever.) except that it's ourselves who define the terms in the first place, of course. or WOULD define the terms, if we actually had any valid definitions (that i knew of) :-)




well, that's actually even inferior to a turing test in terms of gaining information, kind of a "turing test light"


since i don't have to assume they're fluent in chinese, i could just as well assume there's some dude with a dictionary on the other side :-)


and within that test, there's no way i can gather further information. to really be able to tell anything, i'd have to, like, ask stg contextual, and then it'd be more or less back to a turing test



what i actually think though is, that the whole question ("how can we tell...") is kind of obsolete (other than "how can we prove"):


so ok i came up with that neuronal network ai thingie that's capable of learning and communicating in language and all.


i say _I_ came up with it so i can, in this scenario, just ignore any "there might be programmed functions to say that" assumptions, because i _know_ i didn't add them.


so, if i have that thing running for some time, chat with it and stuff, and some day, without me asking or suggesting anything, it said "hey, btw, it just occurred to me the other night: i am! isn't that odd?" - what reason would i have to doubt this? :-)

User avatar
Rowena
 
Posts: 3471
Joined: Sun Nov 05, 2006 11:40 am

Post » Wed Mar 09, 2016 2:24 am

so gen3's basically are kindergarten cop? :-)

User avatar
Laura Elizabeth
 
Posts: 3454
Joined: Wed Oct 11, 2006 7:34 pm

Post » Tue Mar 08, 2016 2:34 pm

One thing to point out is that some people draw the line between Synth and person at the ability to erase and reload memory into a Synth but not a human.



However in canon we actually see humans who get their memories erased over and over. So if a Synth is not a person because of memory tampering, what would that make the inhabitants of Vault 112?



But I digress, that's off-topic. Although an interesting segue, I wouldn't argue the inhabitants of Vault 112 were not sentient and self-aware... I don't think anyway... now I'm having doubts.

User avatar
Charlotte Henderson
 
Posts: 3337
Joined: Wed Oct 11, 2006 12:37 pm

Post » Tue Mar 08, 2016 8:46 pm


Exactly. That's the entire point. How do you know if the guy knows Chinese or if he's just spitting out a translation using a dictionary? Without looking behind the door, you can't know.



The analogy then follows: how do you really know whether an AI program is "thinking" and "feeling", or simply spitting out output from a given input?



The philosopher John Searle famously posited this thought experiment as a counter to the idea of "Strong AI." Basically, the Chinese Room illustrates that using a Turing Test to prove the 'intelligence' of an AI program is insufficient as evidence.







There is no surefire solution to the Chinese Room problem, no.


User avatar
Elea Rossi
 
Posts: 3554
Joined: Tue Mar 27, 2007 1:39 am

Post » Tue Mar 08, 2016 6:12 pm

We have multiple self-aware Mr. Handys with distinct personalities, and there was even a self-aware sink.
User avatar
Mariaa EM.
 
Posts: 3347
Joined: Fri Aug 10, 2007 3:28 am

Post » Wed Mar 09, 2016 3:24 am

Well, are they really self-aware, or do they just behave like they are? Big difference. For all intents and purposes, the synth component is there to do one thing: emulate human behaviour. The synth component is programmed by humans, to emulate humans. I would think that learning by observing is also a thing; supercomputers can learn from their mistakes by not making them again, or at least by avoiding the circumstances that led to said mistake (like Deep Blue, for example). But if the feeling of self-awareness is programmed in, to make them as life-like as possible, they are still nothing more than a highly advanced machine. An organic toaster with an advanced computer component.



Hope it makes "sense", a bit tired here :)

User avatar
jeremey wisor
 
Posts: 3458
Joined: Mon Oct 22, 2007 5:30 pm

Post » Tue Mar 08, 2016 10:01 pm

well i guess we'd better come up with stg acceptable soon then, cuz i'm sure google's working on it :-)

User avatar
Anna Beattie
 
Posts: 3512
Joined: Sat Nov 11, 2006 4:59 am

Post » Tue Mar 08, 2016 2:31 pm

what a depressing moment of insight...


I AM!

...a sink.


:-D

User avatar
john palmer
 
Posts: 3410
Joined: Fri Jun 22, 2007 8:07 pm

Post » Tue Mar 08, 2016 11:21 am



By that logic nothing is aware, since we're meat-based machines ourselves.
User avatar
Benjamin Holz
 
Posts: 3408
Joined: Fri Oct 19, 2007 9:34 pm

Post » Wed Mar 09, 2016 2:42 am


There clearly is a line to be drawn.



Are AI personalities like Siri self-aware? Of course not.



Are Mr. Handys? Probably not on the whole, but maybe some are, like Codsworth.



Are Gen-2's?



Are Gen-3's?




Who draws the line, and where? When does "using your personal robot assistant" become "enslaving a sentient being"? All the Chinese Room experiment shows is that there really is no way to set that line with absolute 100% certainty.

User avatar
Misty lt
 
Posts: 3400
Joined: Mon Dec 25, 2006 10:06 am


Return to Fallout 4