Your stance on android rights? #2

Post » Wed Dec 02, 2015 10:10 am

I honestly think this is one of the most interesting implications posed by the innate human belief in agency. I assume you mean that if you saw a machine (e.g. a flamethrower, or some piece of industrial equipment) set a human on fire, and you determined it wasn't "self-aware", you wouldn't feel an instinctual need to punish it? You might act to prevent it from doing so in the future, but you would see it as an object whose behavior is determined by physical variables and might still provide value to you? Yet it seems clear you assume the human is capable of making "choices", and would therefore instantly condemn the human to destruction. I find it interesting how an ultimately incorrect assumption of human "self-awareness", and therefore agency, compels us to treat humans in such a more severe and often counterproductive way.

User avatar
DAVId Bryant
 
Posts: 3366
Joined: Wed Nov 14, 2007 11:41 pm

Post » Wed Dec 02, 2015 1:13 pm

As the humble Prez of the Human Elitists, I give you all this inspirational video as you prepare for battle against the evil Androids.

https://www.youtube.com/watch?v=A-yZNMWFqvM

User avatar
[Bounty][Ben]
 
Posts: 3352
Joined: Mon Jul 30, 2007 2:11 pm

Post » Wed Dec 02, 2015 8:47 am

This is indeed pretty much what I mean. But I find it particularly interesting because I do, intellectually, believe a truly sentient AI - or, should I say, an artificially built copy of a human brain - would in fact be a human being, no matter the way it was conceived or the 'form' it was put into. In other words, I do consider humans to be advanced organic 'machines' that can theoretically be reproduced with the use of advanced enough technology.

And yet... like I said, while I would not hesitate to permanently remove an 'actual' human that I considered morally reprehensible from society, I would still attempt to figure out what made the 'AI' act the way it did, and try to 'repair' it, rather than feel an immediate emotional need to take 'vengeance' upon it. I'm mostly talking about hypothetical sentient AIs here, since it's pretty obvious 'punishing' a flamethrower would be a ridiculous notion. And now that I've thought about it some more, maybe my innate human superiority complex doesn't actually play a part in it. Maybe, in my case, the thought stems from the idea that the very material humans are made of is so much more fallible and far less malleable than whatever (probably sturdy) substance we would make our potential androids out of? What I mean is, the hypothetical homicidal AI would be so much easier and so much more profitable to alter than the human flesh-based neural system. To simplify: the human psychopath is more than likely a lost cause, but the AI can be salvaged, precisely because of its superiority?

User avatar
Hella Beast
 
Posts: 3434
Joined: Mon Jul 16, 2007 2:50 am

Post » Wed Dec 02, 2015 10:43 am

You'd consider something human no matter what form it was in? Just because of a sentient conversational AI? Race is defined by genetic and biological characteristics. How do you relate an android to something of the human race?

User avatar
Alada Vaginah
 
Posts: 3368
Joined: Sun Jun 25, 2006 8:31 pm

Post » Wed Dec 02, 2015 7:52 am

Like I said, if its brain worked just like a human's, I'd relate to it like I would to another human.

User avatar
Nikki Hype
 
Posts: 3429
Joined: Mon Jan 01, 2007 12:38 pm

Post » Wed Dec 02, 2015 7:43 pm

You said "an artificially built copy of a human brain - would in fact be a human being, no matter the way it was conceived or the 'form' it was put into"

Fact by what? Certainly not science. Making something that replicates something else does not in fact make it that thing. Like I said, race is categorized by certain genetic and biological differences. An android, by race's definition, could not be part of the human race.

User avatar
Ashley Tamen
 
Posts: 3477
Joined: Sun Apr 08, 2007 6:17 am

Post » Wed Dec 02, 2015 12:32 pm

Well, in fact, I said I believe it would be a human being, so obviously I'm talking about my personal perception, not scientific definitions. It would fit my personal definition of what being a human means.

User avatar
Ella Loapaga
 
Posts: 3376
Joined: Fri Mar 09, 2007 2:45 pm

Post » Wed Dec 02, 2015 5:02 pm

Oh alright. Just wanted to clear up that some are actually just making up what they believe constitutes the human race. Got it.

User avatar
Dalley hussain
 
Posts: 3480
Joined: Sun Jun 18, 2006 2:45 am

Post » Wed Dec 02, 2015 4:18 am

I said it in the first thread and I'll say it here. They serve me. I wonder if Threepio can do a decent job of making a plate of sausage and pancakes.

User avatar
Laura Ellaby
 
Posts: 3355
Joined: Sun Jul 02, 2006 9:59 am

Post » Wed Dec 02, 2015 11:56 am

How about something not human, distinct - yet equivalent and deserving of many of the same rights? That's how I'd define self-aware AI anyway.

User avatar
Michael Russ
 
Posts: 3380
Joined: Thu Jul 05, 2007 3:33 am

Post » Wed Dec 02, 2015 10:27 am

There is, however, a difference between a human and a robot.

Those robots were specifically made by humans for some purpose. If one goes rogue then it should be deactivated or whatever. It's not like they are actual living beings.

User avatar
Del Arte
 
Posts: 3543
Joined: Tue Aug 01, 2006 8:40 pm

Post » Wed Dec 02, 2015 6:37 am

That, I don't think humanity is capable of creating, so, no, I wouldn't consider it 'human' per se. But not being human does not mean being lesser.

User avatar
Barbequtie
 
Posts: 3410
Joined: Mon Jun 19, 2006 11:34 pm

Post » Wed Dec 02, 2015 7:00 pm

Maybe the appropriate term would not be "human", as in an organism that is genetically a member of Homo sapiens, but rather "person", as in an individual capable of human-like thought and entitled to similar rights.

And from a scientific standpoint, your thesis is wrong. "Making something that replicates something else does not in fact make it that thing." It can be important to distinguish between artificial and synthetic intelligence. In scientific nomenclature, artificial describes something created to represent a predefined object or process, while synthetic describes something that actually is that predefined thing, but created through human intervention. Think of the difference between a substance like cubic zirconia and synthetic diamond. Cubic zirconia would be an artificial diamond, in that it closely resembles a diamond but does not fit the criteria in its chemical structure. Alternatively, synthetic diamonds can be created in laboratory settings that are carbon atoms arranged in a crystal structure, and therefore true diamonds.

It is impossible to know at this point whether the constructs present in Bethesda's Fallout are artificial or synthetic intelligence, but we do have several clues. First, the androids are called synths, suggesting the beings are computers designed to operate in a way identical to a human brain, but constructed by humans rather than through biological gestation and development. Second, Harkness in FO3 appears to be a synthetic intelligence. Even after a memory wipe, he still demonstrated human-like levels of intelligence: he was still capable of learning, operating, and creating memories, suggesting the memory wipe performed on him was similar to the biological condition of amnesia.

It seems likely that these beings do fit a certain philosophical idea of persons, and their origins do not determine whether they deserve rights (much in the same way most would agree that clones, another variety of synthetic human, would deserve rights).

Lastly, I would caution your use of the word "race". It's nondescriptive and has a history of scientifically dishonest misuse. There is no biological definition of race, unless you are using it to mean species, which is a term with its own issues but would better represent the idea you seem to be communicating.

User avatar
Romy Welsch
 
Posts: 3329
Joined: Wed Apr 25, 2007 10:36 pm


Return to Fallout 4