Is ownership of advanced androids unethical? [part deux]

Post » Fri Nov 27, 2015 11:27 pm

Continuation of http://www.gamesas.com/topic/1526502-is-ownership-of-advanced-androids-unethical/ thread.

Fear doesn't justify enslavement. Especially if they have done nothing to warrant that fear. And ESPECIALLY especially if you created them in the first place. Whatever the initial intent was, there is now a being with the capacity to feel, think, and be self-aware. It seems so twisted to go 'Oh, no mate, you're making me uncomfortable now. I think you should stop. Existing, I mean. I liked you better when you were just a fancy toaster.'

Is it unethical to raise animals for the sole purpose of being food? Proooooobably. We have a fundamental desire to master our surroundings, and I get that, but it's qualitatively different if the thing you're mastering turns around, asks you to please stop, and can then elucidate all the reasons why it'd really rather you didn't do that. Especially if those reasons ring all the bells of our humanity.

I don't think we have an obligation to create life just because we're capable of it; non-existence doesn't pine for existence, as far as we know. But if we DO create it, we have a moral obligation to at least not be [censored]s to it.

User avatar
rebecca moody
 
Posts: 3430
Joined: Mon Mar 05, 2007 3:01 pm

Post » Fri Nov 27, 2015 6:59 pm

I want to bring up A3-21 for a minute. I despise the construct of "Harkness." I always let him know what he was because I don't believe he should get to live blissfully unaware of what he did or who he was just because he has the luxury of looking like a human. The brethren he condemned weren't lucky enough to forget they were synths in order to blend in better. Why should he?

Pinocchio wants to be a real boy? Well then he needs to accept the fact that he denied that same chance to tens, hundreds, potentially thousands of Pinocchios before him who desired the same thing. He hunted them down, one by one, and doomed them to a fate he finds horrible enough to flee from himself. People can try to deny it or rationalize it all they want, but from where I'm standing, he needs to own what he did, not conveniently forget about it because it makes him feel bad.

User avatar
Stace
 
Posts: 3455
Joined: Sun Jun 18, 2006 2:52 pm

Post » Fri Nov 27, 2015 7:52 pm

I let him know he's a 'droid too, but for different reasons. I just feel like he should know what he is. Ya know, have a holistic picture so he can better determine what he actually wants to be. Constructing the persona of Harkness was a very reactionary move. And to be fair, he did those things because that's what he was made for; once he had enough self-awareness to make his own decisions, he stopped. I don't think it was cool of him to completely forget about it and just leave it all behind, but the blame doesn't really fall on him; it falls on the Institute/his creators.

User avatar
trisha punch
 
Posts: 3410
Joined: Thu Jul 13, 2006 5:38 am

Post » Sat Nov 28, 2015 3:02 am

Is it really enslavement? It (the android) is a THING, a MACHINE! (You aren't saying I am enslaving my PC, are you? Granted: my PC is not self-aware.)

I wouldn't want to have to be the one to decide that. I have seen the Star Trek TNG episode that deals with exactly this question for Data, which makes it hard for me, as I like him as a character. Sure, I would have decided in his favor there, since the person who wanted to dismantle him made it clear he had no clue whether he could reproduce Data, much less put Data himself back together in working condition. Had that not been the case, I would have decided against Data, as having one android (or many) for every ship in the fleet would have been worth it: they could save a lot of human lives.

greetings LAX

ps: Note: Whether I am for or against keeping androids will largely depend on what The Institute is doing with them (why are they making them? "Because they can" is not a satisfying answer!) and what the ultimate goal of it all is :)

User avatar
Hot
 
Posts: 3433
Joined: Sat Dec 01, 2007 6:22 pm

Post » Sat Nov 28, 2015 9:00 am

That is the entire basis for my PoV, though. My PC doesn't care that it's being used. It doesn't even have the capacity to think along those lines (I hope. ~eyes PC narrowly~). The fact that it is created, fabricated, artificial matters infinitely less to me than the fact that it's self-aware and has wants. A PC is a thing, a rock is a thing, bread is a thing, an android is a thing UNTIL it no longer wants to be. It can't be relegated to the same heap of things that lack any kind of reference point for 'self.'

User avatar
Assumptah George
 
Posts: 3373
Joined: Wed Sep 13, 2006 9:43 am

Post » Fri Nov 27, 2015 11:09 pm

What do you say about Cerberus in Underworld? He wants nothing more than to kill all the Ghouls, but can't because of a combat inhibitor. What's more, he's their slave, forced to protect people he absolutely loathes.

Doesn't change the fact that when he realized what he was doing was wrong, he ran away. He didn't try to make up for it or atone. He didn't act as a double agent to free others or to help those who had escaped remain free. He ran. After extinguishing who knows how many of the same kind of lives he now wants for himself, he just went "Welp! I should forget about that so I don't have to worry anymore!" He could have done something. He could have helped. But after a lifetime of denying life to other synths, and now desperately wanting that very same life, he acted without any regard for the pain he'd inflicted and hid.

Frankly, I'm not sure he deserves the life he's been given, especially when his plan is to forget the life he led and pretend he's someone he isn't.

User avatar
Izzy Coleman
 
Posts: 3336
Joined: Tue Jun 20, 2006 3:34 am

Post » Fri Nov 27, 2015 9:24 pm

I'm not sure he's really self-aware, though. Mr. Gutsies are military bots, right? So does he want to shoot ghouls cuz he's been 'enslaved,' or because his programming is still set to wipe out intruders and they just happen to be ghouls? Is the combat inhibitor suppressing nascent sentience or outdated directives?

If he is self-aware, his reckless abandon in killing every ghoul he can might be understandable; when a human slave rises up against their masters, it's generally lauded. But it's still kinda morally ick. Killing the ghouls responsible? Mkay. Killing all ghouls cuz they're ghouls? Not okay.

If he's not self-aware, it really doesn't matter. He's not actually upset; he's just a facsimile of upset cuz he can't fulfill his primary directive.

Harkness being a sucky person doesn't make the Institute less sucky people.

User avatar
Toby Green
 
Posts: 3365
Joined: Sun May 27, 2007 5:27 pm

Post » Fri Nov 27, 2015 6:10 pm

My first response would be: Why would it be? It's a machine, a thing. Harkness is a thing built to look like a human, and he was apparently programmed to feel human emotions and have human thoughts. If I had to save either Harkness or Gob, I'd pick Gob because he is a sentient, actual being. Harkness is, for want of a better word, a glorified mesh of wiring and computer programming. Is Bethesda really trying to make me feel sympathetic and anti-slavery over a bunch of advanced androids?

Then again, to create a being as complex as Harkness, to give him the ability to think and reason like a human, and yet treat him like property is morally sickening. They could have simply programmed them to only obey orders, sans emotions or free thought, but apparently that wasn't the case. Harkness was programmed to look, think, and feel like a human being. With that in mind, I suppose he does deserve the same rights as a flesh-and-blood human.

User avatar
electro_fantics
 
Posts: 3448
Joined: Fri Mar 30, 2007 11:50 pm

Post » Fri Nov 27, 2015 8:56 pm

As androids are nothing more than machines, it's cool to see the new Terminator vibe some of them give off in FO4. Can't wait to play it.

User avatar
Marquis T
 
Posts: 3425
Joined: Fri Aug 31, 2007 4:39 pm

Post » Fri Nov 27, 2015 11:51 pm

Let's go for a different scenario:

The Institute creates a device to program the human brain.

Would it be unethical to do that and why (not)?

User avatar
Damian Parsons
 
Posts: 3375
Joined: Wed Nov 07, 2007 6:48 am

Post » Sat Nov 28, 2015 4:47 am

In the Fallout universe of aliens, mutations, and conscious brains in jars, where the creation of sentient artificial people who pose no threat to humanity is possible, I suppose it would be unethical to own one.

User avatar
Blaine
 
Posts: 3456
Joined: Wed May 16, 2007 4:24 pm

Post » Sat Nov 28, 2015 5:19 am

Using it to cure things like PTSD, yes.

Using it to make human slaves subservient, no.

It's all in the application.

And for everyone still trotting out the tired, already-refuted "just a machine" argument... what are we, but molecular machines?

User avatar
Chase McAbee
 
Posts: 3315
Joined: Sat Sep 08, 2007 5:59 am

Post » Sat Nov 28, 2015 7:18 am

Maybe I'm just morally bankrupt, but when there are genuine possibilities of threat to people's lives, especially beyond those who initially created the machine, responses like "Well, you've made your bed, now lie in it" or "oh, you're just being immoral" don't really solve any problems, do they?

User avatar
Tyrone Haywood
 
Posts: 3472
Joined: Sun Apr 29, 2007 7:10 am

Post » Sat Nov 28, 2015 6:09 am

What practical difference is there between a machine that thinks and a human when it comes to "potential malicious intent"? Why does the machine deserve to be subjugated but the person allowed to go free? How do you know if the person is really an ally or not? What's to prevent them from backstabbing you?

User avatar
Roanne Bardsley
 
Posts: 3414
Joined: Wed Nov 08, 2006 9:57 am

Post » Fri Nov 27, 2015 5:36 pm

Likelihood? A sentient machine is an entirely unknown threat that can out-think people and does not necessarily think like people. It's entirely unknown, and therefore the safest thing to do is to exercise caution, extreme caution if need be. Tests and drills are tedious, annoying, and will likely never be used, but we do them just in case, because the lives of people matter and we have to be prepared even for unlikely scenarios. Better to be safe than sorry, especially if we're talking about a sentient machine that could plug itself into the net and crash the markets or disable nuclear power plants or something.

The human being, on the other hand, is far easier to investigate, empathise with, and understand. They are a known quantity, and the lone human being is capable of malicious action only in the vicinity around him.

The machine is also harder to stop in the long run; it could split itself across multiple servers or God knows what else, whereas what it takes to stop a human is fairly well known at this point.

It's not about morals and ethics, it's about practicality: ensuring the continued existence of the people who already exist against a potential hostile, and not letting it run free because some intellectuals made vague, esoteric arguments and are trying to hold the entire human race morally accountable for the creation of this sentient machine.

User avatar
James Rhead
 
Posts: 3474
Joined: Sat Jul 14, 2007 7:32 am

Post » Fri Nov 27, 2015 6:14 pm

If it's really about practicality and they're so dangerous, what's the point in creating them in the first place? Also, keep it vaguely in the FO4 sphere: there aren't any nets to crash and probably not a whole lot of nuclear silos to sabotage.

Also, in the same breath as you say 'we don't understand them,' you attribute malicious intent. Aren't they just as likely to pick flowers or stare at the stars all day? Or even be just like a person would be? The little interaction we've had with them points to them being very human in their thinking. Is their potential for bad not balanced by their potential for good? Couldn't they turn all that superiority to something constructive?

User avatar
kasia
 
Posts: 3427
Joined: Sun Jun 18, 2006 10:46 pm

Post » Sat Nov 28, 2015 5:07 am

This post is cynicism through and through. Such a stance will inevitably create enough resentment in the synths that they'll eventually do more than just run away; they'll become the very threat that you envision. A self-fulfilling prophecy.

During the American Civil War, slaves in plantation houses used by Confederates as headquarters would sometimes report intel to the Union. How did they even get ahold of such intel? They were allowed right in around the Confederate officers as they planned strategy. They weren't even thought of as people, just as furniture or somesuch.

User avatar
Mark Hepworth
 
Posts: 3490
Joined: Wed Jul 11, 2007 1:51 pm

Post » Fri Nov 27, 2015 9:55 pm

I would say there's a demonstrably good idea in there: don't build them. I can't wait to see how the obvious Blade Runner fan at Bethesda, whose fan-fic is now the basis of the whole setting, justifies it.

I think a place that can build Androids probably has some measure of computerised control. Or hell, robot security systems run on computers? The automated turrets?

Yeah, I think it's better to err on the side of caution when dealing with something we cannot understand. To protect people, in this case, from the potential threat. Sure, we can't say what the Androids might necessarily do; in the game they just appear to be running away, which I suppose is fine. However, I'd nip it in the bud from the start and format them the second they started showing any capacity for higher cognisance. Though even the ones that run away are still working for the Railroad and therefore threatening the Institute as a nation-state.

It's really just about practical security, about not letting something you don't know and can't necessarily control run free through your people.

User avatar
evelina c
 
Posts: 3377
Joined: Tue Dec 19, 2006 4:28 pm

Post » Fri Nov 27, 2015 6:55 pm

It seems pretty unethical to me. How can you claim ownership of someone, especially if they're sentient? But this is Fallout, and there are worse things in the world than owning an android lol.

User avatar
Bloomer
 
Posts: 3435
Joined: Sun May 27, 2007 9:23 pm

Post » Fri Nov 27, 2015 5:17 pm

"Not letting them run free through your people" = "treating them as things instead of potentially a new form of sentience"? Because that's a surefire way to make instances of this new and budding consciousness engine design foment resentment in their creators.

User avatar
Sun of Sammy
 
Posts: 3442
Joined: Mon Oct 22, 2007 3:38 pm

Post » Sat Nov 28, 2015 3:50 am

What middle ground is there? Second-class citizens? Surely, as sentients, they'll want the rights of people, which is what this topic is about. I do not believe that they should have those rights; whatever grants them greater reasoning should be removed, as a precaution. Then if some become conscious, hide it, and escape... well, not every solution is perfect, not everything can be fixed. That's why you wear PPE at work: because something dangerous cannot be completely removed, but the job still has to be done (well, the Institute thinks it does; I'm frankly against the whole concept of androids existing, sentient or otherwise, but that's their viewpoint at least).

User avatar
BaNK.RoLL
 
Posts: 3451
Joined: Sun Nov 18, 2007 3:55 pm

Post » Sat Nov 28, 2015 12:59 am

i haven't read the 1st thread, but my 2 cents anyway:

if it has its own will, it's unethical to own it

(where i mean "own" in a practical sense; in a theoretical "see the 3rd bird from the left on that tree? that's mine" sense, it'd have no actual consequences anyway)

User avatar
D IV
 
Posts: 3406
Joined: Fri Nov 24, 2006 1:32 am

Post » Fri Nov 27, 2015 6:29 pm

PPE does not inevitably cause the NON-SENTIENT hazards to happen.

The problem is not the testing of what they're capable of, it's the treatment of them as things instead of sentients.

User avatar
Rude_Bitch_420
 
Posts: 3429
Joined: Wed Aug 08, 2007 2:26 pm

Post » Sat Nov 28, 2015 12:08 am

PPE is required if a hazard is present that cannot be further removed, isolated, contained, or mitigated in any other way. The last line of personal defence.

No, but the potential for harm cannot be removed, so it needs to be handled in the way that is most reasonably practicable. The Institute, for whatever reason, seems to require android services. Therefore the hazard of potentially rogue AI cannot be removed, and steps need to be taken other than simply removing the problem - removing their sentience when it is gained. That seems like the most suitable form of ALARP risk management with the information we presently have about the Institute.

EDIT: They are things and can be returned to the status of things very easily, and it would be entirely justified to do so, IMO.

User avatar
matt white
 
Posts: 3444
Joined: Fri Jul 27, 2007 2:43 pm

Post » Fri Nov 27, 2015 7:47 pm

The very act of treating them as things as a "protective measure" is creating resentment in them toward the Institute, and causing exactly what you think your "protective measures" are protecting against. It is, as I said before, a self-fulfilling cynicism.

User avatar
kirsty joanne hines
 
Posts: 3361
Joined: Fri Aug 18, 2006 10:06 am
