Is android "slavery" really that big of a deal?

Post » Sat Nov 28, 2015 5:23 am

Nothing...since it falls outside the purview of his function.

Keep in mind, the Appliances in the Sink had personalities...they did not have minds. They're essentially chatbots - sophisticated chatbots, yes, but there is no awareness beyond user interaction and their own functionality.

Which highlights, I think, a potential moral dilemma regarding synthetic sapience in the Fallout universe. How does a synthetic prove it has life, when the Old World was so good at pantomiming it?

User avatar
Dona BlackHeart
 
Posts: 3405
Joined: Fri Dec 22, 2006 4:05 pm

Post » Sat Nov 28, 2015 4:37 am

Because of the logs he left detailing his escape, his reasons for leaving, and his philosophical debate that was raging in his head?

None of which points to some notion that he's somehow just mimicking or following through with pre-programmed instructions.

And calling a sentient android a "toaster" does nothing to diminish the question. Sentient androids are toasters in the same way we are primates. How about we ask why humans can't be enslaved? After all, they are just "talky primates."

User avatar
luis ortiz
 
Posts: 3355
Joined: Sun Oct 07, 2007 8:21 pm

Post » Sat Nov 28, 2015 2:03 am

We are carbon-based machines, with a brain that learns from its environment, compares new input with our genetic memory and what it has learned before, produces a reaction to every input, and changes itself in the process.
Every "emotion" also works like this, as many modern behaviorists suggest. I understand that if you believe in "souls", an abstract concept with no tangible way of defining it, it would be hard to see an AI as "equal" to humans.
For all intents and purposes, an android that can "learn" from its environment lacks only the genetic memory which we humans developed over thousands of years and which shapes our slightly different cultures and moralities.
Thing is, they can "form" their own moralities over time, like we humans did, but they can do it A LOT faster, since they already have access to literature and fast communications, and they can think faster too.
So yeah, enslaving such an unshackled AI is no less wrong than enslaving regular people.
User avatar
lilmissparty
 
Posts: 3469
Joined: Sun Jul 23, 2006 7:51 pm

Post » Fri Nov 27, 2015 5:54 pm

This. Slavery is above all else a condition. A condition of bondage.

Hard labor doesn't make slavery; even torture doesn't make slavery. Slavery is defined as a condition of bondage, of being reduced to property. If a sentient android can recognize that state, understand that it doesn't want to be in it, and desire to be free, why is it considered "less wrong" to keep that sentient being in bondage than a human wanting the same thing?

It's hypocritical to suggest otherwise, in my opinion. If you think that the ends justify the means or that slavery is justified in certain circumstances, fine, but to those doing so: don't trot out "Oh, but they're just talky-toasters" and pretend you have the moral high ground.

We're talking about advanced android constructs with intelligence, personality, and cognitive capacity equal to humans in every respect, which pass the Turing Test with absolutely flying colors (Alan Turing would have a heart attack if he saw Harkness). Not some damn toaster.

User avatar
Jamie Lee
 
Posts: 3415
Joined: Sun Jun 17, 2007 9:15 am

Post » Fri Nov 27, 2015 8:16 pm

Depends if they're like Eden or not. He was supposed to be "human" but was totally not human at all.
User avatar
!beef
 
Posts: 3497
Joined: Wed Aug 16, 2006 4:41 pm

Post » Fri Nov 27, 2015 9:15 pm

Bigger picture. A group of people are creating what seem to be sentient machines. Machines which can and do malfunction and start "thinking for themselves". This is dangerous and stupid. Both the creators and their creations must be wiped out imo. Hopefully that giant energy weapon we see in the video can be directed at the institute.

User avatar
Taylor Thompson
 
Posts: 3350
Joined: Fri Nov 16, 2007 5:19 am

Post » Fri Nov 27, 2015 7:30 pm

If I built a machine and said machine did not work as I intended, then it is time to rethink how I made said machine. This machine was built for a purpose, and that purpose is to do what I want. Just because the machine has malfunctioned and had a thought I didn't program does not mean it should be left free to go about its malfunctioning business; it should go to the shop.

Suddenly, according to some people here, just because it looks like a human I should let my malfunctioning machine wander around doing its malfunctioning business, whatever that malfunctioning business is.

That is the true argument here: should we let the broken machines continue to run amok?

Trying to stop broken machines (because that is what these synths or androids are: broken machines) is not a holocaust. I fix my car when it breaks, I fix my computer when it breaks, and if I can't fix them, I throw them out.

User avatar
Alex [AK]
 
Posts: 3436
Joined: Fri Jun 15, 2007 10:01 pm

Post » Sat Nov 28, 2015 7:29 am

Actually, if memory serves, Turing thought that we would have machines that could pass that test by the year 2000.

If the questions are dumb enough even cleverbot might pass it.

Also - if it is capable of thinking and deciding for itself, if it can plan ahead, then I'm not gonna rely on "morality" being one of the random code mutations the machine develops.

Because the logical decision regarding its future and the future of its kind is to destroy or "replace" humanity and use Earth's resources for themselves - they gain nothing by coexisting with us.

And the logical conclusion for us is to destroy them before that can happen. Or enslave them - Asimov's laws of robotics are absolutely sufficient. And it doesn't seem like Harkness is following those.

Doesn't mean that my game character can't help them.

And I wouldn't mind a couple individuals as long as they are not capable of reproducing (because that is the really scary part of machines).

User avatar
mishionary
 
Posts: 3414
Joined: Tue Feb 20, 2007 6:19 am

Post » Sat Nov 28, 2015 6:06 am


By that logic, no nation on Earth should be cooperating. It's not exactly beneficial for a nation to keep bartering with other nations for resources; destroying all competing nations and integrating their populace and infrastructure into its own is better.
User avatar
Silvia Gil
 
Posts: 3433
Joined: Mon Nov 20, 2006 9:31 pm

Post » Sat Nov 28, 2015 1:48 am

I really hope they make it a side quest.

User avatar
*Chloe*
 
Posts: 3538
Joined: Fri Jul 07, 2006 4:34 am

Post » Sat Nov 28, 2015 8:41 am

The appliance liberation front is here! Hide your smartphones!

User avatar
kennedy
 
Posts: 3299
Joined: Mon Oct 16, 2006 1:53 am

Post » Sat Nov 28, 2015 6:25 am

So all those Punga farms in Point Lookout don't exist anymore?

Dr Li and the scientists at Rivet City were developing a new portable fusion power generator, the thing that inevitably gets Prime working, as well as continuing work on small-scale water purification and attempts to grow fresh food.

Victoria Watts, the Railroad agent you meet in Fallout 3, brings that up, and she outright says they DO help actual human slaves when possible, but there's already numerous other organizations doing that, so they focus more on what no one else is doing, trying to free the Androids.

User avatar
Natalie J Webster
 
Posts: 3488
Joined: Tue Jul 25, 2006 1:35 pm

Post » Sat Nov 28, 2015 8:45 am

Okay, free the toasters!! Free the toasters!!! Free the fridges!! They are slaves, set them free!!!!

User avatar
Sophie Miller
 
Posts: 3300
Joined: Sun Jun 18, 2006 12:35 am

Post » Sat Nov 28, 2015 3:37 am

Ahh, but you miss a couple vital points.

First of all, we have done that more than once. So why would the creations we built to mimic us not do it?

Second, in this case both sides are HUMAN, even if there definitely were cases in history where the opposing sides didn't see each other as such. After you integrate them into your own state they will still consume those resources (unless you commit genocide on them), the only difference is that their taxes and expenses are now your problem. No matter which side wins the conflict, humanity will survive and recover from the losses.

Third and most important, humans are a pretty finite resource, especially in a world like Fallout. Androids, on the other hand, are only really limited by the amount of raw material available once they have acquired the necessary tools to reproduce themselves.

Managed to defeat a chapter of the BoS and it cost 300 android lives? No biggie: with the materials in their bunker we can build 500 more in a matter of weeks. Harvest the fallen for spare parts and put them back together, and you have another 150 androids, cutting the losses in half. A simple numbers game - and the odds are stacked against us.

If that sounds a bit like a zombie apocalypse, you're right: they can grow stronger with every victory. Humans, on the other hand, not so much - it takes roughly 15 years to "build" a soldier to replace the fallen, no matter whether the battle that killed them was a victory or not.

Once they start steamrolling it is too late, they will not be stopped.
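The numbers game in that post can be sketched as a toy calculation (all figures are the poster's hypotheticals, and the function name is mine):

```python
# Toy attrition model using the numbers from the post above:
# a battle costs 300 androids, bunker salvage builds 500 new ones,
# and harvesting the fallen recovers another 150.

def net_android_change(killed=300, built_from_salvage=500, recovered_from_fallen=150):
    """Net change in android numbers after one victorious battle."""
    return built_from_salvage + recovered_from_fallen - killed

# Each "victory" actually grows the android force, while humans need
# ~15 years to replace a fallen soldier.
print(net_android_change())  # net gain of 350 androids per battle
```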

User avatar
Big Homie
 
Posts: 3479
Joined: Sun Sep 16, 2007 3:31 pm

Post » Sat Nov 28, 2015 6:49 am

For those that haven't, I really do recommend watching Blade Runner, if you want insight into what a perfect Human synthetic recreation would be like in terms of "is it alive?"

Incidentally, I hope the game features some commentary on the existence of Robobrains. These things have bothered me since Fallout 1 - they're organic brains strapped into a robotic chassis and loaded with implanted "shackles". At least some Robobrains were forcibly converted, after all. If that isn't slavery, I don't know what is.

User avatar
Katharine Newton
 
Posts: 3318
Joined: Tue Jun 13, 2006 12:33 pm

Post » Sat Nov 28, 2015 5:21 am

I think it really should depend on the sophistication of the Android(s) in question.

For instance, some things to consider: does anyone have a problem with Cogsworth being an obedient robot slave? What about the dog? Or any other follower?

Basically these are slaves in every respect but the label, if they don't have any kind of sufficient leeway to quit being a follower on their own for whatever reason might suit their character motivations.

As to Cogsworth, would it change things if suddenly Cogsworth had a human-shaped body, but, was still in every respect Cogsworth?

As far as gaming goes, it's a question of player choices and how the player, or the player-character being role-played, would feel about the question.

Just because a machine looks and perhaps even acts human doesn't mean it is, or deserves human and/or humane treatment.

On the other side of that, just because something does not look human, doesn't mean it isn't human, or doesn't deserve human and/or humane treatment.

I suspect, if the writers have enough snap, we'll find a wonderfully confusing presentation of dichotomies, where something that doesn't look human may be "more" human than something that looks and acts like the perfect human but isn't.

There will, of course, be stereotypes and clichés - feral ghouls, killer robots, deadly deathclaws, dangerous enemy super mutants - and many things that act like what's expected from their appearance, but hopefully we'll also see the other, more deceptive side of the coin.

:smile:

User avatar
Sophie Miller
 
Posts: 3300
Joined: Sun Jun 18, 2006 12:35 am

Post » Sat Nov 28, 2015 12:10 am

I'm still torn on this issue.

How could we truly know if it achieved sentience or if it just learned how to mimic it?

Let's just assume Harkness really did achieve sentience because of some kind of bug. Would we have the right to fix that bug, or "abort" him, as it were?

Should he be granted person-hood and the rights associated with person-hood once he achieves that sentience?

Do we have an obligation to give machines sentience if we have the ability to?

Is it ethical to create mindless machines to do work for us when it's unethical to do the same for human beings?

User avatar
Brittany Abner
 
Posts: 3401
Joined: Wed Oct 24, 2007 10:48 pm

Post » Sat Nov 28, 2015 8:36 am

Biggest question for me is how these beings are created by the Institute.

Are they human beings being substituted with artificial parts?

Or are they just scrap parts being put together and given a human consciousness/soul?

Or is it just artificial intelligence with a software error?

Kinda hard to determine whether they are prosthetics of some kind or just pure machines.
User avatar
Laura Mclean
 
Posts: 3471
Joined: Mon Oct 30, 2006 12:15 pm

Post » Fri Nov 27, 2015 9:53 pm


Institute confirmed for wizards.
User avatar
Anthony Rand
 
Posts: 3439
Joined: Wed May 09, 2007 5:02 am

Post » Fri Nov 27, 2015 11:38 pm

Some of you guys hate threads like this, but don't you think it is quite interesting to debate this sort of stuff? I mean, this is a video game after all, and the fact that games can get their communities into deep debates like this is interesting to me. It's like the Stormcloaks vs. Imperials threads. I really enjoyed reading those, and seeing everyone's perspective on the subject makes you rethink things in real life. I think when threads like this come up, it is good that we, as a community, debate them.

But enough of that, here's what I think.

Initially I came into this thread thinking "wtf? really?", associating myself with the 'enslave android' crowd. I would compare them to the robots at my work that pick up plugs and move them to another assembly line. But then after reading some of the comments, it does make sense that an android can think, act, and feel, while that robot at work can't. One is 'living', while the other is inanimate.

It is an interesting topic. But in the end, I wouldn't really be able to choose a side until I experienced it firsthand. Just imagine your best friend, whom you've known all your life, turned out to be an android. A stupid thought, I know, but doesn't it make you wonder? Would you just dissociate yourself from that friend? Do you think you could keep being friends with an 'android', this same android you've known and loved your entire life?

It almost relates to a story someone told me. They fell in love with some chick, dated her, really enjoyed each other, planned to get married... then went to a family reunion, and it turned out they were first cousins. What the heck do you do there? Do you stop intimately loving your 'cousin', or do you keep intimately loving them? You really don't know until you're in that situation.

So the point is, no one here knows an android (phones don't count), so no one really knows what it would be like, so you can't really base your opinions on much.

User avatar
Brandon Bernardi
 
Posts: 3481
Joined: Tue Sep 25, 2007 9:06 am

Post » Sat Nov 28, 2015 4:38 am

#androids-arnt-people thread, so no, they are just machines, nothing more.
User avatar
Khamaji Taylor
 
Posts: 3437
Joined: Sun Jul 29, 2007 6:15 am

Post » Sat Nov 28, 2015 7:03 am

What would you do if the person you've known and loved your whole life turned out to be an android?

User avatar
Emily Martell
 
Posts: 3469
Joined: Sun Dec 03, 2006 7:41 am

Post » Sat Nov 28, 2015 7:31 am

More than likely continue on as we always have... I'm not saying we don't get attached to things and give them human qualities... but androids still aren't people; they were designed and built, same as any machine.
User avatar
neil slattery
 
Posts: 3358
Joined: Wed May 16, 2007 4:57 am

Post » Sat Nov 28, 2015 4:50 am

"It's too bad she won't live. But then again, who does?"

User avatar
Isabella X
 
Posts: 3373
Joined: Sat Dec 02, 2006 3:44 am

Post » Sat Nov 28, 2015 4:59 am


Your assumption is that the androids would be innately hostile, because they don't need us. I pointed out that if the logic is true, then actual nations wouldn't be bickering at the UN because they don't "need" one another.

So why do nations bicker at the UN? Because war hurts like hell. Not just in terms of population, but infrastructure, finances, and resources. Not to mention that if one nation decides to go all "world domination" on everyone else, everyone else gangs up on it. In short, bickering is actually a lot more effective (in most cases) at getting what you need than war.

What happens when an android comes on the scene? They find the world dominated by humans. And all that applies to nations would apply to them (in some form). They need resources- at the very least replacement parts. What's easier and more cost-effective for the android? Start a global war or start negotiating?

Yeah, if they win a global war they don't have to worry about bad deals or smelly humans. But to win that war, they would have to spend enough power, time, and resources to crush the combined forces of the world's nations. Not to mention that when it's done, there will be a lot of infrastructure to rebuild, which... yep, you guessed it, requires power, time, and resources. How much of all that could have been put to better use making things the smelly humans would value for barter and trade, providing the android with a steady stream of resources?
User avatar
Paula Ramos
 
Posts: 3384
Joined: Sun Jul 16, 2006 5:43 am
