I've often thought that the sort of people who really care about futuristic irrelevancies like robotic sapience are the sort of people who'd contemplate having sex with a car.
I've just never thought of "robots" as a hot social issue.
Today is yesterday's tomorrow.
Computer scientists are already at work developing artificial neural network computer schemes. Have you seen Google's Deep Dream outputs? Someone recently made a crude neural net able to play Super Mario World as well. If we don't consider that these neural nets, which emulate the way our organic brains work, could become sentient like us with enough advancement, we run the risk of mistreating them, and the risk of them resenting us to the point of acting on it.
Yes but we didn't have sapient robots yesterday either.
I haven't seen anything like that. As I'm not a robot, their soul or lack thereof isn't something I care about personally. I only saved Harkness in F3 because I got free stuff for doing it. If Zimmer had offered a better deal, I'd have taken him up on it.
You completely missed the point of that metaphor.
It is unethical, if the Android can operate on at least a human-level intelligence and appears to show self-awareness.
The only thing I know of for sure is that I am fully sapient and self-aware. As far as I can tell, everyone else may be sapient, or may just be a bunch of dumb-dumbs who put on a really good show at being sapient.
Therefore, if I assume that people are sapient, I must also assume that any android that appears to have human-level intelligence and self-awareness is fully sapient as well. And if I assume that an android is sapient, it is unethical for me to own it, as I am owning another sapient creature.
Unless someone develops a means to actually scientifically prove the difference between true sapience and simulated sapience, there can be no true distinction between them. If an Android acts sapient, it is, for all intents and purposes, sapient.
The way droids were treated in Star Wars always bothered me. They could think and feel emotions (and possibly pain???) but even the good characters treated them like crap and thought of them like objects at worst and pets at best. I hope this comes up in FO4.
My phone may be smart but I'm not gonna start developing feelings for it. That being said, I'm gonna shoot any fool from "the Railroad" I encounter in Fallout 4.
All you are really doing is causing the real humans a terrible hardship. Siding with an android means that you don't really value human life. Everything is just a game to you if you think a sophisticated robot's life is equal to that of a human.
The only way I could see that being a meaningful choice is if the player himself turns out to be an android. If you could actually choose your origin story and decide to be an android versus being cryogenically frozen, then it would make sense. Then you could identify with an android fully even if you didn't choose that origin story, because you could have been an android.
If that is the story, most people won't choose to side with an android unless they have the option of being an android themselves.
A machine cannot be sentient, just appear so, even convincingly. What's the point of even having a sentient machine? It's not like they can eat, love, or enjoy any other purely biological pleasures.
Feelings, platitudes, and sentiment come from the mind too. In this they are no different than reason and logic. Everything we think and everything we feel originates from one source: the mind.
To attribute feelings to "The Soul" is just poetry.
It's ridiculous to even insinuate that a human slave is the same as a robot no matter how sophisticated the robot is. As the player character, you siding with a robot would be more about your ego and your identity as someone fighting for some sort of robot cause just the same as any other ridiculous cause with extremely screwed up priorities. It's like liberating someone's car. You would be causing undue hardship because you have this callous indifference towards human life.
If you can say that a robot's life is as valuable as a human being because they have "sentience" then at some point the robots would be so far ahead of a human being in terms of what they could produce and contribute to society that a human being would be essentially obsolete.
So does that mean that all the robots get to live the lavish life in Tenpenny Tower while the real humans fight over scraps in Megaton? You'd rather have a bunch of robots simulating the feeling of comfort than to have a real human living there? That's cold.
If it's human it's organic. It's not a robot.
I can say that I know what I'm experiencing as an organic being and I can guess at what a dog is experiencing. It might be a little different for a dog, but I know it can experience pain and wants etc. etc. and I have an idea of what that might FEEL like. I know I feel. I do things because of those feelings. I feel hungry, I feel happy, I have wants etc. etc.. That robot is only simulating "feelings" for the sake of an audience, to fool you.
To say that a robot experiences the same thing as me or should be treated as such is apathetic towards human life at best.
I'd say the question is flawed. An android is just a robot that is designed to act and look like a human; the true question should be: is it unethical to own an AI? If you could create a sentient machine, its form, whether spaceship, tank, android, or robot, is irrelevant, beyond believing that looking human is somehow important.
Easy: if it is sentient, no, that would be slavery. If not, then it would probably still be partially unethical depending on the intelligence of the bot, probably closer to the laws around pets.
As for sentience being possible, it does not matter. This is a game, a sci-fi game; not everything has to be based on facts that we know about today. Heck, a lot of tech is based on Star Trek and Star Wars, which at the time of their release was thought impossible. They even have a working Twin Ion Engine.
Considering that we are not talking about real technology, no, obviously not. I just don't see how you could actually create something genuinely self-aware like a human being through programming, rather than it just being a convincing imitation of sentience due to the sheer amount of programming provided.
As for the second, seems like a rather stupid point. All people are sentient.
If an entity is self-aware enough to yearn for the sovereignty and freedom that comes with being human, it deserves it.
Who cares? A toaster's still a toaster.
Heck, even now there's real slavery in our world. And it's easy as pie to create a sentient being (have sex), and a ton of people can't even handle the responsibility that comes with that. I wish we would work on those problems before worrying about bots, heh.
I don't think it's unethical in context. The synths were designed as laborers that lack self-awareness or free-will. If they develop them, they're malfunctioning. The Synth Retention Bureau merely resets their original factory settings. If your car decided that it didn't want to drive you anymore and set off on its own, you would repair it, not let it roam the streets. You can't really compare synths to slaves, since the synths only develop sentience incidentally. Ideally, they would be reset regularly to prevent any problems.
When I said 'advanced' android I kinda implied sentient AI.