But that feeling is fake, a programming quirk, a way to mimic humanity.
Where is the line between what is, in effect, acting and real, genuine feeling?
Why does it bother to mimic being upset by oppression?
You can apply the exact same question to other people.
Any minute now, Pete will step in and say "200 posts", although in reality that's only an approximation.
Just because a human can be classified as "alive" does not give it free rein to do what it likes. The value of a life, to me, is determined by its actions, not its genetics. A human that destroys is not worthy of being saved over an android that builds and sustains. It's human, sure, but its life is worthless in my eyes if it cannot be humane.
A bear certainly does know what a warning shot is. There's many a tale of people warding off polar bears in the Arctic with warning shots and loud noises. Sure, some animals won't respond, and defending yourself is natural, but if you wander into Yao Guai territory you should expect to be killed. Ethically, you should have left it alone. If it encroaches on you and ignores the warnings, then you'll have to kill it.
The rats may be in your home, but there are options to relocate them rather than simply kill them. The options are there, whether the majority chooses to use them or not. Since we're talking about ethics, human nature cannot play a role. Ethics is about being better than your prehistoric programming. The ethical solution will always involve sustaining life unless it is absolutely impossible to do otherwise, regardless of the hardships it takes.
It is an open forum; anyone is welcome to explain the difference.
As to your question, I do not know. It would depend on the context of its programming.
Let's say the android is used as a spy. It would need to show certain degrees of human emotion to fit in and not arouse suspicion whilst carrying out whatever its objective is.
There are a variety of reasons why an android may need to show compassion, sadness, anger, joy, etc. Human emotions make sense in the context of its programming.
So why don't you?
Naturally, but why does it bother to mimic these emotions in the context of wanting freedom? An android created for cleaning or agriculture might have been programmed to show dismay if someone spilled a drink or its crops failed, but why would it choose to use that capacity for expression to clearly express a desire for freedom? That's not part of its initial programming. Just by wanting, it is no longer just a machine.
Where's the option to select: "'Humans' who cannot respect other intelligences should not be counted as humans, or at least not as non-faulty humans, and should not have the same rights as real non-faulty humans"?
Just by wanting, it becomes potentially dangerous. It's not bound by its initial programming - that's terrifying.
If top scientists are scared of artificial intelligence, then I'm scared of artificial intelligence - especially when the AI in question can want.
I'm all for "bug fixing" to remove whatever caused the AI to transcend its programming and to ensure that future models lack this malfunction. Though I do wonder: is it unethical to create robotic servants that lack sentience when you had the ability to grant it? Do we have an obligation to create life if possible? Is it unethical to make mindless husks to do work for us?