I can read, can you? This thread is about whether Synths deserve rights. Whether or not you consider synths someone or something has quite a bearing on the subject.
Or were you just looking to lash out at someone?
So stop calling them "Human Rights" and just call them "Rights".
This is a logical fallacy known as "Appeal to Consequences."
-signed, the logician.
A self-contained application created on a phone could theoretically be sentient, in which case some believe it should (must) be granted rights. Notice the important characteristic here is not "application on a phone" but "sentient." A robot that mimics a human but has no personal thought is distinct from a robot that mimics a human and also has personal thought. We're discussing the latter.
Lol it's recall code.
I am just making the point that if you break down this conversation, there has to be a point at which we realize:
1. Synths are machines created by humans.
2. Their programming, albeit complex, was created on a computer by humans.
3. They were made for the sole purpose of serving the Institute.
4. It was an unintended by-product that they have dreams/personalities/etc.
And this unintended by-product is exactly why there's a moral and ethical question. Just because it was an accident doesn't mean you can ignore it. Hell, most of us are unintended by-products. Ask your parents.
Um, do you have a Pavlovian reaction to "power down" when you hear a string of numbers?
It's different because it was hard-programmed and CANNOT be overridden.
I think this is the big difference here: the way a synth is hard-programmed cannot be overridden by that synth, while humans have the true free will to change the way they think.
No matter how much a synth doesn't want to be an Institute creation, it's still an Institute creation.
Agree up to point 4. I think various dialogues, terminals, etc., make it clear it was not an "unintended consequence" to make synths as much like real humans as possible. Indeed, that was the whole point of Gen 3s.
Combine this with the fact that most Institute scientists adamantly maintain that these things they've created while striving for the goal of a "synthetic human" are NOT human, and you have the essence of the despicably unethical "mad scientist," which is exactly how Bethesda meant them to be.
I might. It depends on who instilled the response. And how do you know that it cannot be overridden? There are exceptions to every rule, and I am sure there is or will be a synth that can override its own programming.
Of course, but the brain is a bio-computer designed by nature (God/higher power if you like) over the course of thousands and thousands of years. Not something someone built (possibly out of bio material, but probably not in this instance) over the course of 250 years. That synth brain started as straight-up programming. Lights and clockwork. Whatever it upgraded to doesn't matter. Pointing that out is like telling us that we are drinking H2O instead of water.
Did you, and everyone in this thread, not understand what I was saying? Sorry I used a meaningless phrase to describe something that's complex.
Metaprogramming is the writing of computer programs with the ability to treat programs as their data. It means that a program could be designed to read, generate, analyse or transform other programs, and even modify itself while running.
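If a concrete toy helps, here's a minimal sketch in plain JavaScript (my own illustration, not from the quote above): a program holds a function's source as a string (code as data), runs it, then rewrites that string and rebuilds the function. In other words, it modifies part of itself while running.

```js
// The "program" starts life as a plain string, i.e. as data.
let greetSource = "return 'Hello, ' + name + '!';";

// Generate a runnable function from that data.
let greet = new Function("name", greetSource);
console.log(greet("synth")); // Hello, synth!

// Transform the source (code as data) and regenerate the function:
// the program has now modified part of itself at runtime.
greetSource = greetSource.replace("Hello", "Goodbye");
greet = new Function("name", greetSource);
console.log(greet("synth")); // Goodbye, synth!
```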
It's still based on something. I don't call a french fry a metapotato. The "computer" is still using data points from other programs and inputs.
I say that based on the convo in Robotics where the guy in charge of creating synths is telling the other guy, who thinks they have dreams, that he is a kook. Yes, it is probable that they do have dreams, but it was not intended in their creation, per the head of the facility that creates them, heh.
Not to mention they all treat synths as though they truly do feel (as in pleasure, pain, etc.) but not as though they have true feelings (happiness, sadness, love, hate, etc.).
So from all the evidence I have gathered, it seems that they wanted a being that seemed human enough to fool other humans when they made the Gen 3s. The funny thing is even the Railroad doesn't care at all about Gen 1s and will happily gun them down all day. What if the Institute started putting the complex programming of Gen 3s into Gen 1s as well?
What's amusing to me is that this is almost the exact argument that I had in college with my Philosophy professor when the question was: "Is the human brain a computer?" I argued the affirmative.
He hated it, but I still got an A+.
Emphasis mine in your quote above.
The practical implication of this is that the generated (not generating) program is outside of our control. Currently, this is not strictly true in most uses, but there is actually no known theoretical reason why it can't be true. This detail is in fact why Elon Musk spends so much money and press time on the problem, by the way. EDIT: I just realized you googled metaprogramming. It turns out there are a lot of implications that aren't covered in the blurb. Heads up. Signed, the comp sci.
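To make "outside of our control" a bit more concrete, here's a toy sketch (mine, in plain JavaScript; the weights and names are made up): you write the generator, but the body of the function it emits depends on runtime data, so the final program was never typed by anyone.

```js
function buildScorer(weights) {
  // Emit source text for a brand-new function. The author of this
  // generator never wrote the emitted function's body by hand.
  const body = "return " +
    weights.map((w, i) => `${w} * inputs[${i}]`).join(" + ") + ";";
  return new Function("inputs", body);
}

// Suppose these weights came from some learning process at runtime...
const learnedWeights = [0.4, -1.2, 3.0];
const scorer = buildScorer(learnedWeights);

console.log(scorer([1, 2, 3])); // 7: arithmetic nobody typed directly
```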
The difference between a french fry and a metapotato is that french fries don't wander off into the kitchen to invent their own potatoes. That's why you don't call them that.
As for your other points: when you use "1's and 0's" as if it diminishes the product represented by it, don't be surprised when it's pointed out that everything in the world could be represented by it. The only reasonable point you could be making is "nothing really matters!" But that's already been covered in this thread, too. Also, what's the principle that privileges accidental creation over intentional creation?
Ah okay. Good point. I was not quite catching the issue that was being identified as "unintended."
You're right, it's arguable whether the human brain could be classified as a computer. It is not arguable, however, that synth brains were created by a human scientist, not grown, and therefore have nothing more in them than was put in them, regardless of their capacity for thought.
Humans can argue their existence and whether or not they were created. But synths were created in a lab by a human, and cannot logically do any of those things unless there is a problem with their logic programming lol.
Then by your thinking, Synths are Sentient and Sapient. I say this because of the discussions you have with Nick as a companion on this very topic. Nick isn't even a Gen 3, and it is clear he understands the ramifications of implanted memories and what makes him "alive." If he can debate his own existence, then that tells me he is self-aware. By the way, I do agree with your definitions here, and they are well said.
Being only in my second semester with C++ and Java (plus some previous stuff w/ Javascript, HTML, etc.), I'm way far from that level, but I SO would love to be involved in writing that code!
Guess I have a bit of mad scientist in me after all . . .
It shouldn't, if you believe in truth (not being snide; that's an actual arguable question). Fallacies are untrue by definition. If you don't think "untrue" is a hindrance, then by all means.
Total aside (so I can show off, too!) - start with transpilers. Or, if you know javascript, you can write a node.js app that generates client-side javascript. Congratulations, you have metaprogrammed!
I mean, not the really interesting metaprogramming, but that's just a complexity thing.
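If you want the two-minute version of that exercise, here's roughly what it looks like (a toy sketch of my own; the file name and generated functions are placeholders): a node.js script whose output is itself client-side JavaScript.

```js
const fs = require("fs");

// Data we want baked into the client code.
const buttons = ["greet", "farewell"];

// Build client-side source as a string: the program is our output data.
const clientJs = buttons
  .map(name => `function on_${name}() { alert("${name} clicked"); }`)
  .join("\n");

fs.writeFileSync("client.js", clientJs); // ship this file to the browser
console.log("Generated client.js:\n" + clientJs);
```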
Or induced amnesia through head trauma or some other factor.
As we currently have technology that "learns", thereby adding to the programming that was originally installed, why is it unreasonable to believe that the same thing is going on with the synths?
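It isn't unreasonable at all; even a trivial program can do it. A toy sketch in JavaScript (my own illustration, obviously not how synths are actually implemented): only the learning rule is "originally installed," and the stimulus-response pairs get added at runtime.

```js
const responses = new Map(); // starts empty: only the learning rule is built in

function learn(stimulus, reaction) {
  responses.set(stimulus, reaction); // new behavior added after deployment
}

function react(stimulus) {
  return responses.get(stimulus) ?? "no learned response yet";
}

learn("power down", "comply"); // conditioning the original authors never wrote
console.log(react("power down")); // "comply": not in the original program
```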