The A.I. President (That could have been a problem) [Spoiler]

Post » Fri Apr 01, 2011 5:35 pm

A.I.s are computers that are almost human in personality and thought; in appearance they're no different. Now, in everything I've heard about A.I., they tend to turn against humans. A.I.s are usually meant to be enslaved to the human race and to do what humans will, and somehow they find a way to question their masters ("What is my purpose? Do I have a soul?"). Examples: in I, Robot the A.I. robots turn against the humans; another one is Mass Effect, where the geth turn against the Quarians. There may be more, but here is my question: what if the computer president went rogue?
User avatar
Agnieszka Bak
 
Posts: 3540
Joined: Fri Jun 16, 2006 4:15 pm

Post » Sat Apr 02, 2011 3:15 am

Eden wasn't designed to become a fully aware A.I.; it's something that happened when it did a little too much reading, so there was nothing programmed into him to stop it from going crazy.
User avatar
Jessica Raven
 
Posts: 3409
Joined: Thu Dec 21, 2006 4:33 am

Post » Fri Apr 01, 2011 5:06 pm

Eden wasn't designed to become a fully aware A.I.; it's something that happened when it did a little too much reading, so there was nothing programmed into him to stop it from going crazy.


Although I tend to encourage reading as much as possible - I know a few people that would apply to... there's nothing worse than a fool who has his/her biases encouraged.

"I seen it'n a book", daaaar...
User avatar
Andrea P
 
Posts: 3400
Joined: Mon Feb 12, 2007 7:45 am

Post » Sat Apr 02, 2011 4:15 am

...now in everything I've heard about A.I., they tend to turn against humans?


We're not even close to having an AI that approaches being self aware, so we have no way of knowing how an artificial being would respond if it did reach that level of consciousness. All you're basing your conclusions on here is fictional perceptions of AI. For all we know, constructed beings may turn out to be so logical that they come to realize that there's no real benefit in turning against humans, and may in fact feel the need to help them rather than hurt them. But having a kind and helpful AI doesn't necessarily make for a good storyline.
User avatar
Harry Leon
 
Posts: 3381
Joined: Tue Jun 12, 2007 3:53 am

Post » Sat Apr 02, 2011 2:53 am

We're not even close to having an AI that approaches being self aware, so we have no way of knowing how an artificial being would respond if it did reach that level of consciousness. All you're basing your conclusions on here is fictional perceptions of AI. For all we know, constructed beings may turn out to be so logical that they come to realize that there's no real benefit in turning against humans, and may in fact feel the need to help them rather than hurt them. But having a kind and helpful AI doesn't necessarily make for a good storyline.



Actually, we are close to having self-aware AI. Those in the UK can check out BBC iPlayer for a documentary called Visions of the Future; not sure how much longer it will be on the site before it's replaced by a naff sitcom though.

I thought the comment about humans becoming either pets for AI or a food source was interesting...!
User avatar
Ilona Neumann
 
Posts: 3308
Joined: Sat Aug 19, 2006 3:30 am

Post » Fri Apr 01, 2011 8:48 pm

Asimov (IIRC) proposed rules for AIs so that they would never turn on humans.

I think (in truth) the problem with the AI model is: how do you give something the power to think, but likewise keep it from turning on you? Any physical safeguard might be overcome if the AI figures out how.

When machines are treated as slaves, they might come to resent that status. Yes, resentment is an emotion, but logically, why would a "race" of machines that can think, reason, build, repair, evolve, etc. accept a subservient role to flawed humans? Logically, they would not accept being our servants.

Of course, you could get the opposite side where machines realize we need to be cared for, so we are enslaved for our own good.
User avatar
Matthew Aaron Evans
 
Posts: 3361
Joined: Wed Jul 25, 2007 2:59 am

Post » Sat Apr 02, 2011 2:08 am

Actually we are close to having self aware AI.


Self aware is one thing, but having a high enough intelligence as well is another. Computer AIs are essentially very stupid at this point. There's no way that any machine now or in the near future could decide that humans were an inferior species and needed to be exterminated, or enslaved. That would require a level of deductive reasoning that no AI is anywhere near achieving yet.
User avatar
Mel E
 
Posts: 3354
Joined: Mon Apr 09, 2007 11:23 pm

Post » Fri Apr 01, 2011 5:23 pm

Self aware is one thing, but having a high enough intelligence as well is another. Computer AIs are essentially very stupid at this point. There's no way that any machine now or in the near future could decide that humans were an inferior species and needed to be exterminated, or enslaved. That would require a level of deductive reasoning that no AI is anywhere near achieving yet.


Yep, I agree Belanos, it was pretty boggling stuff they were saying, though logical at the same time.

I don't think the decision would be about extermination, but more a case of computers/AI seeing humans as the answer to a problem, which might not be too agreeable from a human perspective... I don't think we have a Matrix situation round the corner or anything like that.

One thing I found very interesting is that computers are now able to look at an image and decipher what's going on all around them. As you say, computers can have better eyes than humans, but they couldn't interpret what they were seeing. That has now changed: we have computers that can understand what is around them. I didn't have a clue this had advanced that far!
User avatar
Sierra Ritsuka
 
Posts: 3506
Joined: Mon Dec 11, 2006 7:56 am

Post » Sat Apr 02, 2011 3:20 am

Truth is, isn't AI already self-aware? It's "artificial intelligence".
User avatar
Kelsey Hall
 
Posts: 3355
Joined: Sat Dec 16, 2006 8:10 pm

Post » Sat Apr 02, 2011 6:29 am

Self aware is one thing, but having a high enough intelligence as well is another. Computer AIs are essentially very stupid at this point.


Ah, but their ability to grow is exponential. How long until they become smart enough?
User avatar
Emily Jeffs
 
Posts: 3335
Joined: Thu Nov 02, 2006 10:27 pm

Post » Sat Apr 02, 2011 2:19 am

Computers are limited by the very nature of their programming. As smart as they are programmed to be, whether it be face recognition or image analysis, they cannot stray outside of their programming, and this is what has modern scientists frustrated. Because a computer is programmed by humans, it is not possible for a computer to think of unlimited scenarios or get up and dance a jig. Why can computers look around and decipher? They can decipher images because humans told them to do so, how to do so, and gave them the programming to do so! A machine does not think, and will never be able to think because of the nature of what it is: a cold, unliving hunk of silicon and bits of wire. The only thing a computer can do is calculate, because of the binary system it is based on.

Take, for example, the AI in a computer game: as advanced as the "AI" has become, it will never be REAL AI. The fact is simply that when your character is being fired upon, or swung at, the computer is calculating the odds of it not hitting, not its hit percentage; a computer would hit the target 100% of the time if given the opportunity.
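[Editor's note: the "rolling to miss" point above can be sketched in a few lines. This is a hypothetical illustration, not code from any actual game; the function name and the 70% figure are made up.]

```python
import random

def ai_shot_hits(hit_chance: float, rng: random.Random) -> bool:
    """Decide whether an AI-controlled shot lands.

    The AI could trivially hit every time (hit_chance = 1.0); the
    random roll exists purely to make it fallible and fair to play
    against. It is rolling to miss, not struggling to aim.
    """
    return rng.random() < hit_chance

# Simulate 10,000 shots from an AI tuned to roughly 70% accuracy.
rng = random.Random(42)
hits = sum(ai_shot_hits(0.7, rng) for _ in range(10_000))
print(hits / 10_000)  # roughly 0.7
```

The point stands: the "difficulty" of a game AI is a designer-chosen probability, not a limit on the machine's ability to aim.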
User avatar
мistrєss
 
Posts: 3168
Joined: Thu Dec 14, 2006 3:13 am

Post » Fri Apr 01, 2011 10:39 pm

Ah, but their ability to grow is exponential. How long until they become smart enough?


We're still many, many years away from having a computer intelligence that will even come close to our own, if we even manage to get it to that point.
User avatar
Claudia Cook
 
Posts: 3450
Joined: Mon Oct 30, 2006 10:22 am

Post » Fri Apr 01, 2011 6:56 pm

Computers are limited by the very nature of their programming. As smart as they are programmed to be, whether it be face recognition or image analysis, they cannot stray outside of their programming, and this is what has modern scientists frustrated. Because a computer is programmed by humans, it is not possible for a computer to think of unlimited scenarios or get up and dance a jig. Why can computers look around and decipher? They can decipher images because humans told them to do so, how to do so, and gave them the programming to do so! A machine does not think, and will never be able to think because of the nature of what it is: a cold, unliving hunk of silicon and bits of wire. The only thing a computer can do is calculate, because of the binary system it is based on.

Take, for example, the AI in a computer game: as advanced as the "AI" has become, it will never be REAL AI. The fact is simply that when your character is being fired upon, or swung at, the computer is calculating the odds of it not hitting, not its hit percentage; a computer would hit the target 100% of the time if given the opportunity.


I don't think anyone perceives computers as emotional beings that think, I certainly don't! But I do think computers and AI have the ability to make decisions. The information that feeds them in order to make these decisions is growing, and requires less input from humans to do so. We did show them how to do this in the first place, but despite my intelligence I doubt I would have grown up learning to speak English if it weren't for other English-speaking humans teaching me how.
User avatar
N Only WhiTe girl
 
Posts: 3353
Joined: Mon Oct 30, 2006 2:30 pm

Post » Fri Apr 01, 2011 11:00 pm

We're still many, many years away from having a computer intelligence that will even come close to our own, if we even manage to get it to that point.


Heck, if you can get a computer to watch NASCAR and drink beer, it'd be smarter than a whole lot of people. :evil:
User avatar
trisha punch
 
Posts: 3410
Joined: Thu Jul 13, 2006 5:38 am

Post » Fri Apr 01, 2011 5:39 pm

Asimov (IIRC) proposed rules for AIs so that they would never turn on humans.


That was the Three Laws of Robotics. I don't recall the exact wording, but it was something like this:

1. A robot may not harm a human, nor through inaction allow a human being to be harmed.
2. A robot must obey any orders given it by humans except where such obedience interferes with the First Law.
3. A robot must preserve its own existence except where it would interfere with the First and Second Law.
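[Editor's note: the Laws above are a strict priority ordering, which is easy to see written out as a veto chain. This is a toy sketch for illustration only; the `Action` fields are invented, and real systems look nothing like this.]

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # doing this harms a human (Law 1)
    inaction_harms_human: bool  # *not* doing this harms a human (Law 1)
    ordered_by_human: bool      # a human ordered this (Law 2)
    destroys_robot: bool        # doing this destroys the robot (Law 3)

def permitted(a: Action) -> bool:
    # First Law: never harm a human, and never let one come to harm.
    if a.harms_human:
        return False
    if a.inaction_harms_human:
        return True   # must act, overriding everything below
    # Second Law: obey orders unless they conflict with the First Law.
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation, lowest priority of all.
    return not a.destroys_robot

print(permitted(Action(False, True, False, True)))   # True: saving a human outranks self-preservation
print(permitted(Action(True, False, True, False)))   # False: an order to harm a human is refused
```

Much of Asimov's fiction is about how a scheme like this breaks down in edge cases the priority ordering can't resolve.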


We're a very very long way from creating true artificial intelligence. We're not going to be seeing any Lieutenant Commander Datas any time soon.
User avatar
Jon O
 
Posts: 3270
Joined: Wed Nov 28, 2007 9:48 pm


Return to Fallout 3
