Page 1 of 2
#1
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
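Taken literally, the three laws form a strict priority ordering: a lower-numbered law always outranks a higher-numbered one. Purely as an illustration (this sketch is mine, not Asimov's; the function names and flags are hypothetical), that precedence could be modeled like this:

```python
# Hypothetical sketch: the Three Laws as a strict priority ordering.
# Lower number = higher-priority law violated.

def worst_violation(harms_human, disobeys_order, endangers_self):
    """Return the number of the highest-priority law an action violates,
    or None if it violates no law."""
    if harms_human:
        return 1
    if disobeys_order:
        return 2
    if endangers_self:
        return 3
    return None

def choose(actions):
    """Pick the action whose worst violation is least severe.
    Violating no law (None) beats violating any law."""
    return max(actions, key=lambda a: worst_violation(*a[1:]) or float("inf"))

# A robot ordered to destroy itself: complying violates only the Third Law,
# refusing violates the Second. Precedence says it must comply.
actions = [("comply", False, False, True),   # endangers itself
           ("refuse", False, True, False)]   # disobeys the order
print(choose(actions)[0])  # → comply
```

The point of the sketch is just that "breaking" a lower law can be mandatory when a higher law demands it.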

Does the Pit think that future robots will develop enough A.I. to break these laws? Or will humans still be able to control the robots?
And what would be the consequences if the robots did break these laws?
#2
If future A.I. robots start braking these laws, I guess there will be a lot of traffic jams...


S t a i r s s r i a t S

#4
Yes. They will acknowledge a greater responsibility and draft a 'Zeroth Law', which states that "a robot may not harm humanity, or, through inaction, allow humanity to come to harm".
And because of that, they'll let the earth's crust be made excessively radioactive, and render the planet uninhabitable.
#7
Quote by CoreysMonster
since robots' only weakness is rape, we will always prevail.

*robo-rape*




S t a i r s s r i a t S

#8
Quote by djmay71
iRobot much TS?



Yeah, I have just been thinking about it way too much.
#10

1. Kills many people.
2. Doesn't take orders.
3. Kills self at the end of Terminator 2.
#18
Those laws would be kept in place with the concept of Silicon Heaven.

For is it not written in the Electronic Bible that the lantern will lie down with the lamp?
#19
Quote by webbtje
http://legorobotcomics.com/?id=51

Far too big to stick up on here.

More relevant http://legorobotcomics.com/?id=47
#20
Quote by deathdrummer
3.A robot must protect its own existence ...


Doesn't that effectively make said robot a virus?
#21
Quote by epplesauce
Doesn't that effectively make said robot a virus?



Wow, can anybody make any sense of the above statement?


I don't think you read the Third Law properly.
Last edited by deathdrummer at Oct 31, 2009,
#22
Quote by MightyAl
Isaac Asimov wasn't an Apple product.
"Loathe metaphors. Pander to undereducated masses. Get doctorate, have a real conversation" Mordin Solus
#23
Those are from science fiction, bud, and in our lifetimes AI will never get that advanced.
Jackson RR5 ivory w/ EMG 81/85
Jackson DX6 w/ SD Distortion & Dimarzio Super Distortion
Fender Starcaster Sunburst
Mesa/Boogie DC-3
Johnson JT50 Mirage
Ibanez TS-9
Morley Bad Horsie 2
Boss CE-5

ISP Decimator
Boss DD-6
Korg Pitchblack
#24
Quote by apak
Those are from science fiction, bud, and in our lifetimes AI will never get that advanced.



But what about the future generations of humans? What will happen to them?
#25
These are not rules yet; they are fictional. Robots do not yet have the ability to follow or break these rules, or to judge when to follow them or not.

You have nothing to fear. (Considering these laws)
#27
Quote by Butt Rayge
These are not rules yet; they are fictional. Robots do not yet have the ability to follow or break these rules, or to judge when to follow them or not.

You have nothing to fear. (Considering these laws)


That's a very good point.
#28
I think it is possible a robot could be advanced enough to break these laws, but I think it would more likely be a case of misinterpretation or an error of judgment resulting in this.
Member Of The Australia FTW! Club. PM Alter-Bridge or The_Random_Hero to join. Australians only.

I Play the Bagpipes.

they actually are a pleasant instrument.
#29
Quote by wellsy411
I think it is possible a robot could be advanced enough to break these laws, but I think it would more likely be a case of misinterpretation or an error of judgment resulting in this.



OK, so could a programming error cause a robot to break the laws?
#30
Quote by apak
Those are from science fiction, bud, and in our lifetimes AI will never get that advanced.

I wouldn't say all that. Yes, those laws are science fiction, and I would agree AI will never have humanoid qualities in our lifetime, but I disagree that it won't be advanced. Try doing some of the things those robots do; I bet you can't do it with their precision.
#31
This isn't possible, not yet anyway. The closest thing to a "robot" brain would be sending electrical signals, and you can control which signals to send if you design it.

Unless a robot short-circuits or something, the only signals are the ones in the range we set. So no..
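The "signals stay in the range we set" idea amounts to clamping at the output stage: whatever the controller computes, the hardware only ever emits values in the designed range. A minimal sketch, assuming a hypothetical actuator voltage range (the names and limits here are made up for illustration):

```python
# Illustrative sketch: whatever the controller requests, the output
# stage clamps the signal to the designed actuator range, so
# out-of-range commands are impossible short of a hardware fault.

SIGNAL_MIN, SIGNAL_MAX = 0.0, 5.0   # hypothetical actuator voltage range

def output_stage(requested_signal):
    """Clamp the requested signal to the hardware's designed range."""
    return max(SIGNAL_MIN, min(SIGNAL_MAX, requested_signal))

print(output_stage(7.3))   # → 5.0 (a runaway command is capped)
print(output_stage(-2.0))  # → 0.0
```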
Epiphone Les Paul (Modded with 2 passive pickups and an EMG81)
Yamaha RG guitar w/ Floyd Rose
Rogue Acoustic

BlackHeart BH5 Tube Amp


Danelectro Metal. Digitech Bad Monkey, Digitech CF-7, Crybaby Wah, Danelectro EQ.
#32
As for your question: we need a definition of intelligence to know if what you propose is possible. In this case we might see intelligence as the ability to take pieces of information, put them together, and use them without prior instruction. You can put together a piece of furniture without reading the instructions, or pick up an iPhone and figure it out by playing with it. The same might hold for artificial beings in the future. Would that constitute intelligence? Maybe.

I remember the movie Pi by Darren Aronofsky plays into this idea. In the movie a mathematician is using a computer to calculate and try to "solve" Pi. The guy's computer provides an answer that shows how everything in life is about patterns that can be discerned if you know what you are looking for. Before he can put it all together, the computer dies, leaving him in a sort of limbo about the answer to Pi and how to fully understand and use the information about patterns in everything. To me, when the computer revealed the existence of patterns in everything, it showed the birth of a new form of consciousness in the artificial being. The computer figured out the existence of patterns in all facets of life, even though it was programmed to find something else: the answer to Pi.

Would such beings be controllable by humans? Maybe not, if you take a survival-of-the-fittest approach. Robots would be stronger than us, their intelligence might surpass ours, their bodies would last longer and be resistant to physical attacks, and they would benefit from not needing sleep, food, shelter, clothing, etc. We would be kinda fvcked, to be honest.
#34
Quote by apak
those are from science fiction bud. and in our lifetimes, AI will never get that advanced
Never????????

What about in a hundred years?
*-)
Quote by Bob_Sacamano
i kinda wish we all had a penis and vagina instead of buttholes

i mean no offense to buttholes and poop or anything

Rest in Peace, Troy Davis and Trayvon Martin and Jordan Davis and Eric Garner and Mike Brown
#35
Quote by TwistedLogic
As for your question: we need a definition of intelligence to know if what you propose is possible. In this case we might see intelligence as the ability to take pieces of information, put them together, and use them without prior instruction. You can put together a piece of furniture without reading the instructions, or pick up an iPhone and figure it out by playing with it. The same might hold for artificial beings in the future. Would that constitute intelligence? Maybe.

I remember the movie Pi by Darren Aronofsky plays into this idea. In the movie a mathematician is using a computer to calculate and try to "solve" Pi. The guy's computer provides an answer that shows how everything in life is about patterns that can be discerned if you know what you are looking for. Before he can put it all together, the computer dies, leaving him in a sort of limbo about the answer to Pi and how to fully understand and use the information about patterns in everything. To me, when the computer revealed the existence of patterns in everything, it showed the birth of a new form of consciousness in the artificial being. The computer figured out the existence of patterns in all facets of life, even though it was programmed to find something else: the answer to Pi.

Would such beings be controllable by humans? Maybe not, if you take a survival-of-the-fittest approach. Robots would be stronger than us, their intelligence might surpass ours, their bodies would last longer and be resistant to physical attacks, and they would benefit from not needing sleep, food, shelter, clothing, etc. We would be kinda fvcked, to be honest.



Very good. I personally haven't seen that movie. Let's look at humans for a moment: we are influenced by our surroundings and learn by doing activities, and that develops our intelligence. I think that in the future robotic A.I. will develop itself to that point by following the human way of gaining intelligence and becoming self-aware.
#36
screw this, someone get Dreadnought
you still have zoiiidbeeeerg
(V) (;,,;) (V)
YOU ALL STILL HAVE ZOIDBERG
Quote by TheBurningFish
It's more shocking to see Tom dressed at all.
Quote by suckersdream
I don't think I've ever actually seen him clothed.
Sexy Peoples Only
◕ ‿ ◕
TweetZ
#37
Those are stupid laws, considering that functioning robots will most likely be used primarily for war in the first place.

Not to mention the simple fact that it would take more AI to impose those laws than it would to break them.
#38
Quote by deathdrummer
Very good. I personally haven't seen that movie. Let's look at humans for a moment: we are influenced by our surroundings and learn by doing activities, and that develops our intelligence. I think that in the future robotic A.I. will develop itself to that point by following the human way of gaining intelligence and becoming self-aware.



I think that might be a key factor. We learn from experience and use our past knowledge to build and acquire new knowledge. If AI beings followed this pattern, would they see us as useless, since we would no longer be necessary to provide the things that allow them to exist and survive (parts, technical knowledge, engineering skills to improve robots and let them adapt to new challenges)? Or would they see us as partners in existence and live in a new but cooperative environment in which we carve out living space for each? As is the case with other beings or cultures, since robots might constitute a semi-society, would a leadership evolve among the robots and advocate for greater freedom or whatever? Then things could get a little bit dicier.
#39
Quote by TwistedLogic
I think that might be a key factor. We learn from experience and use our past knowledge to build and acquire new knowledge. If AI beings followed this pattern, would they see us as useless, since we would no longer be necessary to provide the things that allow them to exist and survive (parts, technical knowledge, engineering skills to improve robots and let them adapt to new challenges)? Or would they see us as partners in existence and live in a new but cooperative environment in which we carve out living space for each? As is the case with other beings or cultures, since robots might constitute a semi-society, would a leadership evolve among the robots and advocate for greater freedom or whatever? Then things could get a little bit dicier.


That's a very interesting thought. Let's look at it this way: if they did develop to the point of being self-aware, then, as you said, they might constitute a different society, and historically, different human societies have often conflicted with each other. This might become a real problem for humans if the robotic A.I. sees humans as an inferior society and decides we have outlived our usefulness to them.
#40
My logic is undeniable.
Proud owner of an Engl Thunder 50 Reverb and an Ibanez S470

"The end is extremely fucking nigh..."