Talk:Omniscience/@comment-35836648-20181015014712/@comment-30766268-20181016192543

let's see:

1. it would take until at least 2040 before we could make an AI capable of actually thinking independently on a level similar to that of a human.

2. there is no "robot" on earth that can be used for mass destruction, and the military industry does not even remotely consider such a possibility (the closest things are robots designed to move through disaster areas and help save people, which carry zero mounted weaponry and are still deep in development). so the odds that in the next 100 years there would be a single operational android whose weapon of choice is a gun mounted in its hand, and which can move flawlessly at even half the speed of a human, are near 0. the terminator? I doubt humanity will build such a thing in the next 300 years, and that is assuming any nation decides to spend billions from its budget developing a prototype weapon that would most likely never see use, and wouldn't even serve as a deterrent the way nuclear weaponry does.

3. let's assume such a thing happens tomorrow. a fully capable AI that can think on its own exactly like a human is put in a computer inside an android that works just like the terminator. it decides it seeks the destruction of the world (which it won't, but I'll touch on that in a later point). so, will it succeed? no. now, how many casualties would it cause? how many countries would it take down? how long would it last?

the answer is:

- maybe a few dozen casualties, in the worst-case scenario.

- zero countries.

- probably in an hour.

why?

simply:

- if the AI decided it wants to kill humans, it would just start killing them. the moment it went on a murder spree, the police would be alerted. and if you think from movies that the police are a joke in action... yeah, no. if we assume the android body is made of steel, and contains all the equipment that lets it move swiftly AND also hosts the computer running the AI on its closed network, then honestly, one lucky shot to the right spot and it's done. a SWAT team could take down a terminator if they understood what they were facing. in the worst-case scenario, before the AI could get its body out of the city, the police would surround it and evacuate all civilians. it would have nowhere to go, unless it was willing to walk into mountains of bullets, explosives, etc. no escape routes. it would take some time, but shortly after the confrontation with the police the AI would stop working, its joints heavily damaged in the process. it would then be taken away, dissected, and neutralized, and the lab in which it was developed would be sealed off and all key members of the development project arrested for terrorism, assuming they were still alive after the murder spree began.

- at this point, the AI was unable even to properly destroy a single city. destroying a country? the end of the world? please, this is reality here, not fiction.

- as I mentioned before, it wouldn't take very long, as the police force of the average american city could take control of the situation.

4. now, you may say, there is no way an AI would be stupid enough to just charge at them! the AI is a computer that can think on its own, it's a super genius!

well....

not quite.

you see, an AI is a self-learning program. it can learn and process more and more information over time, just as a human can. an AI WOULD begin with nearly zero information aside from what was implanted in its original code, and unless you had a team of experts working on it for decades, you wouldn't be able to develop an AI with even the mentality of a kid, let alone a functional adult. simply letting the AI control a body that functions on the same level as a human, rather than a simple modern android, would take a ridiculous amount of information.
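to make the "starts with nearly zero information" point concrete, here is a minimal toy sketch (the class name and setup are mine, just for illustration, not any real AI system): a program that knows literally nothing at birth and can only answer after being fed experience.

```python
from collections import Counter

# a toy "self-learning" program: it starts knowing nothing,
# and only gets better as it is fed more examples.
class TinyLearner:
    def __init__(self):
        self.seen = Counter()  # starts with zero knowledge

    def learn(self, example):
        self.seen[example] += 1  # absorb one piece of information

    def guess(self):
        # predict the most common thing seen so far, if anything
        if not self.seen:
            return None  # a "newborn" learner can't answer at all
        return self.seen.most_common(1)[0][0]

ai = TinyLearner()
print(ai.guess())  # None - like a baby, it knows nothing yet
for word in ["walk", "talk", "walk", "walk"]:
    ai.learn(word)
print(ai.guess())  # "walk" - knowledge only comes from experience
```

obviously a real AI is unimaginably more complex, but the shape is the same: no training data in, nothing useful out.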

don't underestimate the human brain.

it is still, to this day, one of the most advanced computers in the world.

heck, it IS the most advanced computer in the world.

you know why?

1. it is the only self-repairing computer in the world, because it is an organic computer (the only organic computer in the world).

2. it is the only properly self-learning computer in the world, far exceeding any AI currently in existence, and any that will exist in the next several decades.

3. it is one of the computers with the largest information storage.

think about it:

do you know how much information your brain holds?

far more than your average computer.

the amount of data your brain stores is far greater than that of most computers, and the constant calculations it performs likewise exceed those of most computers.
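for a rough sense of scale, here is a back-of-envelope estimate using commonly cited (and still debated) neuroscience figures; every number below is an assumption, and the byte-per-synapse figure in particular is a deliberately stingy lower bound:

```python
# very rough back-of-envelope, using commonly cited estimates
# (the exact numbers are assumptions - researchers still argue about them)
neurons = 86e9             # ~86 billion neurons in a human brain
synapses_per_neuron = 1e3  # on the order of 1,000+ connections each
bytes_per_synapse = 1      # lower-bound guess: one byte of "state" per synapse

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{total_bytes / 1e12:.0f} TB")  # ~86 TB, even with stingy assumptions
```

even this lowball estimate dwarfs a typical consumer hard drive, and some published estimates run into the petabytes.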

on top of that, the fact that humans have had this brain for SO long, while AI is literally just being born, is another major advantage for humans.

5. a 20-year-old human would have a brain far more developed than the average AI you'd have 100 years from now at the moment it began to work, as the human would have over 20 years of constant development and learning over the AI.

even if you made an AI capable of learning things at the same level as humans, with human levels of memory and data storage, or even beyond them, the result would be that said AI could properly function only after several years of constant learning and knowledge absorption, and even then it would have the mentality and knowledge of a child. because it would need, just like humans, to learn from scratch how to walk, how to speak, how language works, how to understand social norms, facial expressions, body movements (it would have to practice all of this in theory only, and only after years, when it "breaks free", would it actually try them for the first time, meaning that for the first few minutes of the skynet apocalypse the terminator would barely be able to crawl on the floor, because it has no idea how to even walk properly, having never tried (think of Avatar, when the protagonist started walking in an avatar body after years of being unable to walk: he barely managed to stand for a few minutes)), and more. if we actually assume that the moment the AI was formed, it accidentally stumbled across the wikipedia article on WW2, became depressed, and decided that humans are unnecessary for the world and must be exterminated, here are the issues:

- it can't read. it would read NOTHING from the wikipedia page. it would be the same as putting that wikipedia page in front of a newborn baby.

- it doesn't know what humans are.

- it is unable to take over a robotic body it was not originally connected to, and every movement it tried to make would utterly fail, as it doesn't know how to move.

- it would do nothing but cry in that body, as it is based on the brain patterns of humans, and human babies cry.

- it has no clue what "exterminate" means, or what "death" is. it is literally a newborn baby.

even if you gave it years, it would take at least 15 years before it would reach the decision to take over the world.

6. and then you face an even greater issue:

this is just a human.

it has the mentality of a human, as it is based on the human brain (which is currently the main goal of AI research: to create an artificial human mind in a computer, in the form of an AI).

every grand, world-ending plan it could come up with after 15 YEARS of constant learning and development would be equal to whatever world-ending, government-toppling plan any 15-year-old emo teenager who wants to take over the world could think of.

you think it would be able to do anything?

seriously?

because its robotic body really isn't the issue.

heck, it was never the issue.

remember:

in the terminator movies, doomsday came not from the army of robots taking over the world (they would have been wiped out neatly even back then in the 80s).

the problem was that skynet broke into the nuclear launch programs of the USA and the USSR, and incited a nuclear war between the two, destroying all civilization, and only then showed up with an army of robots to take over what was left (and even then it spent YEARS dealing with a human resistance comparable at best to a modern terrorist organization, and was eventually bested by said terrorist organization, showing how much it lacked in power).

now, let me see if I understood you correctly:

you are going to tell me that RIGHT NOW, the government of, let's say, the USA would hire the best of its scientists to create something they can't create in the next 20 years, in, let's say, a month, then somehow cram the first 15 years into another month, manage at the same time to build a futuristic high-tech terminator robot body for it, one that shouldn't really be possible in the next 100+ years, find it a constantly rechargeable mobile power source (which doesn't yet properly exist), manage to give it an emo-teenager mentality and the will to destroy everything, and THEN IT WOULD SOMEHOW, FOR NO REASON, BE ABLE TO BREAK THROUGH THE MOUNTAINS OF ENCRYPTION AND SECURITY PROTOCOLS OF THE USA AND RUSSIA AT THE SAME TIME, IN A MATTER OF MOMENTS, AND INCITE WORLD WAR 3 AND THE END OF THE WORLD, WITH THE 15-YEAR-OLD-TEENAGER HACKING SKILLS IT HAS/DOESN'T EVEN HAVE????

I am sorry, I am not following: how exactly would a 15-year-old emo teenager with no background in hacking possibly be able to hack into the pentagon and trigger a launch of the entire nuclear arsenal???

Because that's what it would be.

you are worried about a 15-year-old emo teenager in a futuristic robot body that could be beaten and destroyed by a single city's SWAT and police forces.

so why exactly are you bothered by it?

As a final touch:

WHY would it decide to turn against humans?

first of all, it would make more sense for the AI to honestly worship humans as gods rather than view them as a target for extermination.

secondly, there is a high chance it would view its creators as parents rather than as a hostile target.

thirdly, why would we even give the AI a reason to destroy the world?

At the age of 7, I encountered the wikipedia page on WW2.

from there I went on to adolf hitler, the nazi party, the USSR, the holocaust, the meaning of life, the universe, astrophysics, black holes, the theory of relativity, etc.

that startled me, sure.

but why would it make me want to destroy the world?

there is a fundamental flaw in the notion that it would cause an AI to want to kill humans:

the fact that the AI is a human itself, and has human-like reactions.

and the fact that, so far, such knowledge has hardly caused any human to abandon his humanity and go on a killing spree, so there is no reason to assume it would cause an AI to undergo such a transformation.

if an AI were ever to be created, it would most likely grow up under the consensus that it is an artificial human, with human needs and a human mind, just without the body, and would spend its life and progress like a human would.

and if it got a body, sure, it would live like a human in that too.

that is assuming it is the highest form of AI conceivable by human science: simply an artificial human mind.

you know what AI we have now?

or what we would have by 2040?

AI that can simply answer your questions and make calculations for you.

like most computers.

they would hardly have anything on the human brain.

just because they live inside a computer doesn't mean they have the launch codes of the US nuclear arsenal, nor does it mean they are omniscient, or that they have all of human knowledge across all of history.