redux the Sci-fi-entist wrote:
http://www.bbc.com/news/technology-30290540
"We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it," he says.
Seriously, this topic has been addressed in many sci-fi movies and the human race always triumphs. Mr. Hawking really needs to pull his head out of his ass and take a look around.
Same thing with all the asteroid and climate change malarkey.
Sci-fi tends to come true more often than science does. I'm getting kind of fed up with scientists thinking their theories hold up against real-world empirical evidence.
I didn't read the article, but I doubt Hawking said AI is going to kill us, at least not directly and not based on the current AI we know of. Prof Hawking is certainly smarter than me, and perhaps he is thinking so far into the future that it's beyond my imagination. However, based on everything I know, which is a career spent in IT, it is simply not possible.

Number one, AI is still a computer, and a computer is electronic circuitry, which requires electrical power. Without electrical power, which requires humans to generate, a computer is more useless than a brick. At its core, all a computer does is use math/logic to execute 'stored' programs/instructions which we create (see the toy sketch below). A computer cannot do anything without us; it needs power and input, both of which come from humans.

Now, with that said, it is possible we could become so reliant on computers that they could one day accidentally cause major power outages, a nuclear explosion, etc., but even that would not destroy the human race.

Btw, just because a person is smart, it does not mean they know everything or that they never say stupid things from time to time.
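To make the 'stored program' point concrete, here is a toy fetch-decode-execute loop in Python. The instruction set (LOAD/ADD/PRINT/HALT) is made up purely for illustration, but the shape is the same as a real CPU: it only ever does what a human already wrote into its memory, and it does nothing at all once the loop (i.e. the power) stops.

```python
# Toy stored-program machine. The "computer" below can only follow
# instructions a human wrote into its memory; the instruction names
# are invented for this sketch.

program = [            # human-authored instructions stored in memory
    ("LOAD", 2),       # put 2 in the accumulator
    ("ADD", 3),        # add 3 to it
    ("PRINT", None),   # output the result
    ("HALT", None),    # stop
]

accumulator = 0
pc = 0                 # program counter: which stored instruction is next

while True:            # the fetch-decode-execute loop
    op, arg = program[pc]          # fetch
    if op == "LOAD":               # decode + execute
        accumulator = arg
    elif op == "ADD":
        accumulator += arg
    elif op == "PRINT":
        print(accumulator)         # prints 5
    elif op == "HALT":
        break
    pc += 1            # move on to the next human-stored instruction
```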