Ckekekdi wrote:
Entrepreneur does not equal Oracle.
Quick! Someone ask Larry Ellison!
Killer Robots are coming wrote:
http://www.dailymail.co.uk/sciencetech/article-5110787/Elon-Musk-says-10-chance-making-AI-safe.html
Tesla's Elon Musk warns we only have 'a 5 to 10% chance' of preventing killer robots from destroying humanity
Elon Musk was speaking to employees at his firm, Neuralink, this month
He said that efforts to make AI safe only have 'a 5-10% chance of success'
The warning comes shortly after Musk said that regulation of AI was drastically needed because it's a 'fundamental risk to the existence of human civilisation'
Read Asimov's 3 laws of robotics. If that programming was implemented, it would eliminate Musk's doomsday scenario.
John Utah wrote:
Of the infinite negative possibilities that exist in the world related to our future, this one ranks extremely low in terms of realistic probability. Resources better spent elsewhere.
You don't follow robotics much do you?
Killer Robots are coming wrote:
http://www.dailymail.co.uk/sciencetech/article-5110787/Elon-Musk-says-10-chance-making-AI-safe.html
Tesla's Elon Musk warns we only have 'a 5 to 10% chance' of preventing killer robots from destroying humanity
Elon Musk was speaking to employees at his firm, Neuralink, this month
He said that efforts to make AI safe only have 'a 5-10% chance of success'
The warning comes shortly after Musk said that regulation of AI was drastically needed because it's a 'fundamental risk to the existence of human civilisation'
Most likely it has already happened with nanotechnology. Bacteria used to make radioactive metals inert can also be genetically modified to turn the planet into a toxic environment unfit for human life.
Are we sure that's even Musk? He looks more like an animated wax figure every day.
Here you go wrote:
1. Unplug. Humans can survive without power. Machines can’t.
Ummm . . . and machines can't control the power system why exactly?
I, Robot wrote:
Read Asimov's 3 laws of robotics. If that programming was implemented, it would eliminate Musk's doomsday scenario.
Can't tell if you are joking with that incredibly naive and simplistic "solution".
Guess I'll have to give you solid credit: 7/10
Good job!
Boston Robots wrote:
John Utah wrote:
Of the infinite negative possibilities that exist in the world related to our future, this one ranks extremely low in terms of realistic probability. Resources better spent elsewhere.
You don't follow robotics much do you?
He's a Trumpette. He don't need no stinking knowledge! He's got very good intuition. The BEST intuition!
Con Man wrote:
This con man cannot make a profit with any of his companies.
The degree to which you LR boneheads are completely unable to grasp the logical relationships (or lack of any) between separate propositions is really amazing.
I'll speak really slowly, to try and help:
1. Yes, Musk is probably at least something of a con man, and yes, it's entirely possible that all his companies, and he, will be bankrupt in another ten years.
2. This has NOTHING to do with the validity of his opinions on AI or anything else. If you can't separate the two, you're a major moron.
3. His opinions about AI, and the likely consequences, are probably *right.*
Yeah, of course, the idea that he can assign a specific probability to it is fairly ridiculous, but who cares?
If you know much about the issue, you recognize that there's an excellent chance Mankind will not survive the advent of AGI (or whatever goofy acronym or label you prefer), and no one's really doing anything about it, and no one is *going* to do anything with any real chance of preventing it.
Regardless of what you think of his business prospects, or accounting practices, or personality, etc., he's smart enough to recognize this, and cares enough to try and use his position to raise consciousness about it.
In both respects, he beats the hell out of most of you pinheads.
sp2 wrote:
Con Man wrote:
This con man cannot make a profit with any of his companies.
The degree to which you LR boneheads are completely unable to grasp the logical relationships (or lack of any) between separate propositions is really amazing.
I'll speak really slowly, to try and help:
1. Yes, Musk is probably at least something of a con man, and yes, it's entirely possible that all his companies, and he, will be bankrupt in another ten years.
2. This has NOTHING to do with the validity of his opinions on AI or anything else. If you can't separate the two, you're a major moron.
3. His opinions about AI, and the likely consequences, are probably *right.*
Yeah, of course, the idea that he can assign a specific probability to it is fairly ridiculous, but who cares?
If you know much about the issue, you recognize that there's an excellent chance Mankind will not survive the advent of AGI (or whatever goofy acronym or label you prefer), and no one's really doing anything about it, and no one is *going* to do anything with any real chance of preventing it.
Regardless of what you think of his business prospects, or accounting practices, or personality, etc., he's smart enough to recognize this, and cares enough to try and use his position to raise consciousness about it.
In both respects, he beats the hell out of most of you pinheads.
This ^ Well said!
But I do take exception to one item in your post. "Pinheads"? Perhaps "nanopinheads" would be more accurate.
The actual scientists who spend their lives thinking about this stuff by-and-large firmly disagree with Musk's bloviating.
https://www.inc.com/jessica-stillman/top-mit-ai-scientist-to-elon-musk-please-simmer-do.html
Of course a major AI researcher says it's safe. That's what his livelihood is based on! That's how he's getting funded! It's funny because he actually raises concerns about AI himself when it comes to self-driving cars, and admits that AI researchers have no clue how to control AI: "I was at DeepMind in London about three weeks ago and [they admitted that things could easily have gone very wrong]."
Harambe wrote:
The actual scientists who spend their lives thinking about this stuff by-and-large firmly disagree with Musk's bloviating.
https://www.inc.com/jessica-stillman/top-mit-ai-scientist-to-elon-musk-please-simmer-do.html
No offense but the article that you linked to:
1) Does not in any way support your claim
2) Does not supply any rational argument against Musk's position
Good job!
Recog wrote:
Boston Robots wrote:
You don't follow robotics much do you?
He's a Trumpette. He don't need no stinking knowledge! He's got very good intuition. The BEST intuition!
It’s funny when dipshits like you think that you know something, or anything.
sp2 wrote:
Con Man wrote:
This con man cannot make a profit with any of his companies.
The degree to which you LR boneheads are completely unable to grasp the logical relationships (or lack of any) between separate propositions is really amazing.
I'll speak really slowly, to try and help:
1. Yes, Musk is probably at least something of a con man, and yes, it's entirely possible that all his companies, and he, will be bankrupt in another ten years.
2. This has NOTHING to do with the validity of his opinions on AI or anything else. If you can't separate the two, you're a major moron.
3. His opinions about AI, and the likely consequences, are probably *right.*
Yeah, of course, the idea that he can assign a specific probability to it is fairly ridiculous, but who cares?
If you know much about the issue, you recognize that there's an excellent chance Mankind will not survive the advent of AGI (or whatever goofy acronym or label you prefer), and no one's really doing anything about it, and no one is *going* to do anything with any real chance of preventing it.
Regardless of what you think of his business prospects, or accounting practices, or personality, etc., he's smart enough to recognize this, and cares enough to try and use his position to raise consciousness about it.
In both respects, he beats the hell out of most of you pinheads.
Yes, the aliens are going to attack us as well. Run for the hills.
Boston Robots wrote:
John Utah wrote:
Of the infinite negative possibilities that exist in the world related to our future, this one ranks extremely low in terms of realistic probability. Resources better spent elsewhere.
You don't follow robotics much do you?
I live in reality. Apparently you and EM live in science fiction world and think the real world is coming to an end because a machine can do a box jump and a back flip. One or both of you may be autistic.
John Utah wrote:
Recog wrote:
He's a Trumpette. He don't need no stinking knowledge! He's got very good intuition. The BEST intuition!
It’s funny when dipshits like you think that you know something, or anything.
Ah, more stupidity from johnnie boy!
Good job!
PIO! wrote:
Harambe wrote:
The actual scientists who spend their lives thinking about this stuff by-and-large firmly disagree with Musk's bloviating.
https://www.inc.com/jessica-stillman/top-mit-ai-scientist-to-elon-musk-please-simmer-do.html
No offense but the article that you linked to:
1) Does not in any way support your claim
2) Does not supply any rational argument against Musk's position
Good job!
No offense, but does Mr. Musk have calculations behind his estimate of a 5-10% chance, or is that number pulled out of his ___? I'd like a full statistical analysis of how he arrived at the 5-10% figure. Thanks.
Here you go wrote:
1. Unplug. Humans can survive without power. Machines can’t.
2. I think someone just watched Battlestar Galactica.
Exactly, some AI silicon tw*t starts any shite with me, I'll switch them off from the mains.
sp2 wrote:
Con Man wrote:
This con man cannot make a profit with any of his companies.
The degree to which you LR boneheads are completely unable to grasp the logical relationships (or lack of any) between separate propositions is really amazing.
I'll speak really slowly, to try and help:
1. Yes, Musk is probably at least something of a con man, and yes, it's entirely possible that all his companies, and he, will be bankrupt in another ten years.
2. This has NOTHING to do with the validity of his opinions on AI or anything else. If you can't separate the two, you're a major moron.
3. His opinions about AI, and the likely consequences, are probably *right.*
Yeah, of course, the idea that he can assign a specific probability to it is fairly ridiculous, but who cares?
If you know much about the issue, you recognize that there's an excellent chance Mankind will not survive the advent of AGI (or whatever goofy acronym or label you prefer), and no one's really doing anything about it, and no one is *going* to do anything with any real chance of preventing it.
Regardless of what you think of his business prospects, or accounting practices, or personality, etc., he's smart enough to recognize this, and cares enough to try and use his position to raise consciousness about it.
In both respects, he beats the hell out of most of you pinheads.
Then why progress with it if that is the likely conclusion? Is progress worth risking the destruction of humans?
Comes down to greed, I guess. I have never understood the obsession with GDP and profit growth. If you get to a good position and stay at 0% growth, then what is the problem?