A running coach is not really in the category of the roles I was alluding to.
I used that example since it's something that a lot of people here can relate to.
At work, it'll be something like "design this widget/bridge/program". Someone with just basic knowledge and ChatGPT wouldn't be able to tell whether the output given is a good design or not. But someone with more experience can see the details - is there a cheaper way to get the same product? How durable is it? If it's tweaked a certain way, will it become more desirable? And so on.
You vastly overestimate the level of expertise at many jobs! A lot of "knowledge" work is still pure generation and busywork, and these things can perform it just as well as or better than humans. It's been shown over and over in these threads. They are not going to be replacing executives any time soon, but rote intern labor? You bet.
What do you do? You seem very protective of this kind of work.
What do I do? How about not using the product since it's often more trouble than it's actually worth?
It's like getting high school kids to work on an engineering job. They're definitely cheaper, and once in a while, they'll produce a somewhat acceptable product if you guide them to it. But usually, the time you'd spend guiding them and fixing their mistakes is more than the time you'd spend if you were to do the work yourself.
That’s not what controlled studies of its use in software development say, but… sure, you’re entitled to your opinion. Just know history isn’t on your side.
- Copywriters (really any career that relies solely on the written word)
- Many attorneys (doc review can be done by AI for free with basically 100% accuracy, and AI has demonstrated a very strong ability to reason logically within a legal context)
- Most software developers (GPT-4 produced bug-free code based on natural language inputs)
- Anyone in the white-collar "knowledge economy" is severely at risk now.
Real-world jobs that can't be done from home might be safe for a while.
I hope someone bumps this in 5 years. You’ve gone off the hype deep end.
A significant portion of the hype is driven by tech bros who lost their money in crypto and are hoping to regain some of it from stock in AI-based companies. The remaining portion is driven by people who fantasize about AI taking everyone's jobs so that everyone can be given an income to sit around and do nothing.
Meanwhile, in the real world, the unemployment rate is near a 50-year low, and the few layoffs that are occurring are mainly concentrated in the tech sector.
As the superintelligent AI disassembles us into paperclips, people will still be shouting 'it's only predicting next word.....'
How would this even work? ChatGPT, GPT4, and all of its iterations are programs. They have no access to matter and cannot physically manipulate matter to turn it into anything.
Here's a series of tasks that most 16-year-olds would be able to do that an AI would have no hope of achieving anytime within the next 16 years:
- Tell the AI to go to the nearest farm and to spend a few hours picking crops (we'll make the assumption that the farmer is OK with this).
- Tell the AI to go to the nearest farmer's market and to sell the crops that it picked from the farm. The crops should be sold at a price that will likely ensure success on the next step.
- Tell the AI to go to the nearest grocery store and to buy groceries that are sufficient for a day's worth of reasonably nutritious meals for two people.
- Tell the AI to return the remaining cash to the farmer and to cook those meals in a nearby home (again, we'll make the assumption that the homeowner is OK with this).
- Tell the AI to give some of the cooked food to the homeowner and to wash the homeowner's dishes once they finish the meal.
Seems fairly straightforward, but there are multiple steps where a non-human would fail. Even if you could get the AI to physically pick crops, that same AI can't lift grocery items off a shelf and purchase them (assuming it had the money), and it certainly can't physically cook anything. The last few steps are simple, but it would fail here too - it cannot physically take the plate and utensils off the homeowner's table, load them into the dishwasher, and turn the dishwasher on.
And you expect it to turn everyone and everything into paperclips, or even achieve the much lower ambition of taking plumbers' jobs? No way in hell that happens before 2050.
Here's what would happen:
1. The tech bro gives some crypto to the AI.
2. The AI posts an online ad saying "Do task X in the physical world, and get Y amount of crypto."
3. Some desperate soul accepts the offer and does the task.
End result? The AI merely shuffles money around, and a human still needs to do the work. So no, an AI won't take your job.
I can't tell if you are making a joke? Obviously language models can't harvest crops.
Crop harvesting for most row crops was automated about 20 years ago in industrialized countries, though. So I don't understand what you mean by "in the next 16 years."
You seem to think that an AI will have to replace all human abilities in order to be intelligent, which seems like an odd claim. Plenty of humans are physically incapable of many things, and we still consider them intelligent.
The paperclip joke is a meme. I believe Harambe is referencing the idea of a true "super intelligence" that can do things far outside the human cognitive bounds.
While no one says current iterations of AI are even close to surpassing human general intelligence, thinking about what something an order of magnitude (or more) more intelligent than a human could do is a fun line of thinking.
The hype is mostly due to the extremely rapid progress in LLMs. Three years ago, no one really predicted that an AI would so rapidly become able to score very highly on a wide range of standardized tests, or to write whole applications' worth of functional code from mere descriptions. The seemingly exponential increase in machine intelligence is certainly worthy of hype.
Imagine a dude going 3:15 for the 1500m and then 3:05 12 months later.
Can't tell what you meant by that last sentence. A 3:15 1500m is a superhuman performance, and AI is not even close to being human-level, let alone superhuman-level, in most fields.
If you meant a 3:15 1000m and then a 3:05 1000m 12 months later, then that's just slow progress, just like AI's slow progress over the past 12 months. LLMs couldn't do 4-digit math then, and they can't do 4-digit math now (as evidenced by yesterday's example, where one worked out 3480m in 9 minutes as 6.11 m/s instead of 6.44 m/s).
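For anyone who wants to verify the arithmetic in that example, it's a one-line check (the 6.11 figure is what the model reported; the division itself is trivial):

```python
# Sanity check of the pace arithmetic from that example.
distance_m = 3480
time_s = 9 * 60  # 9 minutes = 540 seconds

print(f"{distance_m / time_s:.2f} m/s")  # 6.44 m/s -- not the 6.11 m/s the model gave
```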
The point is that machines are extremely limited in what they can do in the physical world. The same machine that harvests crops can't wash dishes, while the same human who harvests crops can definitely wash dishes and do millions of other things. Heck, even if you completely ignore the physical world, ChatGPT cannot learn new things and come up with its own ideas. Ask it to play chess or any other board game, and the first few moves will actually be quite good, since there's lots of info on the Internet about opening moves. But once you get past the opening and into positions that nobody has encountered, it moves randomly and plays worse than a complete beginner.
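If you want to test the chess claim yourself, one way is to validate the model's suggested moves with the python-chess library and count how many are illegal once it's out of book. A rough sketch, where ask_llm_for_move is a hypothetical placeholder for however you query the model:

```python
import chess  # pip install python-chess

def ask_llm_for_move(board: chess.Board) -> str:
    """Hypothetical stand-in: prompt the LLM with the game so far and
    return its suggested move in SAN notation, e.g. 'Nf3'."""
    raise NotImplementedError

board = chess.Board()
illegal = 0
for ply in range(40):  # play out up to 20 full moves
    if board.is_game_over():
        break
    suggestion = ask_llm_for_move(board)
    try:
        board.push_san(suggestion)  # raises ValueError on illegal or unparseable moves
    except ValueError:
        illegal += 1
        board.push(next(iter(board.legal_moves)))  # substitute any legal move and continue
print(f"{illegal} illegal suggestions in {board.fullmove_number} moves")
```

Substituting a legal move whenever the suggestion is illegal keeps the game going, so the count covers the whole game instead of stopping at the first failure.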
automatic harvester wrote:
The paperclip joke is a meme. I believe Harambe is referencing the idea of a true "super intelligence" that can do things far outside the human cognitive bounds. While no one says current iterations of AI are even close to surpassing human general intelligence, thinking about what something an order of magnitude (or more) more intelligent than a human could do is a fun line of thinking.
It's certainly fun to think about, and there's even a game where you can make those clips. But you have to be realistic and admit that the possibility of a superintelligence taking over the world within the next 1000+ years is about the same as the possibility of someone running a sub-3-minute mile. It's in the same league as Marvel superheroes. Fun stuff to talk about, but complete fantasy.
I suspect the same tools that make LLMs excellent at acquiring general language parsing, comprehension, and generation abilities are not that different from the tools that could make them very good at many other things, and combining those abilities will not be that difficult. We are capable of generating unfathomable amounts of training data these days.
But... I don't think we can ever possibly agree if you are really going to make the 1000-year claim?
That just seems like bombastic spewing. 1000 years? We've seen exponential technological revolution in the last 150! Technopessimism is impossible to get through. Some people just want to naysay and naysay.
I’m still here waiting for all the robotaxis Harambro and 2600 told me would replace 90% of cars in my city by 2025.
Well, boys. It’s 2023 and it isn't even close to happening. Are we still on target?
One of two things will happen:
1.) They'll deny making that claim and hope that everyone forgot about it.
2.) They'll double down, say "Exponential growth! Get back to me in December 2025!"
If the AI bubble bursts and they find something else to put their money in, it'll be option 1. If it's still going, it'll be option 2.
Idk man. Look at how good GPT-3 was vs GPT-2. Look at how good GPT-4 is vs GPT-3 on very good correlative measures of intelligence (exams). The rate of change is impressive. Exponentials always look shallow until they shoot past you…
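To see why the "shallow until it shoots past you" intuition holds, toy numbers make it obvious (purely illustrative; not a claim about actual capability curves):

```python
# Toy numbers only: steady linear growth vs. a quantity that doubles yearly.
linear, exponential = 10.0, 1.0
for year in range(10):
    print(f"year {year}: linear={linear:5.0f}  exponential={exponential:5.0f}")
    linear += 10        # +10 per year
    exponential *= 2    # doubles every year
# The doubling curve looks negligible until about year 7, then it runs away.
```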
The city’s transportation officials sent letters this week to California regulators asking them to halt or scale back the expansion plans of Cruise and Waymo.
If you were to provide a fair comparison in which the human LSAT and bar exam candidates also had full access to any information they wanted during the exam, would the AI program still beat most of them?
It didn't have access to information any more than seeing something and having it in your brain counts as "access" to information.
There are some seriously bad efforts to redefine intelligence here.
In other words, the AI program had access to an enormous Internet library of information and could almost instantaneously search all of it for what it needed. So, for a fair comp, you'd have to allow a human the same search library for use on the LSAT. Or, you could deny that library to the AI program and see how it does with no knowledge at all. I bet it fails.
The program 'learns' from a great deal of data. That data is essentially written into it, like a different form of book: it stays there and can be searched very quickly. 'It' does not know that information; it looks it up. Humans store bio-chemical memory in their brains, and that is what they 'know'. They also have to recall things that are not as present to them, and then they think about how to figure things out and adapt to new situations. That's the part we think of as intelligence. We also access written information, so to compete we would need open-book and computer-search conditions as well. Then it would be comparable. I bet most LSAT takers would beat the AI on this type of test, granting, however, that their speed would be much worse.
Actually, the LSAT does not test your encyclopedic knowledge; it's more of a logical reasoning test (logic games, logical reasoning, and reading comprehension). Giving people access to the internet or whatever wouldn't really help someone's score. I took the exam almost 15 years ago and missed 3-5 questions (based on my score; I took it overseas, which didn't allow us to receive the full report afterward). ChatGPT would have missed about 20 questions, so it still has a long way to go.