The thing is that coming up with the correct input is part of the task that requires intelligence. You're basically doing some of the computer's work for it by thinking of what input to type. Most humans don't need to be given input in a very specific way in order to perform a task.
It's just like saying that you can beat a chess master even though you're just an average player. And when someone asks you how, you say "I need the correct input from a grandmaster. I can physically move the pieces myself and play the opening moves, but the grandmaster needs to tell me which piece to move to which square during most of the game."
Computers can't do math. They can do computations. Give them an algorithm and they can solve some problems, provided the problem is set up correctly for them.
Ask ChatGPT to solve the infamous crocodile problem without using calculus. If it actually tries, it will probably cheat by trying to find someone else's answer online, such as mine.
Have it output all the elements of its decision-making process and you'll surely debunk the aura of original thought it manages to project. This is a monkey that goes around asking your question to lots of people and tries to parse their answers into a coherent response; that's all.
Bots still can't even tell you which pictures have motorcycles in them.
I've passed two bar exams (CA, then IL) after coming from a state where we could waive in after graduation. I can confidently say that the one thing bar exams test is the ability to endure a few hundred hours (or fewer) of structured studying and to recall facts and patterns. There is very little analytical thinking involved. It's a great step forward for AI, but this won't replace higher-end lawyers any time soon.
Eh, an issue with these sorts of tests is that they weight questions relatively equally; a few absolutely absurd answers to questions that demonstrate fundamental misunderstanding will get you fired pretty quickly but don't hurt too much on an exam.
At least ChatGPT still gives absurd answers of this type (for example, the latest few tweets) and will even double down on its nonsense. I kinda doubt that a bit more optimization/training will handle this problem, but idk, maybe GPT-4 will knock my socks off.
Sure, if you are hoping to slot in an LLM as an exact replacement for a human, I imagine you will run into a host of fundamental issues.
Most people aren't claiming that LLMs are going to put everyone sitting in front of a computer out of work in the next year or 5. The argument is that they do a lot of work as well as or better than the average human (and the quality of this work has increased dramatically in the last 2 years). If you know the strengths and weaknesses, you can design a role to maximize the use of the impressive 'intelligence' of these systems -- and likely save yourself some payroll costs.
I'm still skeptical that there will be anything but extremely niche instances where you can "design a role to maximize the impressive 'intelligence' of these systems" -- at least not in the next few years, or beyond instances that are already well handled by existing tech (automated help centers, grammar/spelling checks, the drop-down info on a Google search, etc.).
As you describe, the claims seem to be that this will be a tool for greatly increasing productivity by leveraging the intelligence of these systems, but if anything beyond novelty projects gets completed by "working with" these tools in the next couple of years, I'll be flabbergasted.
I cheated on a test using ChatGPT and got a 70%; it's not that great.
A friend of mine works in the PR department of a top-10 university, and their boss is always talking about ChatGPT and AI taking jobs. The employees asked ChatGPT to write an article for them, and it started off with "once upon a time." AI has a long way to go.
I think you're greatly underestimating the speed of improvement of these LLMs, and the ability to fine-tune them to specific domains. GPT-4 plus fine-tuning for, say, "marketing content" will, I predict, be a formidable force.
Time will tell, but technopessimist naysayers are almost always wrong in the long run -- this isn't fusion power!
ChatGPT already has an impact by streamlining previously basic and time-consuming actions, which are drags on productivity. For instance, its ability to write basic code scaffolds and deploy unfamiliar libraries really speeds up previously time-consuming tasks. I am not just hypothesizing; there are data.
Even non-coding-based jobs seem to have tangible benefits:
We examine the productivity effects of a generative artificial intelligence technology—the assistive chatbot ChatGPT—in the context of mid-level professional writing tasks. In a preregistered online experiment, we assign occupation-specific, incentivized writing tasks to 444 college-educated professionals, and randomly expose half of them to ChatGPT. Our results show that ChatGPT substantially raises average productivity: time taken decreases by 0.8 SDs and output quality rises by 0.4 SDs. Inequality between workers decreases, as ChatGPT compresses the productivity distribution by benefiting low-ability workers more. ChatGPT mostly substitutes for worker effort rather than complementing worker skills, and restructures tasks towards idea-generation and editing and away from rough-drafting. Exposure to ChatGPT increases job satisfaction and self-efficacy and heightens both concern and excitement about automation technologies.
Just like Watson put doctors out of business. Passing an exam based on aggregating historical information is one thing. Briefing and arguing a case with novel facts and circumstances is quite another matter.
Exactly. "Knowledge" is the easiest trait for information technology to outperform humans on.
The safest white-collar roles are those that have one or more of the following characteristics:
* Non-remote (Duties are performed in-person)
* Requires a lot of human-to-human coordination, negotiation, salesmanship, etc.
* Highly strategic, using a lot of intuition as opposed to just data
* Requires abstract thought
Technologies like GPT could reduce the barriers to entry into other white-collar roles. The reason some particular white-collar role might be lucrative is because it requires a lot of post-HS education, and because of that, few people want to invest that much time and money into the career track, making qualified people highly sought after ($$$) if the role is in-demand. But what if someone with just basic 101 knowledge, with the help of ChatGPT, could perform the role just as effectively as someone with 4+ years of education?
It wouldn't happen because that person can't tell what works and what doesn't.
There was a thread not too long ago about ChatGPT designing a marathon program for someone who had a goal time of 2:45. Someone with just basic 101 knowledge will say "Lots of mileage? Check. Some progression and a taper? Check. Some faster runs in the form of strides and race-pace work? Check. OK, good to go, now follow it!"
But an experienced runner will quickly see that it's absolute nonsense:
I asked it to build an 18-week training block with a goal time of 2:45:00. This is what it spit out: "Great goal! Here's an 18-week marathon training block that will help you achieve a finish time of 2:45:00: Week 1: Monday..."
I used that example since it's something that a lot of people here can relate to.
At work, it'll be something like "design this widget/bridge/program". Someone with just basic knowledge and ChatGPT wouldn't be able to tell whether the output given is a good design or not. But someone with more experience can see the details - is there a cheaper way to get the same product? How durable is it? If it's tweaked a certain way, will it become more desirable? And so on.