A dataset that is collected from the web, among other sources?
Just like all of human intelligence?
Should humans score 100% because they are 'trained' on datasets from the web as well?
Pretty sure the bar exam isn't open book. But I get what you're arguing: that AI is like a human in the sense that it has to learn and retain info (which isn't the same, but whatever). The only thing AI has going for it is that it's faster and it "remembers" better. It really isn't doing much more than Ctrl+F. It already has a leg up on the human brain in the sense that it doesn't push info out to make room for more. Unless, of course, a virus is introduced.
This is like showing an 8-year-old kid making a half-court shot. Looks impressive until you realize that it was pure luck and that the hundreds of other similar attempts ended in failure.
Idk, but 3 of the 6 of us were engineers, so there you go. The others were a philosophy PhD, a history MA, and the one I considered bottom-shelf was a liberal arts major from a private college.
So, I have no idea—but, back in the day, I suspect more than a few.
Standardized tests are easy to ace; it just takes dedication, study, and training (LOL, like distance running!). IMO not much intelligence required. Same with the lower echelons of the hard sciences and math: AP exams, undergrad level, even master's level now.
What metrics would you use to separate your and your friends' intelligence from GPT-4, then? Certainly exams don't work.
LLMs will be better than the bottom 50% of lawyers shortly. That is a lot of 'knowledge economy' jobs on the line.
Exactly: what defines “intelligence”? If your definitions start with or are based on “human”, then idk how there can be a meaningful conversation.
This is false, since the number of parameters, despite being massive, is far less than the sum of human knowledge, yet it recapitulates general knowledge well. So it is doing something akin to “learning” rather than just storing and retrieving. How different is it from human learning and reasoning? Hard to say, isn’t it? It’s very black-box-esque. But I worry that lots of arguments simply rest on preconceptions about how computers store and retrieve knowledge.
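To put rough numbers on the compression point, here's a back-of-envelope sketch. GPT-4's size hasn't been disclosed, so the figures below are GPT-3's published ones (175B parameters; a ~45 TB raw Common Crawl snapshot filtered to ~570 GB of training text), used purely as a stand-in:

```python
# Back-of-envelope: model capacity vs. the text behind it.
# GPT-4's parameter count is not public; the figures below are GPT-3's
# published ones (175B parameters; ~45 TB raw Common Crawl filtered to
# ~570 GB of training text), used purely as a stand-in.

params = 175e9              # GPT-3 parameter count
bytes_per_param = 2         # fp16 weights
weights_tb = params * bytes_per_param / 1e12

raw_crawl_tb = 45.0         # compressed plaintext before filtering
filtered_tb = 0.57          # text actually kept for training

print(f"weights       : ~{weights_tb:.2f} TB")
print(f"raw web text  : ~{raw_crawl_tb:.0f} TB (~{raw_crawl_tb / weights_tb:.0f}x the weights)")
print(f"filtered text : ~{filtered_tb:.2f} TB")
```

Whatever the exact numbers, the weights are far smaller than a verbatim copy of the web, which is the sense in which it has to be doing something more than Ctrl+F.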
As others noted, 90th percentile isn’t that impressive; it’s about a 165, which is far below the median at all the top schools. It probably missed about 20 questions.
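Rough math behind the “missed about 20 questions” figure, assuming a typical raw-to-scaled conversion chart (the exact mapping shifts a bit from one administration to the next):

```python
# LSAT back-of-envelope: scaled score of 165 -> questions missed.
# Assumes ~100 scored questions and that a 165 typically needs roughly
# 80 correct; both numbers vary slightly from one test form to another.

scored_questions = 100
raw_score_for_165 = 80      # ballpark from published conversion charts

missed = scored_questions - raw_score_for_165
print(f"missed roughly {missed} questions")   # ~20
```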
A 'thing' that can recall information far better and faster than humans and can now breeze through relatively advanced tests of reasoning competency is great evidence of transformative impact.
Large parts of very skilled jobs are open to automation now or in the reasonable future (a few years).
You make a great point about the risks of these models out in the wild -- how vulnerable will they be to malicious actors (for one)?
GPT-4 performs as well as an elite high school student bound for a top-20 college, across a wider range of academic disciplines than that hypothetical student could master. I think that's pretty impressive.
I didn’t go to a top 20 undergrad and I would put up my scores across any of the SAT, LSAT, GRE, GMAT, or CFA exams against what that little vacuum cleaner brain could muster.
Sure, what about BC Calc, AP Chem, AP Physics, US Biology competition exams, etc.?
Remember, GPT-4 wasn't trained specifically on these exams; this performance fell out of the general training on large corpora of text.
I am not claiming these models are nearly as competent as humans, but being able to pass a wide range of advanced written exams speaks to their abilities at information recall and (yes) reasoning.
Yeah, but you cost $200k/year to employ and GPT is free, so...
I didn’t take the science exams, as I took the local university physics rather than AP physics my senior year and was at a rural high school without other AP sciences, but I entered college with like 21 AP credits (including BC calc) and was a math major. I also performed very well in math competitions from middle school to college (Putnam exam).
I do understand the fascination, but my grandparents’ computer chess program (on MS-DOS) could beat me back in the early 90s, and I’m sure over 90% of other people as well, yet it wasn’t until Deep Blue beat Kasparov that people started making a big deal about it. I’ve seen the text that this program produces, and while very logical and coherent it kind of feels sterile; will it ever be able to produce prose or be creative at the level of great authors? I doubt it, even many decades from now. Machines can be much better than humans in many technical tasks even beyond rote computation, but in creativity and true holistic reasoning I don’t think they will ever be a match for the human brain.
Why are we supposed to be impressed that something working off an archive of essentially all the information on the internet is able to pass standardized tests? It should have gotten a perfect SAT and ACT score at the very least. (I’m not 100% sure how the bar exam is scored, but with all of the legal knowledge ever put on the internet, I’m sure 90th percentile is pretty pathetic.)
If you were to make it a fair comparison, in which the LSAT and bar exam candidates also had full access to any information they wanted, would the AI program still beat most of them?
An attorney is basically a glorified paper pusher; it's a fluff job, like an administrative assistant. Of course it'll be one of the first jobs AI makes redundant. Thank god. I hate lawyers.