On GPT-3: meta-learning, scaling, implications, and deep theory. The scaling hypothesis: neural nets absorb data & compute, generalizing and becoming more Bayesian as problems get harder, manifesting new abilities even at tri...
I didn't read through all of that, but like I said modern "AI" is pure prediction. Maybe you think that the causal reasoning that humans do is also "pure prediction", but I don't think it is. We have causal models of how the world works, which allows us to navigate novel situations. Current iterations of AI can't do that, and I don't think any number of parameters will allow them to do that.
The calculator replaced human calculators from days of old. But it did not replace engineers. Or accountants. It changed these professions.
BTW, I taught MATLAB this past fall, and did just fine. I gave engineering students really challenging and interesting projects - like using eigenvectors to solve a linear, first-order system of DEs - and had them write the code to fully automate the solution, given initial values, parameters, etc. I wrote part of the code for them, and had them finish it. Not a single problem with students using AI. Another instructor gave students trivial projects, and consistently struggled to stop students from using AI to write code.
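For anyone curious what that kind of project looks like, here's a minimal MATLAB sketch of the eigenvector approach (the matrix, initial condition, and variable names are illustrative assumptions, not the actual assignment): for x' = A*x with a diagonalizable A, expand x(0) in the eigenbasis, and each coefficient just evolves as exp(lambda*t).

% Solve x' = A*x, x(0) = x0, by eigendecomposition (assumes A is diagonalizable).
A  = [0 1; -2 -3];         % example system matrix (eigenvalues -1 and -2)
x0 = [1; 0];               % example initial condition
t  = linspace(0, 5, 200);  % time points at which to evaluate the solution

[V, D] = eig(A);           % columns of V: eigenvectors; diag(D): eigenvalues
c = V \ x0;                % coefficients of x0 expanded in the eigenbasis

% x(t) = sum_k c_k * exp(lambda_k * t) * v_k
% (c .* ... uses implicit expansion, so this needs MATLAB R2016b or later)
X = real(V * (c .* exp(diag(D) * t)));  % 2-by-200 array of solution trajectories

plot(t, X); xlabel('t'); legend('x_1(t)', 'x_2(t)');

The real(...) just strips roundoff-level imaginary parts when A happens to have complex eigenpairs; the students' job would be generalizing something like this to arbitrary n, initial values, and parameters.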
Simpleton accountants that can only fill out forms will no doubt be jettisoned. Good accountants that can solve difficult problems will always be in demand.
Here is the way tech has worked in the past. Routine stuff is eliminated. More work is created on the other end of things.
you are trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking and unfortunately all of your mistakes have failed to cancel out
I have closely followed the Russian invasion of Ukraine every day since Russian armor rolled into Donetsk. The proliferation of drones in combat is horrifying, but it's only the beginning.
Hordes of drones have become the norm for war. In response, electronic warfare has been ramped up to the point where the only way you can reliably get your drone to succeed is to give it an AI brain and no way of listening to the EMF signals around it (which could be corrupted).
But swarms of small uncontrollable death drones isn’t even the worst part.
Eventually, they will get advanced enough to self-replicate and travel farther with advanced batteries. And get smaller. Within a century or two we could have swarms of locust-sized death bugs that can cross continents and are entirely invulnerable to all known forms of weaponry, due to their numbers. And maybe, just maybe, countries will try to stop the spread of this tech (like they did with nukes), but rogue actors like Russia, China, Iran, etc. will still advance it.
Humanity is doomed. It's worse than nukes. And becoming interplanetary won't save us either. Imagine a spacecraft missile the size of a Mars rover that you launch at another planet, full of tiny, extremely intelligent, robust AI drone bugs. All they do is replicate; within hours a swarm of millions will be created. Planet sterilized. Rinse and repeat.
This is our final “Great Filter”. The only way to prevent this would be to completely lock down AI and robotic development. Maybe we need a “Cult of Serena” (anti-technology movement in the Dune novels)
Yet another probable "computer-science" know-it-all.
How exactly is a battery-powered drone expected to self-replicate and still operate efficiently and effectively? You are going to include a portable manufacturing plant in the drone, and a way to acquire the right raw materials, and... wasting my time here
I realize it's page 3 or 4 now, but you need to be called morons. Study a REAL science, or math, like normal smart people.
He's probably referring to replacing the software originally loaded by humans for autonomous operation with a more dangerous altered version, which is then replicated and passed on through the swarm of autonomous drones.
Nukes: 0 possible net positive impacts
AI: solved protein folding (opening many doors for drug-discovery RL, with the bottleneck being human clinical trials), huge improvements w.r.t. shrinking the search space of materials science, literally a Rosetta Stone for language that is currently helping us decipher long-lost scrolls preserved as nothing but carbonized papyrus (basically ash cylinders), and it can act as a real-time babel fish to translate any known language of the world in a matter of seconds. And so much more on the way with robotics, AI that controls plasma with magnets (an essential technique on the way to fusion energy), ...
Obviously AI is gonna be used for a lot of terrible things as well, but the tech itself is neutral, unlike a bomb that can only be conceived for destruction. I for one anticipate the first robotic four-minute mile.
Generics such as Google Gemini, ChatGPT, Apple Ferret, etc. are provided to the public for widespread use.
If you want a specific system to work for your application, say pickleball, you'll need a team of extreme AI experts to piece the open-source components together, then train and test the system for you.
You'll need to regularly repeat the process as algorithms, hardware, neural networks, data sets, etc. evolve.
Only the CIA, NFL, Sony Pictures, etc. can afford the kind of talent you specify.
The rest of us are stuck with vanilla ChatGPT, Adobe Firefly, Apple Ferret, and IBM Watson.
Even then popular AI has wiped out jobs at National Geographic, Sports Illustrated, Los Angeles Times, NCAA Media, etc.
The tone of the OP is the real problem. He goes from acknowledging that he “used to be a doubter” to accusing those who currently doubt of being “uneducated morons”.
AI is getting very close to being able to solve abstract problems without using the material it was trained on. This is the key point. Soon it will no longer rely on data inputs from humans; it will be able to reason on its own.
It's impossible for users to train and test their own data. 99% of us got a C- in word problems in high school algebra but easily aced the substitution problems. How in the f do you expect users to customize AI on their own to make AI useful? The public Google Gemini and the public ChatGPT don't allow users to train and test at all. You have to find an engineering student or pre-med student to do it for you.
Case in point - this morning, I had an idea for a simple fitness app. I told GPT-4 in plain English what I wanted it to do and it wrote the code for me and all I had to do was deploy it. No coding required.
Let me repeat - it wrote the code and created the app for me.
^I think this is interesting. Can you please post a link to the app, for evaluation?
The most telling part of this entire thread is that this question never received a response.
For all of the hype, I have yet to see anything beyond simple "write me an algorithm to do xyz"-level programming come out of an AI tool, and even then it needs to be carefully reviewed by someone who knows what they're doing. Current AI building a full application, especially one that looks ok and is intuitive to use or integrates with other platforms, is laughable.
AI recently decoded the 2,000-year-old Vesuvius scrolls, charred by volcanic ash from Mt. Vesuvius. The scrolls were written by the Romans before Rome was conquered by the barbarians, aka the Huns from the Xiongnu north of China.
AI isn't as good as you make it seem. For example, Sora can make great B-roll footage when you just want a drone shot. But what if you wanted to make a customer testimonial video? You still need to fly out to the customer's location, plan the script, and get talented videographers, lighting equipment, and editors. Not to mention the intangible aspects of a video that tells a story and communicates the exact message that your company wants to convey to other customers.
You can't write a prompt in Sora that says "make a customer testimonial video of X customer and make it look like it's that person talking." It's also not iterative yet. For example, if you feed it a prompt and it doesn't give you what you want, you can't control and modify the output the way you could if you were, say, a highly skilled After Effects specialist.
The only thing that Sora replaces is stock footage that you might purchase on Shutterstock.
I think this is pretty close to my take. The thing that had to sink in with Sora is that this is as bad as it's ever gonna be, ever again. It's so far past the uncanny valley that I was shocked. A lot of those videos, I wouldn't have been able to tell were AI unless I was looking for it. Some of them, I don't think a human could tell if it was AI or not. And they show you videos that have mistakes, and that shocked me even more. There was one where it looks like dogs are popping into existence out of thin air, and everything else in the video looks so realistic that it just looks like a good editing trick.
The other thing that's worrying is how big of a step we took in 1 year. A year ago, we were all laughing at that Will Smith eating spaghetti video, and now, there's never really a reason to buy B roll footage again.
I don't think we'll lose all videographers or animators or video editors, but Sora as a product is absolutely gamechanging to that field. I think it'll be similar to how musicians like Billie Eilish can blow up from their bedroom now, because they can run studio-quality software on their computer, and they can make studio quality music with a keyboard and a microphone. They didn't need to hire a producer, session musicians, or spend money renting a space to record, because they are the producers, session musicians, and they record in their bedroom.
Seeing what Sora can do, I don't think we're too far off of being able to upload a book, and getting back an animated movie of it in any animation style you want. I doubt it'll be a good adaptation, but I think it'd be much easier to refine that than to make a whole movie.
I have taught AI and I think that mattioco is closer to the truth, especially with his reference to humans having causal models. However, that is only true for certain tasks. Some tasks require brute force, where deep learning and LLMs have an advantage, but they are not reasoning or understanding cause and effect. Some tasks require pattern recognition, such as predicting the next word in a sentence. There is a whole category of AI that was used from the 1950s until deep learning arrived in 2006, and it is also very important.
I think the best way to test your arguments is to use specific tasks for accomplishing a goal, such as developing a self-driving car or winning the NCAA basketball bracket. So far your arguments are Al Sharpton kabuki dancing.
With technology, I work with S-curves, which means separating the hype from the true state of the engineering. When I get a cocky student or an argumentative person, I use the Socratic method and real-world problems. For example, for self-driving cars, explain how you think AI perceives an object and makes a decision, or how it would solve the NCAA bracket. Let's see if you've got it goin' on or if you are bloviating gas bags.
AI basically killed videographers, drone companies, animators, video editors, photographers, graphic designers and a bunch of related careers overnight. There's no future in those careers; they have about 6-12 months left of viability.
Wrong.
Because society is not going to accept AI-generated (all of the things you listed) in 6-12 months. 6-12 years, different conversation. WTF do you see happening in 6-12 months that's that massive? Nothing.