@friketje
Is he talking specifically about passive, finite-system, trained generative AI, or speaking generally about them in the examples?
Compare it to this: gaming rasterization technology is reaching its limits. That doesn't mean gaming visuals won't continue to improve, just that the current majority implementation of it is reaching the peak of what it can do alone. Other new techs will advance it, but the current form is getting to the point of diminishing returns.
Yeah, it will already have a lasting impact in pretty much any industry you can imagine.
I don't think we're close to plateauing though, the AI revolution has only just begun. But because all models are based on math, I don't find it that far-fetched that (some of) its limitations can be calculated as well.
Because all current models rely on training data, scaling is pretty much the only way to improve the results. And there will always be certain subjects that are over- and under-represented in training data, unless you train it on "everything". At a certain point there will be so much data that the amount of energy and storage required to process it will become so immense that the diminishing improvements won't be worth the investment anymore.
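The diminishing-returns point lines up with the empirical "scaling laws" literature, which found test loss falling roughly as a power law in model size. A minimal sketch of why each 10x of scale buys less than the last (the constants below are illustrative, not authoritative):

```python
def loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Illustrative power-law scaling curve: L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

# Each 10x jump in parameter count buys a smaller absolute loss improvement:
prev = None
for n in (1e9, 1e10, 1e11, 1e12):
    current = loss(n)
    gain = "" if prev is None else f"  (improvement: {prev - current:.3f})"
    print(f"{n:.0e} params -> loss {current:.3f}{gain}")
    prev = current
```

The shrinking improvement per order of magnitude is exactly the "diminishing returns" above: at some point the marginal gain no longer justifies the energy and storage cost.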
In my opinion, generative AI will never reach the AGI/ASI stage.
There must have been a door there in the wall, when I came in.
Truly gone fishing.
The models will change. It's not all about the amount of data you put into them. Google "multi-token prediction", for instance.
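For anyone who doesn't want to google it: multi-token prediction trains the model to predict several future tokens at each position instead of only the next one, using extra prediction heads. A toy sketch of what the training targets look like (a deliberate simplification, not a real implementation):

```python
def multi_token_targets(seq, pos, k):
    """The k future tokens a k-head model is supervised on at position `pos`."""
    return seq[pos + 1 : pos + 1 + k]

tokens = ["the", "cat", "sat", "on", "the", "mat"]

# Plain next-token prediction sees one target per position;
# a 3-head multi-token model is supervised on three at once:
print(multi_token_targets(tokens, 1, 1))  # ['sat']
print(multi_token_targets(tokens, 1, 3))  # ['sat', 'on', 'the']
```

The point being made in the post: improvements like this change the model, not the amount of data fed in.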
Current AI models are kinda stupid. You need a high-end GPU and all you get is some text that even takes time to process. It's very inefficient. It will improve. New models will be more efficient, using better hardware, with more budget available (because the better the AI, the more value), and at some stage AI can improve on its own.
But saying the AI improvement is just about feeding it more data and training it more, that's just not true.
Last edited by friketje on Fri, 10th May 2024 20:51; edited 1 time in total
This is absolutely just the start and first whiff of what this will look like in the future. Companies aren't admitting it, but AI is one of the reasons why we're seeing so many mass layoffs.
From custom hentai art to killer robots for the military
Yeah, it certainly will advance. But the current gen of generative "passive/dumb terminal" AI will only go so far before hitting a wall, as it never has any new info added to it that appears organically. It merely looks at a lot of stuff and, via if/then/else heat-pattern ratios, vomits new (but limited to the input data) patterns based on all that stuff.
Whatever the next iteration of AI is, beyond what we have now of simply "take all this data, pattern-group it, and give me a unique pattern based off it", will be the next step on the stairs to AGI.
The only thing I use AI for these days is Topaz Video AI to upscale, increase framerate, and filter lower-res/quality material, like bonus material from DVDs with shows from the '80s (Red Dwarf etc.) and '90s.
voice response time average of 310ms
that's pretty nuts
that was perhaps the biggest problem with AI voice communication at the moment (since ML basically solved the issue of "understanding"): you had to wait seconds after saying a thing to get a response, which is awkward and slow
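The seconds-long delay came from the cascaded design of earlier voice modes: speech-to-text, then a text model, then text-to-speech, each stage waiting on the last. A back-of-the-envelope comparison (all per-stage figures below are invented for illustration; only the ~310 ms number comes from the announcement quoted above):

```python
# Hypothetical per-stage latencies (ms) for a cascaded voice pipeline.
cascaded = {
    "speech-to-text": 900,   # transcribe the user's audio
    "LLM response": 800,     # generate the text reply
    "text-to-speech": 1100,  # synthesize audio before playback
}
# A single end-to-end audio model skips the hand-offs entirely.
end_to_end_ms = 310

total_cascaded_ms = sum(cascaded.values())
print(f"cascaded: {total_cascaded_ms} ms, end-to-end: {end_to_end_ms} ms")
```

Even with generous per-stage numbers, the hand-offs add up to multiple seconds, which is why the old voice mode felt so slow.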
Likot Mosuskekim, Woodcutter cancels Sleep: Interrupted by Elephant.
yeah, that was terrible, didn't feel futuristic at all. Tons of new videos on OpenAI's channel btw.
What a big win this must be for blind people! They can just point the mobile at whatever and it can describe the surroundings, read for them, warn them that the next step is a 500 ft drop and not advisable, etc. With a permanently mounted camera they'd have both eyes and a permanent "friend"/assistant. It's completely nuts that it's free. For now.
Sam Altman said they wanted to release GPT-5 in stages so people can get used to the new technology. This is probably exactly that, a preview of GPT-5.
Not fucked yet, it's still static, trained pattern finding (generative) based on finite data. No thinking or rationalization of its speech, or understanding of what it says.
I myself don't see what the scare is, other than they added a synth voice to a pattern-finding program that is simply regurgitating applicable patterns back, using the voice.
Not that it isn't impressive, but it's not concerning either, yet anyway. GPT and all its similar forms in image, audio, and video, which are all data-based matrix points of limited pattern finding within the scope of the data given, are not the AI that is going to fuck us. It's only as "dumb/smart" as the data dump you give it to selectively chicken-peck out words limited to what it was given.
Maybe it's the combination of it "talking" in context to what was said that makes people feel it is more "alive", since it triggers the parts of the brain that make us want to believe it is smarter, more sentient, and more capable of thought than it is?
To me it's the same as the pattern misfiring that has us finding/seeing faces in things that don't have them, just the audio and textual version of it.
The improvement of AI in just a couple of months is insane, and this pace is only going up. The pace of technological progress has been exponential since modern times: this will only go faster.
The problem is that this is going so fast and will replace jobs at a pace that people can't adapt to. And a lot of other weird stuff will happen that we can't even imagine.
Not good, how so? Lots of other things we invented in history have replaced entire fields of jobs, replaced people, and replaced older tech. Humanity was just fine.
At the rate it's going now, unless we somehow miraculously accidentally make an AGI that we cannot turn off, disconnect, or contain, humanity will be fine. Because any of the AI we have now is surprisingly mundane and docile in its threat to us of its own accord. It literally can do nothing on its own; it's just a pattern machine as it stands now, just a box that uses a 4D matrix like a Scrabble game, that's good at knowing patterns of words to use when fed other patterns of words.
Nowhere in any of the AI we have is it a threat to us. We are making it a threat to us by training the dummy terminal to be better at patterns than we are. It taking over in a way where it does so despite us trying to stop it? On a scale of 1-10, it's a 0.
It can't do anything on its own that we don't make it do, and then it can only do that very specific thing we made it do.
And what we make it do is limited to what we tell it to know; it cannot find new info on its own to learn from. It's not even remotely close to being structured in a way where it can start "doing its own thing". The entire foundation, structure, and programming of it is WAY too limited to even dream it can.
One day? Maybe. But that's a loooong way off.
It being able to awkwardly talk to you, or to another copy of itself, via pattern finding (where it gets stuck on specifics that are self-reinforcing, like how industrial-looking the area is, despite other questions in the example video).
Compared to being able to "vastly take over jobs" like nurse, caregiver, personal manager, CEO, coal miner, or interactive and engaging school teacher with an individual understanding of each child's needs and the social interaction to encourage learning. Or police officer who takes each situation in hand: a car chase vs. a ticket vs. a mental health crisis vs. a shoplifter... or whatever.
Or thousands of other jobs that are more than "vomit text in a sentence pattern in context to what words you were fed".
That is A LONG way off from what it can do now. We are at the stage of jumping off buildings with paper wings, when to do "all jobs" it needs to be the equal of a space shuttle that can get to Mars purely on nuclear power, pilotless, with passengers on board.
Just the shock alone of people losing their profession will be huge.
And in no way are there enough jobs to fill up all the gaps.
This is not just a new technology, it's machines taking over human cognitive labour.
Perhaps it will work out; it could also be a disaster. It will be very rough though, society can't adapt as fast as technology is going atm. The world as we know it will end with a bang and it will not be a pleasant ride.
edit:
back to GPT-4o:
At around two minutes the guy explains that it's in fact just a new model; it combines different models in one.
And OpenAI decided to name this GPT-4o, and not GPT-5, which is coming later this year. GPT-5 will be insane.
My personal experience with ChatGPT is that it works well in very common areas, let's say high-school level knowledge, whereas it becomes worse the more specific the information needs to be. It becomes unreliable and, especially when you're not an expert yourself, untrustworthy. This will not be mitigated easily because, as some video above states, there is simply not enough training data on certain subjects.
I think it will really bring a lot of streamlining and some jobs will be lost, but it's also overhyped because these companies sell a product.
I cringe, whenever I hear people say AGI.
"Enlightenment is man's emergence from his self-imposed nonage. Nonage is the inability to use one's own understanding without another's guidance. This nonage is self-imposed if its cause lies not in lack of understanding but in indecision and lack of courage to use one's own mind without another's guidance. Dare to know! (Sapere aude.) "Have the courage to use your own understanding," is therefore the motto of the enlightenment."
It's not relevant what AI can do now, it's the speed things are improving. That's the scary stuff.
Sam Altman said recently he considers GPT-4 to be "stupid" and even says he is embarrassed by it. Sure, this is also a marketing thing, but it's probably true to some extent. Wild stuff is already there at the OpenAI office.
Edit: your answer concerning the level of intellect compared to humans is at around 12 minutes in the vid. GPT-4o is better in most fields than human experts.
i'll call it absolutely wonderful, because it is, until it gets scary, then i'll call it scary. if it ever gets there.
fear mongering is definitely a thing with AI, and with new tech overall; always has been. mobile phones, microwaves, computers, samplers, synths.. the list goes on forever. none of them killed us or harmed us, mostly they changed things for the better.
Last edited by vurt on Tue, 14th May 2024 20:22; edited 1 time in total
Edit: your answer concerning the level of intellect compared to humans is at around 12 minutes in the vid. GPT-4o is better in most fields than human experts.
So are medical books, Wikipedia (usually), and library sections on that subject.
They are not a threat any more than these are... why? Because all it's doing is vomiting data it found in those places.
All it's doing is rewording what it has from those references. It doesn't "know" any of it (as in grasp, or even know what "the words" are). There is zero intellect in it. If there were, then an encyclopedia or medical book would have intellect too.
Saying it's better than most experts is like saying an encyclopedia is better at correct facts on something than the median student of the subject.
Last edited by |DXWarlock on Tue, 14th May 2024 20:29; edited 1 time in total
What we do as humans is not a lot different, at least in most fields. Most of the day jobs around don't require a lot of creativity.
Last edit about the vid: at 20 mins an expert says he thinks this is indeed an early version of GPT-5.
The difference is a LOT bigger. We can infer, understand the context and the meaning, and use wisdom and insight to come to conclusions outside of the data given.
If you think being a doctor (or teacher, or scientist, or well, anything that requires a college degree) is simply memorizing a lot of things, with no need to apply the knowledge by truly grasping what it means, to come to thoughts and ideas that expand on what you learned for each different situation (not just know it), that is WAY off.
"Education is not the learning of many facts, but the training of the mind to think"
This can't do any of that. It can't "think" (for lack of a better term) an original thought, or even grasp what it is typing.
I'm a doctor, and it's simply learning a lot of things combined with some experience and some practical and social skills.
Give a nurse a smartphone with GPT 6 or 7 and an ultrasound device, and they can do 99% of my job better than I do.
I'm not gonna retire in my current field, 100%. I'll probably end up a nurse or just retire early. It also depends on how the economy evolves.
Not sure who does that, but if they do... stop doing it that way. If your doc is like that, find a new one. There is more than one doctor in your area.
No, instead I go to a personally vetted doctor that I trust, who helps me and consults with the other doctors in the office if he is unsure. If still unsure, he sends me to specialist(s) until he/we know for sure.
I don't blindly open Google, search "doctor", throw a dart at the board, and stick with whoever it lands on. It took me 5 years of trying different doctors to find a primary care doc I wanted to keep.
My god, you guys need to shop around for doctors. There has to be a good one near you, right?
You're paying money for it; you wouldn't return to a restaurant that constantly serves you the wrong food, or a flower shop that ignores what you ask for/need and just gives you whatever they know how to make.
Why the hell would you visit a doctor a second time if he did that even once? It's literally your life you are spending your money on.
Like my shoulder pains (one of the reasons I wanted to start going to a doctor regularly). One of them just tossed me a prescription for some generic pain stuff and told me to get back to him in 3 months. I told him, "Nah man, keep it, don't send it to be filled, I'm good. I'm going to another place; you can cancel my 3-month appt also."
If you're not going to look into the actual cause of my problems, why am I paying you my money?
No, instead you go to old doctors who're too prideful to admit they're unsure and just say whatever.
so true, I've also noticed most GPs just plug your symptoms into an app and prescribe whatever the app tells them to.
So wait, the better choice instead of going to a doctor who just plugs stuff into an app is to remove the doctor and just plug stuff into an app (GPT)? Isn't that keeping the very part we complain we don't like them doing?
So don't remove the app part, remove the doctor part? It's complaining that they are doing what was just said could be the better option.
I swear, if I ever see my doc open a "help a doc out" app to tell him what he needs to do, he is my ex-doctor. I'll assume he was a barely-passing C+ college graduate and move on from him.
I wouldn't hire a carpenter who watches YouTube videos on how to build something while he is building it in my house. Why would I stick with a doctor who needs a tutorial app to tell him what to do during my visit??
If you go back to him, that's on you.