Try asking it who Winnie the Pooh is (it will start generating a reply but eventually won't give you the answer). Ask who the best president is; it will start talking about China and Xi, but eventually stop generating the answer.
Try asking it who Winnie the Pooh is (it will start generating a reply but eventually won't give you the answer). Ask who the best president is; it will start talking about China and Xi, but eventually stop generating the answer.
I tried this just now and it answered both perfectly, without mentioning China
But if you ask it such things directly:
Ah, improved then. My friend showed me these simple queries this morning while we were out eating; for the first one it didn't finish processing (while scribbling out the answer), and for the other it started talking about Xi until it couldn't process any further.
🇺🇸 OpenAI:
- 10 years old
- 4,500 employees
- raised $6.6 billion
🇨🇳 DeepSeek:
- less than 2 years old
- 200 employees
- less than $10 million capital used
DeepSeek developed and open-sourced an equal or better product and released it for free, likely killing OpenAI's business model.
Da hell is this
I think DeepSeek is part of a bigger AI program coordinated by the Chinese state.
China is making huge AI advances, like with Unitree; this also came out of nowhere:
And DeepSeek is released open source and just happens to be as good as GPT o1, which costs a $200-a-month subscription...
This has only been released to hinder Western commercial AI devs. They probably have something better internally. And that isn't a project costing 6 million dollars.
AI development is going so fast. Every new model released beats anything prior. And since people have figured out how to develop AI, it doesn't seem that hard to do.
OpenAI, Anthropic, Alphabet, Meta, Grok, and now the Chinese: they are all competitive.
We probably won't be talking about DeepSeek for much longer, at least not this version. Something else will be hot. Perhaps o3-mini, releasing this month.
The only news is that the Chinese have joined the race.
Insane tech though. Mount a gun or grenade on it with auto-targeting, mass produce it for $20K apiece. Putin would sacrifice his own children for an army of these bots.
Also consider how many years Boston Dynamics has been try-harding with their bots. They sell a walking dog.
Insane tech though. Mount a gun or grenade on it with auto-targeting, mass produce it for $20K apiece. Putin would sacrifice his own children for an army of these bots.
That price and those requirements don't go together. You don't just 'slap on a gun' and get a Terminator.
First, you need to make reliable auto-targeting and teach it how to use it in 1001 types of combat situations, telling friend from foe in the middle of battle, and make sure it's damn reliable at it.
We don't even have that now; if we did, we would have auto turrets on mounts guarding bunkers and forward foxholes while people slept.
That in itself is sci-fi for now.
Also, teach it how to maneuver and take defensive/offensive action in a combat setting. How to analyse its surroundings, determine what is good reliable cover, how to take cover while being shot at and how to fire from that cover.
Not to mention logical reasoning of how to approach dynamic situations of enemies in cover and how to best breach it to get to them, and not just run right at them.
Right now, if you slapped a gun on it and sent it to battle, whole packs of them would be mowed down as they ran (very well, mind you) over open, bumpy terrain directly to where they were supposed to go, with no attempt to save themselves, no awareness of being in combat or being shot at, and no ability to figure out what best to do given their surroundings if they were.
It simply being able to keep its balance on rough terrain while moving on two/four legs is FAR from 'human-like battle-ready bots'.
It simply being able to keep its balance on rough terrain while moving on two/four legs is FAR from 'human-like battle-ready bots'.
Sure, it's not a battle bot yet. Give it some time. I actually see this happening in the near future given the speed at which things develop. I think an auto-targeting gun isn't even that far off from current capabilities.
If this thing hits the market it can already be used as a suicide bot. Dunno if that's worth the cost, but for sure it will be considered.
What? Why? Why pay $16k for a bot to blow up when loyal followers are free...
And if there aren't any willing followers, large RC cars with cameras are cheap (and faster).
Hell, cheap junk cars that run are a couple thousand dollars and can pack a LOT more boom-boom than the bot can carry. I can't see any reason to pay that much for it when dozens of equally viable options exist that are far, far cheaper.
And sure, 'give it some time' can be applied to anything: time travel isn't possible, but give it some time. We can crystal-ball anything given an undetermined length of time.
And auto-targeting is a ways off. We can't even do it remotely well for armed forces' real-time headcounts in the field, much less determining who is friend, who is foe, and who is a threat vs. not.
Have it point at and track A person? Sure.
Pick out people on a moving, hectic battlefield, figure out who is ally vs. enemy and their threat level to prioritize targets, separate non-combatant soldiers from combatant ones, determine if any are civilians, and know how to avoid friendly fire when engaging enemies under cover? LOOONG way off.
You watch too much sci-fi, I think, and oversimplify what 'targeting combatants' entails to do even remotely safely with a low error/accident rate. All it takes is shooting one friendly, and every soldier will refuse to use it, as it's a tool they cannot control that may shoot them too.
Thank fucking God.. Or developers. They've removed the DEI shit from ChatGPT, so now there are no warnings or lectures for every tiny thing. I knew competition was a good thing.
A newly proposed law could see people in the US face significant fines or even jail time for using the Chinese AI app DeepSeek.
The bill, filed by Republican Senator Josh Hawley, aims to “prohibit United States persons from advancing artificial intelligence capabilities within the People’s Republic of China, and for other purposes”.
The proposed law would prevent the importation of “technology or intellectual property” developed in China, with anyone violating the restrictions facing up to 20 years in prison. Individuals could also be fined up to $1 million, while for businesses it could reach up to $100m.
Although it does not mention DeepSeek by name, it comes just a week after the Chinese chatbot became the most popular AI app in the US, causing US tech stocks to plummet.
"Enlightenment is man's emergence from his self-imposed nonage. Nonage is the inability to use one's own understanding without another's guidance. This nonage is self-imposed if its cause lies not in lack of understanding but in indecision and lack of courage to use one's own mind without another's guidance. Dare to know! (Sapere aude.) "Have the courage to use your own understanding," is therefore the motto of the enlightenment."
I could use the OpenAI o3-mini for free yesterday with the app. Insane, it's better than human intelligence. Also, the standard 4o interaction seemed so much better than before; the model asked extra questions to specify the question before giving a final answer. I think ChatGPT is superior to googling now if you need to look something up.
This is the big moment for AI.
Better than human intelligence... The other day I couldn't get it to render basic form data for an HTML table without randomly skipping segments, no matter what model I used (paid account). I fed it PDFs (the models that would accept them), images, plain text, etc. No dice.
It's definitely better than Google, considering it can google multiple sources at once itself, but I often see gaps with it. It scares me what it could be used for when it still makes basic mistakes. Sometimes it completely shits itself and cannot answer questions at all. I will save an example next time that happens.
@friketje You keep conflating the fluid output of its transformations (the T in GPT) with human intelligence.
LLMs and such are deterministic. That is a huge difference, and why it is not intelligent like humans and doesn't 'think'. You keep forgetting that part. Algorithms are not intelligent, but they can trick us into thinking they 'think', because they can produce things that appear 'non-linear' in deterministic ways.
Sort of like people attributing 'emotion' to those Useless Machines [the ones that turn a switch off when you turn it on] when someone makes one appear to have attitude. People see it act human, and extrapolate human traits of 'why' to it.
Your line of thinking ascribes traits to it that it doesn't have: pondering, rationalizing, and having its own thoughts that generate more, since that's what humans do and it emulates human output. But it does not use the same mechanism to get to that point.
Here are the basics of how it works. It's not even remotely close to human intelligence. It's not even... intelligence at all. It's statistical mathematical probability in a box. It's by nature deterministic and cannot be anything else.
If you're curious how the transformers work, here's another video. It's not intelligent at all.
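To put that determinism point in code terms, here's a minimal sketch. The 'model' is just a made-up lookup table of next-token scores standing in for a transformer's frozen weights (all data here is fabricated); with greedy decoding, the same prompt produces the same output every single run. Chat apps layer sampled randomness on top, but that randomness comes from an ordinary RNG, not from the model 'thinking'.

```python
# Toy sketch: a fixed lookup of next-token scores stands in for a
# transformer's forward pass (hypothetical data, not a real model).

scores = {
    ("the",): {"cat": 2.1, "dog": 1.9, "end": 0.1},
    ("the", "cat"): {"sat": 2.5, "ran": 2.0, "end": 0.3},
    ("the", "cat", "sat"): {"end": 3.0},
}

def generate(prompt):
    tokens = list(prompt)
    while tuple(tokens) in scores:
        logits = scores[tuple(tokens)]
        # Greedy decoding: always pick the highest-scoring token.
        # (Softmax is monotonic, so argmax of probabilities == argmax of scores.)
        next_tok = max(logits, key=logits.get)
        if next_tok == "end":
            break
        tokens.append(next_tok)
    return tokens

print(generate(("the",)))  # ['the', 'cat', 'sat'] -- every run, forever
print(generate(("the",)))  # run it a million times, the answer never changes
```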
I really wish they had not called it "artificial intelligence"; people are now equating it to that. Same as how "UFO" mistakenly means aliens now, when it really means 'unidentified'.
@friketje You keep conflating the fluid output of its transformations (the T in GPT) with human intelligence.
LLMs and such are deterministic. That is a huge difference, and why it is not intelligent like humans and doesn't 'think'. You keep forgetting that part. Algorithms are not intelligent, but they can trick us into thinking they 'think', because they can produce things that appear 'non-linear' in deterministic ways.
You are wrong here and this argument has been debunked by experts over and over.
A good example is this: consider an AI doing image generation. You feed the AI pictures, and with reinforcement you eventually get good pictures. A monkey doing a trick?
No
This model had to learn loads of stuff about our 3D world from 2D pictures: lighting, perspective, physics, and, not unimportantly, how to generate a picture that makes sense to the human viewer.
The AI had to learn all that just by looking at 2D pictures.
The amount of abstract thought required for this is insane and out of reach for humans.
For comparison: in painting, humankind only figured out perspective in the 15th century, and that's just the opposite task: replicating a representation of our 3D world in a 2D picture. That should be easier, yet it took centuries to achieve and can only be done by the most talented people.
You are wrong here and this argument has been debunked by experts over and over.
Which ones? Because everyone I know who currently works on it, trains them, and is in the science of it says it's exactly this.
I train/refine tons of them locally, and this is exactly how it works when I do it..🤷♀️
friketje wrote:
A good example is this: consider an AI doing image generation. You feed the AI pictures, and with reinforcement you eventually get good pictures. A monkey doing a trick?
That is exactly what it is doing. It's billions of iterations of learning the pixel values of billions of images, fed the words (to it, abstract tokens) that relate to each image, and approximating the best general replication of all the tokens you give it into one grid of pixels.
You just see the end result of those billions of 'iterate better/drop worse' training passes. A billion monkeys at a billion typewriters: we let them slap at the keys until Shakespeare pops out, and we make them do that a billion times (training it), saying "keep the ones that randomly wrote Shakespeare and weed out the rest".
And you are equating that to 'monkeys can write stage plays', when not one of those monkeys knew what it typed.
friketje wrote:
This model had to learn loads of stuff about our 3D world from 2D pictures: lighting, perspective, physics, and, not unimportantly, how to generate a picture that makes sense to the human viewer.
The AI had to learn all that just by looking at 2D pictures.
You are using 'learn' in the wrong context. It gets reinforced, on a scale, as to which 'pixel combinations' are favorable or not for a given context. It has NO idea what lighting or perspective, and definitely not physics, 'are'.
Your understanding of how GPT/LLM and Diffusion models work is WAY off.
For example, the word 'diffusion' in Stable Diffusion is literal; it's how it does it. And it is the opposite of intelligence.
You ask it for a "Green hallway perspective":
It makes random noise and adds/removes noise in 'steps', comparing the result to the 1024x1024 images it was fed: tossing out noise that makes it less like them, and keeping noise that makes it come out more like a facsimile of the weighted values of what 'green', 'hallway', and 'perspective' are in its matrix. It doesn't know what any of it is, or what you asked it. It's just trying to make a grid of pixels that has a higher weight toward images similar to the three terms. It has NO idea what the terms mean. It just goes by: these 500,000 different pixel grids (images) are what I use for noise bias when the letters in this order, "HALLWAY", are prompted.
(Not a GREAT explanation, but close enough without getting into the weeds).
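If it helps, here's that 'keep the noise that looks more like the data' idea as a toy Python loop. To be clear: the 'training images' are fabricated stand-in numbers, and real diffusion models use a learned noise predictor, not this hill-climb; it only shows how pixels can drift toward learned statistics with zero understanding involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in 'training images' the model associates with 'green hallway':
# 8x8 RGB grids clustered around greenish pixel values (fabricated data).
train_images = rng.normal(loc=[0.1, 0.8, 0.2], scale=0.05, size=(500, 8, 8, 3))
target = train_images.mean(axis=0)        # the learned pixel statistics

x = rng.normal(0.5, 1.0, size=(8, 8, 3))  # step 0: pure random noise

for step in range(200):
    proposal = x + rng.normal(0, 0.05, size=x.shape)   # jitter every pixel
    # Keep jitter only where it moves a pixel closer to the learned stats;
    # the loop never 'knows' what green or a hallway is.
    closer = np.abs(proposal - target) < np.abs(x - target)
    x = np.where(closer, proposal, x)

print(np.abs(x - target).mean())  # error shrinks toward 0 as it 'denoises'
```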
A great example:
They were trying to train it to identify skin cancer in images (learn the pixel patterns in images of cancer). When they did, it started classifying pictures of tape measures and rulers as cancer images. Why? Because all the cancer images they fed it had a size scale next to the skin in question. So it learned that the pixel pattern of anything resembling a ruler meant an image of cancer. It didn't know what cancer, rulers, or skin were. It just 'knew' these pixel grids highly correlate with what you said the desired pixel grids should look like.
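That failure mode is easy to reproduce with made-up numbers (this is fabricated toy data, not the actual study): give a classifier a weak real signal plus a 'ruler present' feature that perfectly tracks the label, and it leans on the ruler.

```python
import numpy as np

# Toy illustration of the 'ruler shortcut' (hypothetical data):
# feature 0 = actual lesion signal (noisy), feature 1 = 'ruler present',
# which perfectly correlates with the cancer label in the training set.
rng = np.random.default_rng(1)
n = 1000
labels = rng.integers(0, 2, n)              # 1 = cancer
lesion = labels + rng.normal(0, 1.5, n)     # weak, noisy real signal
ruler = labels.astype(float)                # rulers appear ONLY on cancer photos
X = np.column_stack([lesion, ruler])

# Minimal logistic regression trained by gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - labels) / n

print("learned weights [lesion, ruler]:", w.round(2))
# The 'ruler' weight dominates: the model latched onto the scale marker,
# not the lesion. Healthy skin WITH a ruler still comes back 'cancer':
print("healthy skin + ruler ->", 1 / (1 + np.exp(-np.array([0.0, 1.0]) @ w)))
```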
Think of it this way:
I want to generate a single pixel on a screen and make it green. I write a model that takes a random starting pixel color and 'teach' it:
"If R or B go up, drop that step; if R/B go down, keep it; if G goes up, keep it; if G goes down, drop it."
And let it run. Watching the pixel shift around colors in real time until it's (0, 255, 0) doesn't mean it 'knows' what green is. It's just following reinforcement rules.
Expand that out exponentially to 1024x1024 pixels, and tie word tokens instead of RGB values to the whole of all the pixel combinations... it's SORT OF like that. Kinda.
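For the curious, here's that single-pixel example written out literally in Python; it is nothing but the quoted rule applied to random steps.

```python
import random

# Random starting [R, G, B]; the loop 'trains' it toward pure green
# using only the keep/drop rule, with no concept of color anywhere.
pixel = [random.randint(0, 255) for _ in range(3)]

while pixel != [0, 255, 0]:
    channel = random.randrange(3)            # pick R, G, or B at random
    step = random.choice([-1, 1])            # random nudge up or down
    proposal = pixel[:]
    proposal[channel] = min(255, max(0, proposal[channel] + step))
    # "R or B go up: drop that step; R/B go down: keep it;
    #  G goes up: keep it; G goes down: drop it"
    if channel == 1:
        keep = proposal[1] >= pixel[1]
    else:
        keep = proposal[channel] <= pixel[channel]
    if keep:
        pixel = proposal

print(pixel)  # [0, 255, 0] -- it 'found' green without knowing what green is
```

It lands on pure green every time, yet nothing in it has any idea what 'green' is.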
You are wrong here and this argument has been debunked by experts over and over.
Which ones? Because everyone I know who currently works on it, trains them, and is in the science of it says it's exactly this.
I train/refine tons of them locally, and this is exactly how it works when I do it..🤷♀️
Can't find the study about image generation atm, but it is often used by experts to explain how intelligent AI actually is.
In general, people from the industry all say the same about AI: that it will surpass human intelligence fast and that it's gonna be a wild ride.
People claiming otherwise tend not to be part of the industry. It's also not a conflict of interest: Hinton quit his job at Alphabet and is now a full-time whistleblower talking about AI danger. A couple of people who have left OpenAI are doing the same.
People who are downplaying AI capabilities are, in general, non-experts.
Perhaps they believe in human exceptionalism; perhaps they just can't believe how wild things can go.
Maybe one day. But it will take a new foundational type of AI built from the ground up. Our current type (no matter the form) is all deterministically reactive. It won't be the one to 'surpass human intelligence', as it has zero intelligence and isn't built or even structured to have the capacity for it.
Either way, your understanding of how it works is way off. It's so far off that I don't know where to start to help you since it's all wrong.