American AI companies have shown themselves to be massive consumers of money and compute. Billions of dollars later, there is not much to show for it.
But DeepSeek cost about $5M to develop and introduced multiple novel training techniques.
Oh, and their models and code are all FLOSS. The US companies are closed. Basically, the US AI companies are too busy circling each other like vultures.
https://interestingengineering.com/culture/deepseeks-ai-trai...
That's not accurate. The Gemini family of models are all proprietary.
Google's Gemma models (which are some of the best available local models) are open weights but not technically OSI-compatible open source - they come with usage restrictions: https://ai.google.dev/gemma/terms
Ah yeah, the Gemma series is incredible, and while it may not meet the OSI standard, I consider them pretty open as far as local models go. And it's not just the standard Gemma variants: Google is releasing other incredible Gemma models that I don't think people have really caught wind of yet, like MedGemma, whose 4B variant has vision capability.
I really enjoy their contributions to the open source AI community and think they're pretty substantial.
This is obviously false, I'm curious why you included it.
> Oh, and their models and code are all FLOSS.
No?
I think the sweet spot for local models may be around the 20B size - that's Mistral Small 3.x and some of the Gemma 3 models. They're very capable and run in less than 32GB of RAM.
I really hope OpenAI puts one out in that weight class, personally.
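As a rough back-of-envelope sketch (the numbers below are my own assumptions, not measurements from any particular runtime) of why a ~20B model fits in that footprint:

    # Back-of-envelope estimate (assumed numbers, not a benchmark of any runtime):
    # a ~20B-parameter model quantized to 4 bits, plus a few GB of KV cache and
    # runtime overhead, fits comfortably in 32 GB of RAM.
    params = 20e9                 # ~20B parameters
    bytes_per_param = 0.5         # 4-bit quantization
    weights_gb = params * bytes_per_param / 1e9
    overhead_gb = 4               # assumed KV cache + runtime buffers
    print(f"estimated total: ~{weights_gb + overhead_gb:.0f} GB")  # ~14 GB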
The bot has no agency; the bot isn't doing anything. People are talking to themselves, augmenting their chain of thought with an automated process. If the automated process acts in an undesirable manner, the human who started it can close the tab.
Which part of this is dangerous or harmful?
Most companies, for better or worse (I say for better), don't want their new chatbot to be a RoboHitler, for example.
That said, I am happy to accept the term "safety" as used in other places, but here it just seems like a marketing term. From my recollection, OpenAI made a push for regulation that would stifle competition by talking about these things as dangerous and in need of safety measures. Then they backtracked somewhat when they found the proposed regulations would restrict themselves rather than just their competitors. However, they are still pushing this safety narrative that was never really appropriate.

They have a term for this, alignment: what they are doing is running tests to verify alignment in areas they deem sensitive, so that they have a rough idea of the extent to which the outputs might contain things they do not like.
That's not even considering tool use!
If you hook up a chat bot to a chat interface, or add tool use, it is probable that it will eventually output something that it should not and that output will cause a problem. Preventing that is an unsolved problem, just as preventing people from abusing computers is an unsolved problem.
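As a rough illustration of the kind of partial mitigation people bolt on (the tool names and call structure here are hypothetical, not any particular framework's API), you can allowlist tools and validate arguments before executing anything the model asks for:

    # Hypothetical guardrail sketch: allowlist which tools the model may call
    # and validate arguments before executing anything. This reduces, but does
    # not eliminate, the chance of a harmful tool call slipping through.
    ALLOWED_TOOLS = {"search", "calculator"}   # hypothetical tool names
    MAX_ARG_LENGTH = 1000

    def guarded_tool_call(name: str, arguments: dict):
        if name not in ALLOWED_TOOLS:
            raise PermissionError(f"tool {name!r} is not on the allowlist")
        for key, value in arguments.items():
            if isinstance(value, str) and len(value) > MAX_ARG_LENGTH:
                raise ValueError(f"argument {key!r} exceeds the length limit")
        # ...dispatch to the real tool implementation here...
        return f"ok: would run {name} with {arguments}"

    print(guarded_tool_call("search", {"query": "weather in Oslo"}))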
(1) Execute yes (with or without arguments, whatever you desire).
(2) Let the program run as long as you desire.
(3) When you stop desiring the program to spit out your argument,
(4) Stop the program.
Between (3) and (4) some time must pass. During this time the program is behaving in an undesired way. Ergo, yes is not a counterexample to the GP's claim.
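For reference, a minimal sketch of what yes(1) does (assuming no arguments means printing "y"); the window between (3) and (4) is exactly the stretch where it keeps printing output you no longer want:

    # Minimal sketch of yes(1): print the argument (or "y") forever until
    # interrupted. Between deciding to stop and the interrupt arriving, the
    # program keeps producing output you no longer want.
    import sys

    def yes(argv):
        text = " ".join(argv) if argv else "y"
        try:
            while True:
                print(text)
        except KeyboardInterrupt:
            pass  # it stops only once the interrupt actually lands

    if __name__ == "__main__":
        yes(sys.argv[1:])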
That said, I suspect the other person was actually agreeing with me, and tried to argue that software incorporating LLMs will eventually malfunction by stating that this is true for all software. The yes program was an obvious counterexample. It is almost certain that every LLM will eventually generate some undesired output, given that it selects each next token based on probabilities. I say "almost" only because I do not know how to prove the conjecture.

There is also some ambiguity in what counts as an LLM, since the first L means "large" and nobody has given a precise definition of large. In literature from several years ago you will find people calling 100 million parameters large, while some people today refuse to use the term LLM for a model of that size.
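To make the probabilistic point concrete (toy numbers, not any real model's distribution): as long as an undesired token has non-zero probability, sampling for long enough will eventually emit it.

    # Toy illustration: temperature sampling over three made-up tokens.
    # Any token with non-zero probability is eventually emitted.
    import math, random

    logits = {"harmless": 6.0, "borderline": 2.0, "undesired": -3.0}

    def sample(logits, temperature=1.0):
        weights = {t: math.exp(l / temperature) for t, l in logits.items()}
        r = random.uniform(0, sum(weights.values()))
        for token, w in weights.items():
            r -= w
            if r <= 0:
                return token
        return token  # numerical edge case

    counts = {t: 0 for t in logits}
    for _ in range(100_000):
        counts[sample(logits)] += 1
    print(counts)  # "undesired" shows up a handful of times out of 100k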
AI 'safety' is one of the most neurotic twitter-era nanny bullshit things in existence, blatantly obviously invented to regulate small competitors out of existence.
AI safety is about proactive safety. For example: if an AI model is used to screen hiring applications, making sure it doesn't have any weighted racial biases.
The difference here is that it's not reactive. Reading a book with a racial bias would be the inverse, where you would be reacting to that information.
That’s the basis of proper AI safety in a nutshell
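A very simplified sketch of what such a proactive check can look like (made-up data; the 80% threshold is the common four-fifths selection-rate heuristic, used here purely for illustration):

    # Simplified proactive bias check: compare selection rates by group and
    # flag any group whose rate falls below 80% of the highest rate.
    # The data is made up purely for illustration.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, was_selected) pairs."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            selected[group] += int(ok)
        return {g: selected[g] / totals[g] for g in totals}

    def flag_disparate_impact(rates, threshold=0.8):
        best = max(rates.values())
        return {g: (r / best) < threshold for g, r in rates.items()}

    decisions = [("group_a", True), ("group_a", True), ("group_a", False),
                 ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = selection_rates(decisions)
    print(rates)                         # group_a: 0.67, group_b: 0.33
    print(flag_disparate_impact(rates))  # group_b flagged (0.33/0.67 < 0.8)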
That should really be done for humans reviewing the resumes as well, but in practice it isn't done as much as it should be.
And the safety testing actually makes this worse, because it leads people to trust that LLMs are less likely to give dangerous advice, when they could still do so.
It is. It is also part of Sam Altman’s whole thing about being the guy capable of harnessing the theurgical magicks of his chat bot without shattering the earth. He periodically goes on Twitter or a podcast or whatever and reminds everybody that he will yet again single-handedly save mankind. Dude acts like he’s Buffy the Vampire Slayer.
https://moonshotai.github.io/Kimi-K2/
OpenAI know they need to raise the bar with their release. It can't be a middle-of-the-pack open weights model.