
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
625•klaussilveira•12h ago•183 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
928•xnx•18h ago•547 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
34•helloplanets•4d ago•25 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
109•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
10•kaonwarb•3d ago•7 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
40•videotopia•4d ago•1 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
221•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
211•dmpetrov•13h ago•103 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
323•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
370•ostacke•18h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
358•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
478•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
274•eljojo•15h ago•161 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
402•lstoll•19h ago•272 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•20 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
15•jesperordrup•3h ago•8 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
4•theblazehen•2d ago•0 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
13•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
244•i5heu•15h ago•189 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
52•gfortaine•10h ago•21 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
140•vmatsiiako•17h ago•63 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
281•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1058•cdrnsf•22h ago•433 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
133•SerCe•8h ago•117 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•8h ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
176•limoce•3d ago•96 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•20h ago•22 comments

Ollama Turbo

https://ollama.com/turbo
430•amram_art•6mo ago

Comments

turnsout•6mo ago
Man, busy day in the world of AI announcements! This looks coordinated with OpenAI, as it launches with `gpt-oss-20b` and `gpt-oss-120b`
sambaumann•6mo ago
Yep, on the ollama home page (https://ollama.com/) it says

> OpenAI and Ollama partner to launch gpt-oss

hobofan•6mo ago
I do hope Ollama got a good paycheck from that, as they are essentially helping OpenAI oss-wash its image with the goodwill that Ollama has built up.
jasonjmcghee•6mo ago
Interested to see how this plays out - I feel like Ollama is synonymous with "local".
Aurornis•6mo ago
There's a small but vocal minority of users who don't trust big companies, but don't mind paying small companies for a similar service.

I'm also interested to see if that small minority of people are willing to pay for a service like this.

recursivegirth•6mo ago
Ollama, run by Facebook. Small company, huh.
mchiang•6mo ago
Ollama is not run by Facebook. We are a small team building our dreams.
criddell•6mo ago
I thought it was a Meta company because the name is so close to Llama which is a Meta product.

I looked up the Ollama trademark and was surprised to see it's a Canadian company.

josephwegner•6mo ago
Same, actually. I’m feeling much more pro-ollama suddenly!
jillesvangurp•6mo ago
The issue is not companies but governance. OSS licenses and companies are fine. Companies have a natural conflict of interest that can lead them to take software projects they control in a direction that suits their revenue goals but not necessarily the needs/wants of their users. That happens over and over again. It's their nature. This can mean changes in direction/focus or, worst case, license changes that limit what you can do.

The solution is having proper governance for OSS projects that matter, with independent organizations made up of developers, companies, and users taking care of the governance. A lot of projects that have that have lasted for decades and will likely survive for decades more.

And part of that solution is to also steer clear of projects without that. I've been burned a couple of times now getting stuck with OSS components where the license was changed and the companies behind them had their little IPOs and started serving shareholders instead of users (Elastic, Redis, Mongo, etc). I only briefly used Mongo and I got a whiff of where things were going and just cut loose from it. With Elastic the license shenanigans started shortly after their IPO and things have been very disruptive to the community (with half using OpenSearch now). With Redis I planned the switch to Valkey the second it was announced. Clear-cut case of cutting loose. Valkey looks like it has proper governance. Redis never had that.

Ollama seems relatively OK by this benchmark. The software (ollama server) is MIT licensed and there appears to be no contributor license agreement in place. But it's a small group of people that do most of the coding and they all work for the same vc funded company behind ollama. That's not proper governance. They could fail. They could relicense. They could decide that they don't like open source after all. Etc. Worth considering before you bet your company on making this a foundational piece of your tech stack.

threetonesun•6mo ago
I view it a bit like I do cloud gaming, 90% of the time I'm fine with local use, but sometimes it's just more cost effective to offload the cost of hardware to someone else. But it's not an all-or-nothing decision.
theshrike79•6mo ago
Yep, if you just want to play one or two games at 4k HDR etc. it's a lot cheaper to pay 22€ for GeForce Now Ultimate vs. getting a whole-ass gaming PC capable of the same.
moralestapia•6mo ago
Ollama is great but I feel like Georgi Gerganov deserves way more credit for llama.cpp.

He (almost) single-handedly brought LLMs to the masses.

With the latest news of some AI engineers' compensation reaching up to a billion dollars, feels a bit unfair that Georgi is not getting a much larger slice of the pie.

freedomben•6mo ago
Is Georgi landing any of those big-time money jobs? I could see a conflict of interest given his involvement with llama.cpp, but I would think he'd be well positioned for something like that.
moralestapia•6mo ago
(This is mere speculation)

I think he's happy doing his own thing.

But then, if someone came in with a billion ... who wouldn't give it a thought?

webdevver•6mo ago
really a billion bucks is far too much, that is beyond the curve.

$50M, now that's just perfect. you're retired, not burdened with a huge responsibility

apwell23•6mo ago
https://ggml.ai/

> ggml.ai is a company founded by Georgi Gerganov to support the development of ggml. Nat Friedman and Daniel Gross provided the pre-seed funding.

mrs6969•6mo ago
Agreed. Ollama itself is kind of a wrapper around llama.cpp anyway. Feels like the real guy is not included in the process.

Now I am going to go and write a wrapper around llama.cpp that is truly open source and truly local.

How can I trust ollama not to sell my data?

rafram•6mo ago
Ollama is not a wrapper around llama.cpp anymore, at least for multimodal models (not sure about others). They have their own engine: https://ollama.com/blog/multimodal-models
iphone_elegance•6mo ago
looks like the backend is ggml, am I missing something? same diff
Patrick_Devine•6mo ago
Ollama only uses llamacpp for running legacy models. gpt-oss runs entirely in the ollama engine.

You don't need to use Turbo mode; it's just there for people who don't have capable enough GPUs.

am17an•6mo ago
Seriously, people are astroturfing this thread by saying ollama has a new engine. It is literally the same engine that llama.cpp uses, which Georgi and slaren maintain! VC funding makes people so dishonest and just plain grifters.
guipsp•6mo ago
No one is astroturfing. You cannot run any model with just GGML. It's a tensor library. Yes, it adds value, but I don't think it's unfair to say that ollama adds value too.
benreesman•6mo ago
`ggerganov` is one of the most under-rated and under-appreciated hackers maybe ever. His name belongs next to like Carmack and other people who made a new thing happen on PCs. And don't forget the shout out to `TheBloke` who like single-handedly bootstrapped the GGUF ecosystem of useful model quants (I think he had a grant from pmarca or something like that, so props to that too).
extr•6mo ago
Nice release. Part of the problem right now with OSS models (at least for enterprise users) is the diversity of offerings in terms of:

- Speed

- Cost

- Reliability

- Feature Parity (eg: context caching)

- Performance (What quant level is being used...really?)

- Host region/data privacy guarantees

- LTS

And that's not even including the decision of what model you want to use!

Realistically if you want to use an OSS model instead of the big 3, you're faced with evaluating models/providers across all these axes, which can require a fair amount of expertise to discern. You may even have to write your own custom evaluations. Meanwhile Anthropic/OAI/Google "just work" and you get what it says on the tin, to the best of their ability. Even if they're more expensive (and they're not that much more expensive), you are basically paying for the privilege of "we'll handle everything for you".

I think until providers start standardizing OSS offerings, we're going to continue to exist in this in-between world where OSS models theoretically are at performance parity with closed source, but in practice aren't really even in the running for serious large scale deployments.

coderatlarge•6mo ago
True, but that ignores handing over all your prompt traffic without any real legal protections, as sama has pointed out:

[1] https://californiarecorder.com/sam-altman-requires-ai-privil...

supermatt•6mo ago
> OpenAI confirmed it has been preserving deleted and non permanent person chat logs since mid-Might 2025 in response to a federal court docket order

> The order, embedded under and issued on Might 13, 2025, by U.S. Justice of the Peace Decide Ona T. Wang

Is this some meme where “may” is being replaced with “might”, or some word substitution gone awry? I don’t get it.

kekebo•6mo ago
:)) Apparently. I don't have a better guess. Well spotted
wkat4242•6mo ago
Yeah noticed this too. Really weird for a professional publication
mattmaroon•6mo ago
Or May in another language?
davidron•6mo ago
Or non native English speaker who pronounces "may" the same as "might" and didn't realize the difference?

It is maybe not coincidental that "may" and "might" mean nearly the same thing which bolsters the case for auto correct gone awry.

beowulfey•6mo ago
auto correct gone awry
SickOfItAll•6mo ago
Clearly the author wrote the article with multiple uses of "may" and then used find/replace to change to "might" without proofreading.
I_am_tiberius•6mo ago
I wouldn't be surprised if those undeleted chats, or some inferred data based on them, are part of the gpt-5 training data. Somehow I don't trust this sama guy at all.
wkat4242•6mo ago
Gpt-oss comes only in 4.5 bit quant. This is the native model, so there's no fp16 original
satellite2•6mo ago
"All hardware is located in the United States."

If I use local/OSS models, it's specifically to avoid running in a country with no data protection laws. It's so close, but a big miss here.

bangaladore•6mo ago
I think what matters more here is "All hardware is located outside of China". Located in the US means little because that's not good enough for many regulated industries even within the US.

All things considered though, Europe is getting confusing. They have GDPR but are now pushing to backdoor encryption within the EU? [1]

At least there isn't a strong movement in the US trying to outlaw E2E encryption.

[1] https://www.eff.org/deeplinks/2025/06/eus-encryption-roadmap...

Which brings up the point: are truly private LLMs possible? Where the input I provide is only meaningful to me, but the LLM can still transform it without gaining any contextual value from it? Without sharing a key? If this can be done, can it be done performantly?

blitzar•6mo ago
I would feel safer if the hardware was located in China than in the US.
wkat4242•6mo ago
Even the backdoor is an American lobby. Ashton Kutcher and Demi Moore's Thorn.
bangaladore•6mo ago
Maybe I hit a nerve with the EU part? I thought it was a fair observation, but I'm open to being corrected if there's more nuance I missed.
spookie•6mo ago
The bill has been stalled since 2022.

Yes, there is gonna be a new discussion of it on October 15, but I've already seen parts of governments come out against their own government's position on the bill (the Swedish military, for example).

riazrizvi•6mo ago
No I think the point is to choose the best jurisdiction to have cloud hosted data where your data is best protected from access by very wealthy entities via intelligence services bribery. That’s still hands down the USA.
pphysch•6mo ago
Any evidence for this claim that e.g. Mossad has less penetration into digital systems of USA than it does RF or PRC?
observationist•6mo ago
They might have access to any given machine, but they lack the broad scope of general surveillance. If they want to get you, just like most of the other nation state level threats, you will get got. For other threat models, the US works pretty well.

I guarantee that nobody cares about or will be surveilling your private AI use unless you're doing other things that warrant surveillance.

The reason big providers suck, as OpenAI is so nicely demonstrating for us, is that they retain everything, the user is the product, and court cases, other situations can unmask and expose everything you do on a platform to third parties. This country seriously needs a digital bill of rights.

riazrizvi•6mo ago
Nobody cares? That seems ludicrous to me. The last three decades of business have been characterized above all by increased access to private information about people for competitive insights in online business. Sure, if you are just a consumer you have nothing of real value except in the aggregate, but if you are an up-and-coming business drawing customers away from other businesses, your private AI use is absolutely of interest. Which is why serious businesses here scour the ToS.

The biggest game in town has been managing platforms that give owners an information advantage. But at least the world generally trusts the USA to abide by laws and user agreements, which is why, to my mind, the USA retains the near monopoly on information platforms.

I personally wouldn’t trust a UK platform for example, being a Brit native. The top echelon talent pool is so small and incestuous I don’t believe I would experience a fair playing field if a business of mine passed a certain size of national reach/importance.

EDIT: from ChatGPT, new-money entrepreneurs with no inheritance/political ties by economic region: USA ~63%, UK/Hong Kong/Singapore ~45%, emerging markets ~35%, EU ~22%, Russia ~10%

impulser_•6mo ago
Then don't use it and keep using models locally?
computegabe•6mo ago
Why does everything AI-related have to be $20? Why can't there be tiers? OpenAI setting the standard of $20/m for every AI application is one of the worst things to ever happen.
thimabi•6mo ago
My guess is that’s the lowest price point that provides a modicum of profitability — LLMs are quite expensive to run, and even more so for providers like Ollama, which are entering the market and don’t have idle capacity.
furyofantares•6mo ago
Claude has $20, $100 and $200 tiers, ChatGPT $20 and $200, and Google $20 and $250. Those all have free tiers as well, and metered APIs. Grok has $30 and $300 it looks like; the list probably goes on and on.
colesantiago•6mo ago
Tokens are expensive and nobody is making any money.
senectus1•6mo ago
yep. this is the 2nd half of why the AI bubble is going to pop.
joecot•6mo ago
I strongly recommend together.ai, which allows you to use a lot of different open source models and charges for usage, not a monthly fee.
paxys•6mo ago
https://openai.com/chatgpt/pricing/ - $0 / $20 / $200 / $25 (team) / custom enterprise pricing / on-demand API pricing

https://www.anthropic.com/pricing - $0 / $17 (if billed annually) / $20 (if billed monthly) / $100 / $25 (team) / custom enterprise pricing / on-demand API pricing

Sounds like tiers to me.

computegabe•6mo ago
I should have specified less expensive tiers (below the $20 standard). A tier <= $10 would be great. Anything over $10 for casual use seems excessive (or at least from my perspective)
smlacy•6mo ago
Watching ollama pivot from a somewhat scrappy yet amazingly important and well designed open source project to a regular "for-profit company" is going to be sad.

Thankfully, this may just leave more room for other open source local inference engines.

user-•6mo ago
I remember them pivoting from being infra.hq
smeeth•6mo ago
Their FOSS local inference service didn't go anywhere.

This isn't Anaconda, they didn't do a bait and switch to screw their core users. It isn't sinful for devs to try and earn a living.

blitzar•6mo ago
Yet. Their FOSS local inference service hasn't gone anywhere ... yet.
kermatt•6mo ago
Another perspective:

If you earn a living using something someone else built, and expect them not to earn a living, your paycheck has a limited lifetime.

“Someone” in this context could be a person, a team, or a corporate entity. Free may be temporary.

dcreater•6mo ago
You can build this and go build something else as well. You don't need to morph the thing you built. That's underhanded
satvikpendem•6mo ago
> important and well designed open source project

It was always just a wrapper around the real well designed OSS, llama.cpp. Ollama even messes up the names of models by calling distilled models the name of the actual one, such as DeepSeek.

Ollama's engineers created Docker Desktop, and you can see how that turned out, so I don't have much faith in them to continue to stay open given what a rugpull Docker Desktop became.

Philpax•6mo ago
I wouldn't go as far as to say that llama.cpp is "well designed" (there be demons there), but I otherwise agree with the sentiment.
mchiang•6mo ago
We have always been building in the open, and Ollama itself is open. All the core pieces of Ollama are open. There are areas where we want to be opinionated on the design to build the world we want to see.

There are areas where we will make money, and I wholly believe that if we follow our conscience we can create something amazing for the world while making sure we can keep it fueled for the long term.

Some of the ideas in Turbo mode (completely optional) are to serve the users who want a faster GPU and to add additional capabilities like web search. We loved the experience so much that we decided to give web search to non-paid users too. (Again, it's fully optional.) Now, to prevent abuse and make sure our costs don't get out of hand, we require login.

Can't we all just work together and create a better world? Or does it have to be so zero sum?

xiphias2•6mo ago
I wanted to try web search to increase my privacy, but it required a login.

For Turbo mode I understand the need for paying, but the main point of running a local model with web search is browsing from my computer without using any LLM provider. Also I want to get rid of the latency to US servers from Europe.

If ollama can't do it, maybe a fork.

mchiang•6mo ago
login does not mean payment. It is free to use. It costs us to perform the web search, so we want to make sure it is not subject to abuse.
dcreater•6mo ago
I'm sorry but your words don't match your actions.
shepardrtc•6mo ago
I think this offering is a perfectly reasonable option for them to make money. We all have bills to pay, and this isn't interfering with their open source project, so I don't see anything wrong with it.
Aeolun•6mo ago
> this isn't interfering with their open source project

Wait until it makes significant amounts of money. Suddenly the priorities will be different.

I don’t begrudge them wanting to make some money off it though.

shepardrtc•6mo ago
You may be right, but I hope you aren't!
otabdeveloper4•6mo ago
[flagged]
mchiang•6mo ago
sorry that you feel the way you feel. :(

I'm not sure which package we use that is triggering this. My guess is llama.cpp based on what I see on social? Ollama has long shifted to using our own engine. We do use llama.cpp for legacy and backwards compatibility. I want to be clear it's not a knock on the llama.cpp project either.

There are certain features we want to build into Ollama, and we want to be opinionated on the experience we want to build.

Have you supported our past gigs before? Why not be more happy and optimistic in seeing everyone build their dreams (success or not).

If you go build a project of your dreams, I'd be supportive of it too.

Maxious•6mo ago
> Have you supported our past gigs before?

Docker Desktop? One of the most memorable private equity rugpulls in developer tooling?

Fool me once shame on you, fool me twice shame on me

dangoodmanUT•6mo ago
Yes everyone should just write cpp to call local LLMs obviously
otabdeveloper4•6mo ago
Yes, but llama.cpp already comes with a ready-made OpenAI-compatible inference server.
reverius42•6mo ago
I think people are getting hung up on the "llama.cpp" name and thinking they need to write C++ code to use it.

llama.cpp isn't (just) a C++ library/codebase -- it's a CLI application, server application (llama-server), etc.

api•6mo ago
> Repackaging existing software while literally adding no useful functionality was always their gig.

Developers continue to be blind to usability and UI/UX. Ollama lets you just install it, just install models, and go. The only other thing really like that is LM-Studio.

It's not surprising that the people behind it are Docker people. Yes you can do everything Docker does with Linux kernel and shell commands, but do you want to?

Making software usable is often many orders of magnitude more work than making software work.

otabdeveloper4•6mo ago
> Ollama lets you just install it, just install models, and go.

So does the original llama.cpp. And you won't have to deal with mislabeled models and insane defaults out of the box.

lxgr•6mo ago
Can it easily run as a server process in the background? To me, not having to load the LLM into memory for every single interaction is a big win of Ollama.
otabdeveloper4•6mo ago
Yes, of course it can.
lxgr•6mo ago
I wouldn't consider that a given at all, but apparently there's indeed `llama-server` which looks promising!

Then the only thing that's missing seems to be a canonical way for clients to instantiate that, ideally in some OS-native way (systemd, launchd etc.), and a canonical port that they can connect to.
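For what it's worth, a rough sketch of one way to do that on Linux with a systemd user service (the install path, model name, and port here are illustrative, and this assumes llama-server is already installed):

    # ~/.config/systemd/user/llama-server.service
    [Unit]
    Description=llama.cpp server (OpenAI-compatible API)

    [Service]
    # -hf fetches the GGUF from Hugging Face on first start; use -m /path/to/model.gguf for a local file
    ExecStart=%h/.local/bin/llama-server -hf ggml-org/gpt-oss-20b-GGUF --port 8080
    Restart=on-failure

    [Install]
    WantedBy=default.target

and then:

    systemctl --user daemon-reload
    systemctl --user enable --now llama-server

Clients can then talk to http://localhost:8080 like any other OpenAI-compatible endpoint; agreeing on a canonical port is the part that's still missing.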

llmtosser•6mo ago
This is not true.

No inference engine does all of:

- Model switching

- Unload after idle

- Dynamic layer offload to CPU to avoid OOM

ekianjo•6mo ago
this can be added to llama.cpp with llama-swap currently, so even without Ollama you are not far off
dang•6mo ago
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html

dangoodmanUT•6mo ago
It was always a company
colesantiago•6mo ago
ollama is YC and VC backed, this was inevitable and not surprising.

All companies that raise outside investment follow this route.

No exceptions.

And yes this is how ollama will fall due to enshittification, for lack of a better word.

TuringNYC•6mo ago
>> Watching ollama pivot from a somewhat scrappy yet amazingly important and well designed open source project to a regular "for-profit company" is going to be sad.

if i could have consistent and seamless local-cloud dev that would be a nice win. everyone has to write things 3x over these days depending on your garden of choice, even with langchain/llamaindex

mythz•6mo ago
Same, was just after a small lightweight solution where I can download, manage and run local models. Really not a fan of boarding the enshittification train ride with them.

Always had a bad feeling when they didn't give ggerganov/llama.cpp the credit they deserved for making Ollama possible in the first place; a true OSS project would have. It makes more sense now through the lens of a VC-funded project looking to grab as much market share as possible while avoiding raising awareness of the OSS alternatives it depends on.

Together with their new closed-source UI [1] it's time for me to switch back to llama.cpp's cli/server.

[1] https://www.reddit.com/r/LocalLLaMA/comments/1meeyee/ollamas...

mark_l_watson•6mo ago
I don't blame them. As soon as they offer a few more models with Turbo mode I plan on subscribing to their Turbo plan for a couple of months - a buy-them-a-coffee, keep-the-lights-on kind of thing.

The Ollama app using the signed-in-only web search tool is really pretty good.

decide1000•6mo ago
It was fun because it was open. Now it's just another brand seeking dollars.
mchiang•6mo ago
Ollama at its core will always be open. Not all users have the computer to run models locally, and it seems only fair that we provide GPUs (which cost us money) and let the users who optionally want them pay for it.
ciaranmca•6mo ago
I think it's the logical move to ensure Ollama can continue to fund development. I think you will probably end up having to add more tiers or some way for users to buy more credits/GPU time. See Anthropic's recent move with Claude Code in response to heavy 24/7 users.
thimabi•6mo ago
I’m not throwing the towel on Ollama yet. They do need dollars to operate, but still provide excellent software for running models locally and without paying them a dime.
recursivegirth•6mo ago
^ this. As a developer, Ollama has been my go-to for serving offline models. I then use cloudflare tunnels to make them available where I need them.
DiabloD3•6mo ago
Although it is open, it's really just all code borrowed from llama.cpp.

If you want to see where the actual developers do the actual hard work, go use llama.cpp instead.

jnmandal•6mo ago
I see a lot of hate for ollama doing this kind of thing but also they remain one of the easiest to use solutions for developing and testing against a model locally.

Sure, llama.cpp is the real thing, ollama is a wrapper... I would never want to use something like ollama in a production setting. But if I want to quickly get someone less technical up to speed to develop an LLM-enabled system and run qwen or w/e locally, well then it's pretty nice that they have a GUI and a .dmg to install.

mchiang•6mo ago
Thanks for the kind words.

Since the new multimodal engine, Ollama has moved off of llama.cpp as a wrapper. We do continue to use the GGML library, and ask hardware partners to help optimize it.

Ollama might look like a toy, and like something trivial to build. I can say that, to keep its simplicity, we go through a great deal of struggle to make it work with the experience we want.

Simplicity is often overlooked, but we want to build the world we want to see.

dcreater•6mo ago
But Ollama is a toy, it's meaningful for hobbyists and individuals to use locally like myself. Why would it be the right choice for anything more? AWS, vLLM, SGLang etc would be the solutions for enterprise

I knew a startup that deployed ollama on a customer's premises, and when I asked them why, they had absolutely no good reason. Likely they did it because it was easy. That's not the "easy to use" case you want to solve for.

jnmandal•6mo ago
Honestly, I think it just depends. A few hours ago I wrote that I would never want it for a production setting, but actually, if I were standing something up myself and could just download headless ollama and know it would work, hey, that would most likely also be fine. Maybe later on I'd revisit it from a devops perspective, refactor the deployment methodology/stack, etc. Maybe I'd benchmark it and realize it's fine actually. Sometimes you just need to make your whole system work.

We can obviously disagree with their priorities, their roadmap, the fact that the client isn't FOSS (I wish it was!), etc but no one can say that ollama doesn't work. It works. And like mchiang said above: its dead simple, on purpose.

dcreater•6mo ago
But it's effectively equally easy to do the same with llama.cpp, vLLM, or Modular...

(any differences are small enough that they either shouldn't cause the human much work or can very easily be delegated to AI)

evilduck•6mo ago
Llama.cpp is not really that easy unless you're covered by their prebuilt binaries. Go to the llama.cpp GitHub page and find a prebuilt CUDA-enabled release for a Fedora-based Linux distro. Oh, there isn't one, you say? Welcome to losing an hour or more of your time.

Then you want to swap models on the fly. llama-swap, you say? You now get to learn a new custom YAML-based config syntax that does basically nothing the Ollama Modelfile doesn't already do, so that you can ultimately... have the same experience as Ollama, except now you've lost hours just to get back to square one.

Then you need it to start and be ready after a system reboot? Great, now you get to write some systemd services, move stuff into system-level folders, create some groups and users, and poof, there goes another hour of your time.
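(For reference, the from-source route is roughly the sketch below; it assumes a recent llama.cpp checkout and that the NVIDIA driver and CUDA toolkit are already installed from NVIDIA's repo or RPM Fusion. Exact package names vary, which is part of where the hour goes.)

    sudo dnf install -y git cmake gcc-c++   # plus the CUDA toolkit from NVIDIA's Fedora repo
    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp
    cmake -B build -DGGML_CUDA=ON
    cmake --build build --config Release -j
    # llama-server, llama-cli, etc. end up in build/bin/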

jnmandal•6mo ago
Sure, but if some of my development team is using ollama locally b/c it was super easy to install, maybe I don't want to worry about maintaining a separate build chain for my prod env. Many startups are just wrapping or enabling LLMs and just need a running server. Who are we to say what is the right use of their time and effort?
mchiang•6mo ago
I can say that, having tried many inference tools after the launch, many do not have the models implemented well, especially OpenAI's harmony format.

Why does this matter? For this specific release, we benchmarked against OpenAI’s reference implementation to make sure Ollama is on par. We also spent a significant amount of time getting harmony implemented the way intended.

I know vLLM also worked hard to implement against the reference and have shared their benchmarks publicly.

buyucu•6mo ago
This kind of gaslighting is exactly why I stopped using Ollama.

GGML library is llama.cpp. They are one and the same.

Ollama made sense when llama.cpp was hard to use. Ollama does not have a value proposition anymore.

mchiang•6mo ago
It’s a different repo. https://github.com/ggml-org/ggml

The models are implemented by Ollama https://github.com/ollama/ollama/tree/main/model/models

I can say as a fact, for the gpt-oss model, we also implemented our own MXFP4 kernel. Benchmarked against the reference implementations to make sure Ollama is on par. We implemented harmony and tested it. This should significantly impact tool calling capability.

I'm not sure if I'm just feeding the trolls here. We really love what we do, and I hope it shows in our product, in Ollama's design, and in our voice to our community.

You don’t have to like Ollama. That’s subjective to your taste. As a maintainer, I certainly hope to have you as a user one day. If we don’t meet your needs and you want to use an alternative project, that’s totally cool too. It’s the power of having a choice.

mark_l_watson•6mo ago
Hello, thanks for answering questions here.

Is there a schedule for adding additional models to Turbo mode plan, in addition to gpt-oss 20/120b? I wanted to try your $20/month Turbo plan, but I would like to be able to experiment with a few other large models.

buyucu•6mo ago
This is exactly what I mean by gaslighting.

GGML is llama.cpp. It is developed by the same people as llama.cpp and powers everything llama.cpp does. You must know that. The fact that you are ignoring it is very dishonest.

scosman•6mo ago
> GGML library is llama.cpp. They are one and the same.

Nope…

leopoldj•6mo ago
> Ollama has moved off of llama.cpp as a wrapper. We do continue to use the GGML library

Where can I learn more about this? llama.cpp is an inference application built using the ggml library. Does this mean Ollama now has its own code for what llama.cpp does?

guipsp•6mo ago
https://github.com/ollama/ollama/tree/main/model/models
steren•6mo ago
> I would never want to use something like ollama in a production setting.

We benchmarked vLLM and Ollama on both startup time and tokens per second. Ollama comes out on top. We hope to be able to publish these results soon.

ekianjo•6mo ago
you need to benchmark against llama.cpp as well.
apitman•6mo ago
Did you test multi-user cases?
jasonjmcghee•6mo ago
Assuming this is equivalent to parallel sessions, I would hope so, this is like the entire point of vLLM
sbinnee•6mo ago
vLLM and Ollama assume different settings and hardware. vLLM, backed by paged attention, expects a lot of requests from multiple users, whereas Ollama is usually for a single user on a local machine.
miki123211•6mo ago
> I would never want to use something like ollama in a production setting

If you can't get access to "real" datacenter GPUs for any reason and essentially do desktop, clientside deploys, it's your best bet.

It's not a common scenario, but a desktop with a 4090 or two is all you can get in some organizations.

romperstomper•6mo ago
It is weird, but when I tried the new gpt-oss:20b model locally, llama.cpp just failed instantly for me. At the same time, under ollama it worked (very slow, but anyway). I didn't find out how to deal with llama.cpp, but ollama is definitely doing something under the hood to make models work.
liuliu•6mo ago
Any more information on "Privacy first"? It seems pretty thin if just not retaining data.

For the "Cloud Compute" provided by Draw Things, we don't retain any data either (everything is done in RAM per request). But that is still unsatisfactory, personally. We will soon add "privacy pass" support, but that's still not satisfactory either. A transparency log that can be attested on the hardware would be nice (since we run our open-source gRPCServerCLI too), but I just don't know where to start.

pagekicker•6mo ago
I see no privacy advantage to working with Ollama, which can sell your data or have it subpoenaed just like anyone else.
liuliu•6mo ago
In theory, "privacy pass" should help: content can be subpoenaed, but no one can know who made it. But that is still thin (and Ollama isn't doing that anyway).
pogue•6mo ago
I would pay more if they let you run the models in Switzerland or some other GDPR respecting country, even if there was extra latency. I would also hope everything is being sent over SSL or something similar.
seanmcdirmid•6mo ago
I had to do a double take here. Switzerland surely isn’t in the GDPR, so you mean their own privacy laws or GDPR in the EU?
jmort•6mo ago
I don't see a privacy policy and their desktop app is closed source. So, not encouraging.

[full disclosure I am working on something with actual privacy guarantees for LLM calls that does use a transparency log, etc.]

pbronez•6mo ago
I’d love to learn more about your project. I’m using socialized cloud regions for AI security and they really lag the mainstream. Definitely need more options here.

Edit: emailed the address on the site in your profile, got an inbox does not exist error.

colesantiago•6mo ago
No matter if a project is "open source", once they announce that they have raised millions of dollars from investors...

It is completely compromised, especially if it is an AI company.

How do you think ollama was able to provide the open source AI models to everyone for free?

I am pretty sure ollama was losing money on every pull of those images from their infrastructure.

Those that are now angry at ollama charging money or not focusing on privacy should have been angry when they raised money from investors.

llmtosser•6mo ago
Distractions like this are probably the reason they still, over a year on, do not support sharded GGUF.

https://github.com/ollama/ollama/issues/5245

If any of the major inference engines - vLLM, SGLang, llama.cpp - incorporated API-driven model switching, automatic model unload after idle, and automatic CPU layer offloading to avoid OOM, it would remove the need for ollama.

jychang•6mo ago
That’s just llama-swap and llama.cpp
llmtosser•6mo ago
Interesting - it does indeed seem like llama-server has the needed endpoints to do the model swapping and llama.cpp as of recently also has a new flag for the dynamic CPU offload now.

However, the approach to model swapping is not 'ollama compatible', which means the OSS tools supporting 'ollama' (e.g. Open WebUI, OpenHands, Bolt.diy, n8n, Flowise, browser-use, etc.) aren't able to take advantage of this particularly useful capability, as best I can tell.

jacekm•6mo ago
What could be the benefit of paying $20 to Ollama to run inferior models instead of paying the same amount of money to e.g. OpenAI for access to sota models?
vanillax•6mo ago
nothing lmao. this is just ollama trying to make money.
ibejoeb•6mo ago
I run a lot of mundane jobs that work fine with less capable models, so I can see the potential benefit. It all depends on the limits though.
AndroTux•6mo ago
Privacy, I guess. But at this point it’s just believing that they won’t log your data.
daft_pink•6mo ago
I feel the primary benefit of this Ollama Turbo is that you can quickly test and run different models in the cloud that you could run locally if you had the correct hardware.

This allows you to try out some open models and better assess if you could buy a dgx box or Mac Studio with a lot of unified memory and build out what you want to do locally without actually investing in very expensive hardware.

Certain applications require good privacy control and on-prem and local are something certain financial/medical/law developers want. This allows you to build something and test it on non-private data and then drop in real local hardware later in the process.

dawnerd•6mo ago
Quickly test… the two models they support? This is just another subscription to quantized models.
daft_pink•6mo ago
it looks like the plan is to support way more models though. gotta start somewhere.
fluidcruft•6mo ago
Me at home: $20/mo while I wait for a card that can run this or dgx box? Decisions, decisions.
jerieljan•6mo ago
> quickly test and run different models in the cloud that you could run locally if you had the correct hardware.

I feel like they're competing against Hugging Face or even Colaboratory then if this is the case.

And for cases that require strict privacy control, I don't think I'd run them on emergent models, or if I really had to, I would prefer doing so on an existing cloud setup that already has the necessary trust/compliance barriers addressed. (Does Ollama Turbo even have its Trust Center up?)

I can see its potential once it gets rolling, since there's a lot of ollama installations out there.

rapind•6mo ago
I'm not sure the major models will remain at $20. Regardless, I support any and all efforts to keep the space crowded and competitive.
michelsedgh•6mo ago
I think data privacy is the main point, and probably more usage before you hit limits? But mainly data privacy, I guess.
_--__--__•6mo ago
Groq seems to do okay with a similar service but I think their pricing is probably better.
Geezus_42•6mo ago
Yeah, the Nazi sex bot will be great for business!
gabagool•6mo ago
You are thinking of Elon Grok, not Groq
janalsncm•6mo ago
When Grok originally came out I thought it was unlucky on Groq’s part. Now that Grok has certain connotations, it’s even more true.
owebmaster•6mo ago
"There's no such thing as bad publicity." PT Barnum
fredoliveira•6mo ago
Groq (the inference service) != Grok (xAI's model)
woadwarrior01•6mo ago
Groq's moat is speed, using their custom hardware.
adrr•6mo ago
Running models without a filter on them. OpenAI has an overzealous filter and won't even tell you what you violated. So you have to do a dance with prompts to see if it's copyright, trademark or whatever. Recently it just refused to answer my questions and said it wasn't true that a civil servant would get fired for releasing a report per their job duties. Another dance sending it links to stories that it was true so it could answer my question. I want an LLM without training wheels.
timmg•6mo ago
It says “usage-based pricing” is coming soon. I think that is the sweet spot for a service like this.

I pay $20 to Anthropic, so I don’t think I’d get enough use out of this for the $20 fee. But being able to spin up any of these models and use as needed (and compare) seems extremely useful to me.

I hope this works out well for the team.

ac29•6mo ago
> It says “usage-based pricing” is coming soon. I think that is the sweet spot for a service like this.

Agreed, though there are already several providers of these new OpenAI models available, so I'm not sure what ollama's value add is there (there are plenty of good chat/code/etc interfaces available if you are bringing your own API keys).

Aeolun•6mo ago
I mean $20/month for API access is definitely new.
wongarsu•6mo ago
A flat fee service for open-source LLMs is somewhat unique, even if I don't see myself paying for it.

Usage-based pricing would put them in competition with established services like deepinfra.com, novita.ai, and ultimately openrouter.ai. They would go in with more name-recognition, but the established competition is already very competitive on pricing

domatic1•6mo ago
Open router competition?
philip1209•6mo ago
Seems like an easy way to run gpt-oss for development environments on laptops. Probably necessary if you plan to self-host in production.
paxys•6mo ago
A subscription fee for API usage is definitely an interesting offering, though the actual value will depend on usage limits (which are kept hidden).
mchiang•6mo ago
we are learning the usage patterns to be able to price this more properly.
orliesaurus•6mo ago
Does anyone know if this is like OpenRouter?
ivape•6mo ago
Often the math works out that you get a lot more for $20 a month if you settle for smaller but capable models (8b-30b). I don't see how it's better, other than that Ollama can "promise" they don't store your data, whereas OpenRouter depends on which host you choose (and there's no indicator on OpenRouter exposing which ones do or don't).

In a universe where everything you say can be taken out of context, things like OpenAI will be a data leak nightmare.

Need this soon:

https://arxiv.org/abs/2410.02486

dcreater•6mo ago
Called it.

It's very unfortunate that the local inference community has aggregated around Ollama when it's clear that's not their long term priority or strategy.

It's imperative we move away ASAP.

mchiang•6mo ago
hmm, how so? Ollama is open and the pricing is completely optional for users who want additional GPUs.

Is it bad to fairly charge money for selling GPUs that cost us money too, and use that money to grow the core open-source project?

At some point, it just has to be reasonable. I'd like to believe that by having a conscience, we can create something great.

tomrod•6mo ago
Everyone just wants to solarpunk this up.
dcreater•6mo ago
In an ideal world yes - as we should - especially for us Californian/Bay Area people, that's literally our spirit animal. But I understand that is idle dreaming. What I believe certainly is within reach is a state that is much better than what we are in.
tomrod•6mo ago
It needn't be idle dreaming? What fundamental law or societal agreement prevents solarpunk versus the current status quo of corporate anti-human cyberpunk?
dcreater•6mo ago
Being realistic about economics and how money works in the current paradigm where it is concentrated
dcreater•6mo ago
First, I must say I appreciate you taking the time to be engaged on this thread and responding to so many of us.

What I'm referring to is a broader pattern that I (and several others) have been seeing. Off the top of my head: not crediting llama.cpp previously; still not crediting llama.cpp now and saying you are using your own inference engine when you are still using ggml and the core of what Georgi made (most importantly, why even create your own version - is it not better for the community to just contribute to llama.cpp?); making your own proprietary model storage platform that disallows using the weights with other local engines, requiring people to duplicate downloads; and more.

I dont know how to regard these other than being largely motivated out of self interest.

I think what Jeff and you have built has been enormously helpful to us - Ollama is how I got started running models locally, and I have enjoyed using it for years now. For that, I think you guys should be paid millions. But what I fear is going to happen is that you guys will go the way of the current dogma of capturing users (at least in mindshare) and then continually squeezing more. I would love to be wrong, but I am not going to stick around to find out, as it's a risk I cannot take.

idiotsecant•6mo ago
Oh no this is a positively diabolical development, offering...hosting services tailored to a specific use case at a reasonable price ...
SV_BubbleTime•6mo ago
They can’t keep getting away with this.
mrcwinn•6mo ago
Yes, better to get free sh*t unsustainably. By the way, you're free to create an open source alternative and pour your time into that so we can all benefit. But when you don't — remember I called it!
rpdillon•6mo ago
What? The obvious move is to never have switched to Ollama and just use Llama.cpp directly, which I've been doing for years. Llama.cpp was created first, is the foundation for this product, and is actually open source.
wkat4242•6mo ago
But there's much less that works with that. OpenWebUI for example.
vntok•6mo ago
Open WebUI works perfectly fine with llama.cpp though.

They have very detailed quick start docs on it: https://docs.openwebui.com/getting-started/quick-start/start...

wkat4242•6mo ago
Oh thanks I didn't know that :O

I do also need an API server though. The one built into OpenWebUI is no good because it always reloads the model if you use it first from the web console and then run an API call using the same model (like literally the same model from the workspace). Very weird but I avoid it for that reason.

rpdillon•6mo ago
llama.cpp is what you want. It offers both a web UI and an API on the same port. I use llama.cpp's webui with gpt-oss-20b, and I also leverage it as an OpenAI-compatible server with gptel for Emacs. Very good product.
tarruda•6mo ago
Llama.cpp (the library which ollama uses under the hood) has its own server, and it is fully compatible with open-webui.

I moved away from ollama in favor of llama-server a couple of months ago and never missed anything, since I'm still using the same UI.

A4ET8a8uTh0_v2•6mo ago
Interesting. Admittedly, I am slowly getting to the point where ollama's defaults get a little restrictive. If the setup is not too onerous, I would not mind trying. Where did you start?
tarruda•6mo ago
Download llama-server from the llama.cpp GitHub and install it in some directory on your PATH. AFAIK they don't have an automated installer, so that can be intimidating to some people.

Assuming you have llama-server installed, you can download + run a hugging face model with something like

    llama-server -hf ggml-org/gpt-oss-20b-GGUF -c 0 -fa --jinja

And access http://localhost:8080
mchiang•6mo ago
totally respect your choice, and it's a great project too. Of course as a maintainer of Ollama, my preference is to win you over with Ollama. If it doesn't meet your needs, it's okay. We are more energized than ever to keep improving Ollama. Hopefully one day we will win you back.

Ollama does not use llama.cpp anymore; we do still keep it and occasionally update it to remain compatible with older models from when we used it. The team is great; we just have features we want to build, and want to implement the models directly in Ollama. (We do use GGML and ask partners to help optimize it. This is the project that also powers llama.cpp and is maintained by that same team.)

tarruda•6mo ago
> Ollama does not use llama.cpp anymore

That is interesting, did Ollama develop its own proprietary inference engine or did you move to something else?

Any specific reason why you moved away from llama.cpp?

mchiang•6mo ago
it's all open, and specifically, the new models are implemented here: https://github.com/ollama/ollama/tree/main/model/models
kristjansson•6mo ago
> Ollama does not use llama.cpp anymore;

> We do use GGML

Sorry, but this is kind of hiding the ball. You don't use llama.cpp, you just ... use their core library that implements all the difficult bits, and carry a patchset on top of it?

Why do you have to start with the first statement at all? "we use the core library from llama.cpp/ggml and implement what we think is a better interface and UX. we hope you like it and find it useful."

mchiang•6mo ago
thanks, I'll take that feedback, but I do want to clarify that it's not from llama.cpp/ggml. It's from ggml-org/ggml. I suppose it's all interchangeable though, so thank you for it.
kristjansson•6mo ago

  % diff -ru ggml/src llama.cpp/ggml/src | grep -E '^(\+|\-) .*' | wc -l
      1445
i.e. as of time of writing +/- 1445 lines between the two, on about 175k total lines. a lot of which is the recent MXFP4 stuff.

Ollama is great software. It's integral to the broader diffusion of LLMs. You guys should be incredibly proud of it and the impact it's had. I understand the current environment rewards bold claims, but the sense I get from some of your communications is "what's the boldest, strongest claim we can make that's still mostly technically true?" As a potential user, taking those claims as true until closer evaluation reveals the discrepancy feels pretty bad, and keeps me firmly in the 'potential' camp.

Have the confidence in your software and the respect for your users to advertise your system as it is.

dcreater•6mo ago
This is utterly damning.
benreesman•6mo ago
I'm torn on this, I was a fan of the project from the very beginning and never sent any of my stuff upstream, so I'm less than a contributor but more than don't care, and it's still non-obvious how the split happened.

But the takeaway is pretty clearly that `llama.cpp`, `GGML`/`GGUF`, and generally `ggerganov`'s single-handedly Carmacking it when everyone thought it was impossible is all the value. I think a lot of people made Docker containers with `ggml`/`gguf` in them and one was like "we can make this a business if we realllllly push it".

Ollama as a hobby project or even a serious OSS project? With a cordial upstream relationship and massive attribution labels everywhere? Sure. Maybe even as a commercial thing that has a massive "Wouldn't Be Possible Without" page for it's OSS core upstream.

But like: a startup company for making money that's (to all appearances) completely out of reach for the principals to ever do without totally `cp -r && git commit` repeatedly? It's complicated; a lot of stuff starts as a fork and goes off in a very different direction, and I got kinda nauseous and stopped paying attention at some point, but near as I can tell they're still just copying all the stuff they can't figure out how to do themselves on an ongoing basis, without resolving the upstream drama?

It's like, in bounds barely I guess. I can't point to it being "this is strictly against the rules or norms", but it's bending everything to the absolute limit. It's not a zone I'd want to spend a lot of time in.

kristjansson•6mo ago
To be clear I was comparing ggml-org/ggml to ggml-org/llama.cpp/ggml to respond to the earlier thing. Ollama carries an additional patchset on top of ggml-org/ggml.

> [ggml] is all the value

That’s what gets me about Ollama - they have real value too! Docker is just the kernel’s cgroups/chroots/iptables/… but it deserves a lot of credit for articulating and operating those on behalf of the user. Ollama deserves the same. But they’re consistently kinda weird about owning just that?

cortesoft•6mo ago
Why are you being so accusatory about a choice about which details are important?
am17an•6mo ago
I’ve never seen a PR on ggml from Ollama folks though. Could you mention one contribution you did?
daft_pink•6mo ago
So I’m using turbo and just want to provide some feedback. I can’t figure out how to connect raycast and project goose to ollama turbo. The software that calls it essentially looks for the models via ollama but cannot find the turbo ones and the documentation is not clear yet. Just my two cents, the inference is very quick and I’m happy with the speed but not quite usable yet.
mchiang•6mo ago
so sorry about this. We are learning. Could you email us? We will first make it right while we improve Ollama's Turbo mode: hello@ollama.com
daft_pink•6mo ago
no worries. i totally understand that the first day something is released it doesn’t work perfectly with third party/community software.

thanks for the feedback address :)

om8•6mo ago
It’s unfortunate that llama.cpp’s code is a mess. It’s impossible to make any meaningful contributions to it.
kristjansson•6mo ago
I'm the first to admit I'm not a heavy C++ user, so I'm not a great judge of the quality looking at the code itself ... but ggml-org has 400 contributors on ggml, 1200 on llama.cpp and has kept pace with ~all major innovations in transformers over the last year and change. Clearly some people can and do make meaningful contributions.
halJordan•6mo ago
Fully compatible is a stretch; it's important we don't fall into a celebrity "my guy is perfect" trap. They implement a few endpoints.
jychang•6mo ago
They implement more openai-compatible endpoints than ollama at least
theshrike79•6mo ago
Isn't the open-webui maintainer heavily against MCP support and tool calling?
benreesman•6mo ago
I won't use `ollama` on principle. I use `llama-cli` and `llama-server` if I'm not linking `ggml`/`gguf` directly. It's like two extra commands to use the one by the genius that wrote it and not the one from the guys who just jacked it.

The models are on HuggingFace and downloading them is `uvx huggingface-cli`, the `GGUF` quants were `TheBloke` (with a grant from pmarca IIRC) for ages and now everyone does them (`unsloth` does a bunch of them).

Maybe I've got it twisted, but it seems to be that the people who actually do `ggml` aren't happy about it, and I've got their back on this.

janalsncm•6mo ago
Huggingface also offers a cloud product, but that doesn’t take away from downloading weights and running them locally.
sitkack•6mo ago
I believe that is what https://github.com/containers/ramalama set out to do.
cchance•6mo ago
I stopped using them when they started doing the weird model naming bullshit. Stuck with LM Studio since.
Aurornis•6mo ago
> Its imperative we move away ASAP

Why? If the tool works then use it. They’re not forcing you to use the cloud.

dcreater•6mo ago
There are many, many FOSS apps that use Ollama as a dependency. If Ollama rugs, then all those projects suffer.

It's a tale we've seen played out many times. Redis is the most recent example.

Hasnep•6mo ago
Most apps I've seen that integrate with Ollama just have an OpenAI-compatible API setting that defaults to port 11434 (the port Ollama uses) but can be changed easily. Is there a way to integrate Ollama more deeply?
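In other words, the "integration" is usually just a base URL; a minimal sketch of what such an app actually sends (the model name "llama3.1" is just an example, use whatever `ollama list` shows):

```
# Ollama's OpenAI-compatible endpoint on its default port; pointing the same request
# at llama-server, vLLM, etc. is just a base-URL swap.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "Hello"}]}'
```
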
dcreater•6mo ago
Yes, but I fear the average person will not understand that and will assume you need Ollama. That false perception is sufficiently damaging, I'm afraid.
jcelerier•6mo ago
happy sglang user here :)
prettyblocks•6mo ago
Local inference is becoming completely commoditized imo. These days even Docker has local models you can launch with a single click (or command).
fud101•6mo ago
I was trying to remove it but noticed they've hidden the uninstall away. It amounts to doing an `rm`, which is a joke.
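For reference, manual removal on Linux is roughly the following; a sketch assuming the default locations used by the official install script (the paths are assumptions, verify on your own system before deleting anything):

```
# Assumed default install-script locations; double-check each path before running.
sudo systemctl stop ollama && sudo systemctl disable ollama   # if it runs as a service
sudo rm /etc/systemd/system/ollama.service
sudo rm "$(which ollama)"        # typically /usr/local/bin/ollama
sudo rm -r /usr/share/ollama     # service user's home, including downloaded models
sudo userdel ollama && sudo groupdel ollama
rm -rf ~/.ollama                 # per-user models/keys, if present
```
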
irthomasthomas•6mo ago
If these are FP4 like the other ollama models then I'm not very interested. If I'm using an API anyway I'd rather use the full weights.
mchiang•6mo ago
OpenAI has only provided MXFP4 weights. These are the same weights used by other cloud providers.
irthomasthomas•6mo ago
Oh, I didn't know that. Weird!
reissbaker•6mo ago
It was natively trained in FP4. Probably both to reduce VRAM usage at inference time (fits on a single H100), and to allow better utilization of B200s (which are especially fast for FP4).
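Back-of-envelope on why it fits, with the parameter count and effective bits-per-weight as rough assumptions rather than published specs:

```
# ~117B total params at ~4.25 bits/param (MXFP4 values plus block scales); assumptions only
echo "117 * 4.25 / 8" | bc -l    # ~62 GB of weights, inside an H100's 80 GB
```
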
irthomasthomas•6mo ago
Interesting, thanks. I didn't know you could even train at FP4 on H100s
reissbaker•6mo ago
It's impressive they got it to work; the lowest I'd heard of so far was native FP8 training.
captainregex•6mo ago
I am so so so confused as to why Ollama of all companies did this, other than as an emblematic stab at making money, perhaps to appease someone putting pressure on them to do so. Their stuff does a wonderful job of enabling local inference for those who want it. So many things to explore there, but instead they stand up yet another cloud thing? Love Ollama and hope it stays awesome.
janalsncm•6mo ago
The problem is that OSS is free to use but it is not free to create or maintain. If you want it to remain free to use and also up to date, Ollama will need someone to address issues on GitHub. Usually people want to be paid money for that.
captainregex•6mo ago
Money is great! I like money! But if this is their version of "buy me a coffee," I think there's more room to run elsewhere given their skillset/area of expertise.
mchiang•6mo ago
hmm, I don't think so. This is more of, we want to keep improving Ollama so we can have a great core.

For the users who want GPUs, which cost us money, we will charge money for it. Completely optional.

scosman•6mo ago
I build an app against the Ollama API. If this will let me test all Ollama models, I'm so in.
ahmedhawas123•6mo ago
So much that is interesting about this

For one of the top local open-model inference engines of choice, only supporting OpenAI's gpt-oss out of the gate feels like an angle to just ride the hype, knowing gpt-oss was announced today: "oh, gpt-oss came out and you can use Ollama Turbo to run it."

The subscription-based pricing is really interesting. Other players offer this, but not for API-type services. I always imagined there would be a real pricing war with LLMs over time as capabilities mature, and moving to monthly pricing on API services is possibly a symptom of that.

What does this mean for the local inference engine? Does Ollama have enough resources to maintain both?

Havoc•6mo ago
That'll be an uphill battle on value proposition tbh. $20 a month for access to a widely available MoE 120B with ~5B active parameters at unspecified usage limits?

I guess their target audience values convenience and ease of use above all else, so that could play well there, maybe.

selcuka•6mo ago
> Turbo includes hourly and daily limits to avoid capacity issues. Usage-based pricing will soon be available to consume models in a metered fashion.

Doesn't look that much better than a ChatGPT Plus subscription.

cchance•6mo ago
$20... for the OpenAI open-source models, in preview only?
radioradioradio•6mo ago
Looks like Docker's "offload" product, but with less functionality and more vendor lock-in. The simple pricing both excites and worries me.
agnishom•6mo ago
> What is Turbo?

> Turbo is a new way to run open models using datacenter-grade hardware.

What? Why not just say that it is a cloud-based service for running models? Why this language?

owebmaster•6mo ago
Why use meaningful words in place of allegories like clouds, you ask?
fud101•6mo ago
No thanks, Ollama. I'd rather give the money to anyone but you grifters.
yahoozoo•6mo ago
Daily limits? Yawn.
rohansood15•6mo ago
The 'Sign In' link on the Ollama Mac App when you click Turbo doesn't work...
jmorgan•6mo ago
It should open ollama.com/connect – sorry about that. Feel free to message me jeff@ollama.com if you keep seeing issues
_giorgio_•6mo ago
Can anyone explain why this is a bad thing?

Is it because they developed a new Ollama which isn't open and which doesn't use llama.cpp?

ochronus•6mo ago
Ah, vague "limits". Hard pass.
hanifbbz•6mo ago
I like how the landing page (and even this HN page, until this point) completely misses any reference to Meta and Facebook. The landing page promises privacy, but anyone who knows how FB used VPN software to spy on people knows that, as long as the current leadership is in place, we shouldn't assume they've all of a sudden become fans of our privacy.
tuckerman•6mo ago
Ollama isn’t connected to Meta besides offering Llama as one of the potential models you can run.

There is obviously some connection to Llama (the original models giving rise to llama.cpp which Ollama was built on) but the companies have no affiliation.

santa_boy•6mo ago
Is there an evaluation of such services available anywhere? Looking for recommendations for similar services with usage-based pricing, and their pros and cons.

PS: looking for the most economical one to play around with, as long as it's a decent enough experience (minimal learning curve). But happy to pay too.

splittydev•6mo ago
OpenRouter is great. Less privacy I guess, but you pay for usage and you have access to hundreds of models. They have free models too, albeit rate-limited.
buyucu•6mo ago
More than one year in and Ollama still doesn't support Vulkan inference. Vulkan is essential for consumer hardware. Ollama is a failed project at this point: https://news.ycombinator.com/item?id=42886680
zozbot234•6mo ago
There's an open pull request https://github.com/ollama/ollama/pull/9650 but it needs to be forward ported/rebased to the current version before the maintainers can even consider merging it.

Also, realistically, Vulkan Compute support mostly helps iGPUs and older/lower-end dGPUs, where it can only bring a modest speedup in the compute-bound prompt-processing phase (modern CPU inference wins in the text-generation phase thanks to better memory bandwidth). There are exceptions, such as modern Intel dGPUs or perhaps Macs running Asahi, where Vulkan Compute can be more broadly useful, but these are also quite rare.

buyucu•6mo ago
That pull request has been open for more than a year. The owner rebased multiple times but eventually gave up because Ollama devs just don't care.
zozbot234•6mo ago
That's not a helpful point of view. It's the contributors' job to keep a pull request up to date as the codebase evolves; a maintainer is under no obligation to accept a PR that has long since become out of date and unmergeable.
buyucu•6mo ago
The PR was in good shape. Ollama devs ignored it, and the original author rebased it multiple times. Since Ollama devs don't care, he just gave up after a while.

Ollama is in a very sad state. The project is dysfunctional.

jp1016•6mo ago
At this point, can I purchase a subscription directly from the model provider or Hugging Face and use it? Or is this Ollama's attempt to become a provider like them?
factorialboy•6mo ago
In case the website isn't clear, this seems to be a paid-hosted service for models.
zacian•6mo ago
Does this mean we can access Ollama APIs for $20/mo and test them without running the models locally? I'm not hardware-rich, but for some projects I'd like reliable pricing.
st3fan•6mo ago
Does anyone know who or what ollama is in terms of people and company?
leopoldj•6mo ago
For production use of open-weight models I'd use something like Amazon Bedrock, Google Vertex AI (which uses vLLM), or on-prem vLLM/SGLang. But for a quick assessment of a model as a developer, Ollama Turbo looks appealing. I find Google GCP incredibly user-hostile and a nightmare for navigating quotas and the like.
aglazer•6mo ago
This is super exciting. Congratulations on the launch!