Barn Owls Know When to Wait

https://blog.typeobject.com/posts/2026-barn-owls-know-when-to-wait/
1•fintler•21s ago•0 comments

Implementing TCP Echo Server in Rust [video]

https://www.youtube.com/watch?v=qjOBZ_Xzuio
1•sheerluck•41s ago•0 comments

LicGen – Offline License Generator (CLI and Web UI)

1•tejavvo•3m ago•0 comments

Service Degradation in West US Region

https://azure.status.microsoft/en-gb/status?gsid=5616bb85-f380-4a04-85ed-95674eec3d87&utm_source=...
2•_____k•4m ago•0 comments

The Janitor on Mars

https://www.newyorker.com/magazine/1998/10/26/the-janitor-on-mars
1•evo_9•5m ago•0 comments

Bringing Polars to .NET

https://github.com/ErrorLSC/Polars.NET
2•CurtHagenlocher•7m ago•0 comments

Adventures in Guix Packaging

https://nemin.hu/guix-packaging.html
1•todsacerdoti•8m ago•0 comments

Show HN: We had 20 Claude terminals open, so we built Orcha

1•buildingwdavid•9m ago•0 comments

Your Best Thinking Is Wasted on the Wrong Decisions

https://www.iankduncan.com/engineering/2026-02-07-your-best-thinking-is-wasted-on-the-wrong-decis...
1•iand675•9m ago•0 comments

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
1•vyrotek•10m ago•0 comments

Trump Vodka Becomes Available for Pre-Orders

https://www.forbes.com/sites/kirkogunrinde/2025/12/01/trump-vodka-becomes-available-for-pre-order...
1•stopbulying•11m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•14m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•18m ago•1 comment

You can't QA your way to the frontier

https://www.scorecard.io/blog/you-cant-qa-your-way-to-the-frontier
1•gk1•19m ago•0 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
1•latentio•20m ago•0 comments

Robust and Interactable World Models in Computer Vision [video]

https://www.youtube.com/watch?v=9B4kkaGOozA
2•Anon84•23m ago•0 comments

Nestlé couldn't crack Japan's coffee market. Then they hired a child psychologist

https://twitter.com/BigBrainMkting/status/2019792335509541220
1•rmason•25m ago•1 comment

Notes for February 2-7

https://taoofmac.com/space/notes/2026/02/07/2000
2•rcarmo•26m ago•0 comments

Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
2•Willingham•33m ago•0 comments

The Big Hunger by Walter M. Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
2•shervinafshar•35m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•39m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
10•mooreds•40m ago•3 comments

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•41m ago•0 comments

Ask HN: How to Reduce Time Spent Crimping?

2•pinkmuffinere•42m ago•0 comments

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•47m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•49m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
3•saikatsg•49m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
2•aweussom•49m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
4•archb•51m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•52m ago•0 comments

Tell HN: We've come a long way since GPT-3 days

3•behnamoh•9mo ago
I remember the old days, when the only open-weight model out there was BLOOM, a 176B-parameter model, WITHOUT QUANTIZATION, that wasn't comparable to GPT-3 but still gave us hope that the future would be bright!

I remember when the local AI community was just a few thousand enthusiasts who were curious about these new language models. We sat on the sidelines and watched OpenAI make strides with their giant models, wishing we could bring at least some of that power to our measly little machines, locally.

Then Meta's Llama-1 leak happened, and it opened Pandora's box. Was it better than GPT-3.5? Not really, but it kick-started the push toward small, capable models. Llama.cpp was a turning point: people figured out how to run LLMs on a plain CPU.
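
For anyone who never saw it, here's roughly what that moment looked like in practice: a minimal sketch of CPU-only inference through the llama-cpp-python bindings. The model filename is hypothetical; any GGUF file would do.

    # pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        model_path="./llama-7b.Q4_K_M.gguf",  # hypothetical quantized model file
        n_ctx=2048,     # context window
        n_threads=8,    # plain CPU threads; no GPU required
    )
    out = llm("The meaning of life is", max_tokens=64)
    print(out["choices"][0]["text"])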

Then the community came up with GGML quants (a format later superseded by GGUF), making models even more accessible to the masses. Several companies joined the race to AGI: Mistral, with their Mistral-7B and Mixtral models, brought real performance to small models and opened our eyes to the power of MoE.
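
The core trick behind those quants is block-wise scaling: store a low-bit integer per weight plus one scale per block. Here's a toy illustration in NumPy; the real GGML/GGUF formats pack bits and pick ranges differently, so treat this as a sketch of the idea, not the format.

    import numpy as np

    def quantize_q4(w, block=32):
        # Toy block-wise 4-bit absmax quantization (weight count must
        # be divisible by the block size).
        w = w.reshape(-1, block)
        scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # map into -7..7
        scale = np.maximum(scale, 1e-8)                     # avoid div-by-zero
        q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
        return q, scale

    def dequantize_q4(q, scale):
        return (q.astype(np.float32) * scale).reshape(-1)

    w = np.random.randn(4096).astype(np.float32)
    q, s = quantize_q4(w)
    print(np.abs(w - dequantize_q4(q, s)).mean())  # small error at ~4.5 bits/weight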

Many models and finetunes kept popping up. TheBloke tirelessly provided quants of all of them. Then one day they went silent, and we never heard from them again (I hope they're OK).

You could tell this was mostly an enthusiasts' hobby by looking at the project names! The one that was really out there was "oobabooga". The thing was actually called "Text Generation Web UI", but everyone kept calling it ooba or oobabooga (that's its creator's username).

Then came the greed... Companies figured out there was potential in this, so they built new language models for their own bottom-line reasons. It didn't matter to us, since we kept getting good models for free (though sometimes the licenses were restrictive, and we ignored those models).

When we found out about LoRA and QLoRA, it was a game-changer. So many people finetuned models for various purposes. I kept asking: do you guys really use this for role-playing? And it turns out yes, many people liked the idea of talking to various AI personas. Soon people figured out how to bypass guardrails through prompt-injection attacks and other techniques.
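
For anyone who missed why it was such a big deal: LoRA freezes the pretrained weights and trains only a low-rank update on top of them, so the trainable parameter count collapses. A minimal PyTorch sketch of the idea (not the peft library's actual implementation; QLoRA additionally keeps the frozen base in 4-bit):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, r=8, alpha=16):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # freeze the pretrained weights
            # Trainable low-rank factors: update = B @ A, scaled by alpha/r.
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as a no-op
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    layer = LoRALinear(nn.Linear(4096, 4096), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(trainable)  # 65,536 trainable params vs ~16.8M in the full layer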

Now, three years later, we have dozens of open-weight models. I say open-WEIGHT because I think I've only seen one or two truly open-SOURCE models. I've seen many open-source tools built for and around these models, so many wrappers, so many apps. Most are abandoned now. I wonder if their developers ever realized they were in high demand and could have been paid for their hard work, had they not just released everything out in the open.

I remember the GPT-4 era: papers and models started flooding my feed. It was so overwhelming that I started to think: "is this what the singularity feels like?" I know we're nowhere near the singularity, but the pace of advancement in this field, and the need to stay updated at all times, has truly been amazing! OpenAI used to say they didn't open-source GPT-3 because it was "too dangerous" for society. We now have far more capable open-weight models that make GPT-3 look like a toy, and guess what: no harm came to society. Business as usual.

A question we kept getting was: "can this 70B model run on my 3090?" Clearly the appeal of running LLMs locally was strong, as GPU prices can attest. I remain hopeful that Nvidia's monopoly will crumble and we'll get more competitive prices and products from AMD, Intel, Apple, etc.
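
The back-of-the-envelope math behind that question never changed: weight memory is roughly parameter count times bits-per-weight divided by eight, plus headroom for the KV cache and activations. A rough sketch (the 20% overhead factor is my own crude assumption):

    def vram_gb(params_billions, bits_per_weight, overhead=1.2):
        # parameters * (bits / 8) bytes per weight, with ~20% headroom on top
        return params_billions * bits_per_weight / 8 * overhead

    for bits in (16, 8, 4):
        print(f"70B @ {bits}-bit: ~{vram_gb(70, bits):.0f} GB")
    # 16-bit: ~168 GB, 8-bit: ~84 GB, 4-bit: ~42 GB -- even fully
    # quantized, a 70B model overflows a 24 GB RTX 3090, hence all
    # the CPU-offloading and multi-GPU tricks.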

I appreciate everyone who taught me something new about LLMs and everything related to them. It's been a journey.