frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
487•klaussilveira•7h ago•130 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
828•xnx•13h ago•495 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
48•matheusalmeida•1d ago•5 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
163•isitcontent•8h ago•18 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
104•jnord•4d ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
159•dmpetrov•8h ago•74 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
57•quibono•4d ago•10 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
267•vecti•10h ago•127 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
334•aktau•14h ago•161 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
216•eljojo•10h ago•136 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
329•ostacke•13h ago•87 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
31•kmm•4d ago•1 comment

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
418•todsacerdoti•15h ago•220 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
9•denuoweb•1d ago•0 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
8•romes•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
349•lstoll•14h ago•245 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
55•phreda4•7h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
205•i5heu•10h ago•150 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
117•vmatsiiako•12h ago•43 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
155•limoce•3d ago•79 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
30•gfortaine•5h ago•4 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
12•gmays•3h ago•2 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
254•surprisetalk•3d ago•32 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1008•cdrnsf•17h ago•421 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
50•rescrv•15h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
83•ray__•4h ago•40 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
41•lebovic•1d ago•12 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
32•betamark•15h ago•28 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
41•nwparker•1d ago•11 comments

Calibre adds AI "discussion" feature

https://lwn.net/Articles/1049886/
27•pykello•1mo ago

Comments

gardenerik•1mo ago
I struggle to understand the pushback against AI features. As long as the feature isn't intrusive, it seems like a minor addition, and may even be useful to some people. LLMs are here to stay, there is no denying that at this point.
rkomorn•1mo ago
I guess it depends on your definition of "intrusive".

I have no interest in any of the AI features that have been added to the UIs of Meta products (WhatsApp, and Messenger), yet still see prompts for them and modified UIs to try and get me to engage with Meta AI.

Same goes with Gemini poking its head into various spots in the UIs of the Google products I use.

There are now UI spots I can accidentally tap/click and get dropped into a chat with an AI in various things I use on a daily basis.

There are also more "calls to action" for AI features, more "hey do you wanna try AI here?" prompts, etc.

It's not just the addition of AI features, it's all the modern, transparent desperation-for-metrics-to-go-up UX bits that come with it.

And yes, some of these things were around before this wave of AI launches, but a- that doesn't make it better, and b- all the AI features are seemingly the same across apps, so now we have bunches of apps all pushing the same "feature" at us.

gardenerik•1mo ago
I agree with you that the push towards them is annoying. (Google's "Your phone has new exciting features.")

In this case, Calibre does not seem to introduce any such annoyances (probably because it is FOSS, so there is no pressure for adoption), but people are upset anyway.

There are many features I don't use in various software, but it never made me complain that a new icon/menu entry appeared.

rkomorn•1mo ago
I think there are "classes" of features people have disliked. Eg: every social media app added "stories" at some point, using up screen real estate. Same goes with "shorts/reels/etc".

It's one thing when a feature gets added to an app.

It's another thing when it happens in a context where every app is doing it (or something similar), and you see it in every facet of your tech life.

netsharc•1mo ago
WhatsApp has now told me twice about Lisa Blackpink.. I wanted to write my friend Lisa, and I talk to her on Instagram and I don't have her on WhatsApp. So searching for her on WhatsApp gives me 2 unrelated contacts, and then Meta Ducking AI suggestions, of which the top one is Lisa Blackpink. Then, further down the screen (hidden by the keyboard) I can see chats where I've mentioned her to mutual friends, but fucking nooo, it's more important that Fuckerberg shoves AI down our throats.

WhatsApp should release their most searched terms on AI, I bet it would correlate with most common names among WhatsApp users...

mold_aid•1mo ago
>LLMs are here to stay, there is no denying that at this point.

You make LLMs sound like a stalker, or your mom's abusive live-in boyfriend

afavour•1mo ago
My problem is that all AI features are currently wildly underpriced by tech giants who are providing subsidies in the hopes of us becoming reliant upon them. Not to mention it means we’re feeding all kinds of our own behavioural data to these giants with very little visibility.

Any new feature should face a very simple cost/benefit analysis. The average user currently can’t do that with AI. I think AI in some form is inevitable. But what we see today (hey, here’s a completely free feature we added!) is unsustainable both economically and environmentally.

adastra22•1mo ago
Actually, frontier-lab pricing is way more expensive than the actual cost. Look up the prices for e.g. Kimi K2 on OpenRouter to see the real “unsubsidized” costs. It can be up to an order of magnitude less.
TechSquidTV•1mo ago
Summarizing text can very easily be done by local AI. Low-powered, and free. For this type of task, there is essentially no reason to pay.
afavour•1mo ago
Which is not what is happening here. I think a lot of people’s objections would be resolved by a local model.
gardenerik•1mo ago
> Currently, calibre users have a choice of commercial providers, or running models locally using LM Studio or Ollama.

The choice is yours. If you want local models, you can do that.
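For concreteness, here is a minimal sketch of what the local-model route can look like, assuming an Ollama server running on its default port (11434) with a model already pulled; the model name, prompt shape, and helper names are illustrative, not Calibre's actual implementation:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(book_title: str, question: str, model: str = "llama3") -> dict:
    # Compose a prompt that grounds the question in a specific book.
    prompt = f'You are discussing the book "{book_title}". {question}'
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(payload: dict) -> str:
    # Send the request to the local Ollama server; nothing leaves the machine.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = build_request("Moby-Dick", "Who is Queequeg?")
# ask_local(payload) would return the model's answer if a server is running.
```

Because the request goes to localhost, nothing is sent to a commercial provider; LM Studio exposes a similar local HTTP endpoint.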

troyvit•1mo ago
This is totally true and points to why Calibre's feature adds value. However, I think the big players see exactly what you see and are scrambling to become people's go-to first. I believe this is for two main reasons. The first is that it's the only way they know how to win, and they don't see any option other than winning. The second is that they want the data that comes with it so they can monetize it. People switching to local models has a chance to take all that away, so cloud providers are doing everything they can to make their models easier to use and more integrated.
nottorp•1mo ago
Someone wasted their time to add this feature when I can just throw some book titles at any of the other chatbots instead.

Perhaps to the detriment of some compatibility features that got sent to the back burner.

throwawa14223•1mo ago
Why is it obvious they are here to stay?
mcphage•1mo ago
If they added a menu item “Kick a puppy”, and every time you clicked it, a puppy somewhere got kicked, would your response be “oh, well, I don’t like kicking puppies, so I just won’t click it, no big deal”?
jlarocco•1mo ago
People are annoyed because the rollout of AI has been very intrusive. It's being added to everything, even when it doesn't make sense. It's this generation's version of having an app for every website. Does Calibre really need its own AI chatbox when I can ask the same question to ChatGPT in a browser?
squigz•1mo ago
Do you really need AI integration in your IDE when you can just use the ChatGPT chatbox in your browser?

Having it built-in allows Calibre to add context to the prompt automagically.
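As a sketch of what "adding context automagically" could mean, here is a hypothetical helper (not Calibre's actual code) that prepends book metadata and the currently selected passage to the user's question before it reaches the model:

```python
def prompt_with_context(question: str, title: str, author: str, selection: str) -> str:
    """Prepend book metadata and the selected passage so the model
    answers in context without the user pasting anything manually."""
    return (
        f"Book: {title} by {author}\n"
        f"Selected passage:\n{selection}\n\n"
        f"Question: {question}"
    )

p = prompt_with_context(
    "What does this metaphor refer to?",
    "Moby-Dick",
    "Herman Melville",
    "All visible objects, man, are but as pasteboard masks.",
)
```

A browser chatbox can't do this step for you; the reader application knows which book and which passage you are looking at.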

danielscrubs•1mo ago
For my part it is the uneasy feeling of maybe being tracked and “sharing” private text/media either by accident or by the software’s malice.

Most devs want to put AI on their CV so they have strong personal incentives to circumvent what is best for their users.

Would you like to have LLM connections to Google from your OSS torrent client?

We can see the writing on the wall, but still not like it.

on_the_train•1mo ago
People don't want it, plain and simple. Yet again here, like a thousand times before: it still gets forced on users. I don't know who's orchestrating this madness, but it's pathetic
DetectDefect•1mo ago
> I struggle to understand the pushback against AI features.

To develop and sustain these "AI features", human intelligence - manifested as countless hours of work published online and elsewhere - was appropriated and used without permission to further increase the asymmetry of knowledge/power between those with political power and those without (mostly vulnerable, marginalized cohorts).

gmuslera•1mo ago
I think it adds value. Having a conversation with or about a book/document with an AI is a good use case, and having that feature as an optional, non-forced part of a book management solution is a good match.

It is not something that runs regardless of whether we configure or activate it. Might it broaden AI use among people who find it useful? Yes. Would that end up as dependency on a particular provider? That depends on how we use it. A lot of similar decisions were already made for most of us in the past, like using search engines or a narrow/builtin set of browsers or desktop/mobile OSs. If using AIs is a concern, then the ship sailed long ago for much bigger things.

Arodex•1mo ago
[flagged]
VertanaNinjai•1mo ago
If you’re against anthropomorphism of LLMs then how can it “encourage” you if you’re not having a conversation? How could it “convince” you of anything or cast something in a bad light without conversing?

Your point about censorship, however, I fully agree with.

janice1999•1mo ago
> If you’re against anthropomorphism of LLMs then how can it “encourage” you if you’re not having a conversation?

Humans are more than biased word predictors.

halJordan•1mo ago
That has nothing to do with the guy who said stop anthropomorphizing llms and then proceeded to anthropomorphize an llm.
gmuslera•1mo ago
I'm interacting with a language model, using language and normal phrases. That is basically a conversation from my point of view, as it is mostly indistinguishable from saying the same things and getting similar answers from a real person who had read that book. It doesn't matter what is on the other side, because we are talking about the exchanged text, not the participants or whatever they might have in their minds or chips.
e-khadem•1mo ago
Safety is a valid concern in general, but avoidance is not the right way to approach it. Democratizing access to such tools (and developing a somewhat open ecosystem around them) for researchers and the general public is the better way, IMO. This way people with more knowledge (not necessarily technical; philosophers, for example) can experiment with and explore this space and guide its development going forward.

Also, the base assumption of every prospering society is a population that cares about and values its freedom and rights. If society drifts towards becoming averse to learning about these virtues ... well, there will be consequences (and yes, we are going this way; for example, look at the current state of politics, wealth distribution, and labor rights in the US. People would have been a lot more resentful of this in the 1960s or '70s).

The same is true of AI systems. If the general public (or at least a good percentage of researchers) studies them well enough, they will force this alignment with true human values. By contrast, censorship or less equitable, harder access and delayed evaluation are really detrimental to this process (more sophisticated and hazardous models will be developed without any feedback from intellectuals or society, and those misaligned models can cause a lot of harm in the hands of a rogue actor).

emodendroket•1mo ago
> Will it report me if I try to discuss "The anarchist's cookbook" with it?

I don’t know. Weren’t you already running that risk with “download metadata”?

resfirestar•1mo ago
The addition doesn't really bother me because Calibre is already full of features that seem utterly useless, so I trust its author to add new stuff without ruining the parts that are useful to me. Still, does anyone actually use "ChatGPT bolted onto the ebook reader" type features for anything besides cheating on school assignments? Lack of web search tools makes them suboptimal for asking clarifying questions or getting recommendations. Makes some sense on a Kindle where you can't exactly alt-tab to ask ChatGPT directly, not so much in a desktop application.

Not to say that there's no use case (I'd be interested to try a LLM-aided notetaking tool), just that adding a chat box is hardly a feature.

another_twist•1mo ago
It's useful for answering questions related to back-references, e.g. "Who was this character?" if you're reading a novel with too many of them.
resfirestar•1mo ago
Oh true, that actually sounds useful. I often just avoid that kind of novel because I'm terrible with names.
squigz•1mo ago
> After much pushback, it looks as though users will get the ability to hide the feature from calibre's user interface, but LLM-driven features are here to stay and more will likely be added over time.

With the whole "no local models?! mega corp censorship!" complaint sidestepped from day 1, and now that it's not even shown on the UI, what will AI opponents complain about?!