frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
58•theblazehen•2d ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
638•klaussilveira•13h ago•188 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
936•xnx•18h ago•549 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
35•helloplanets•4d ago•31 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
113•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
13•kaonwarb•3d ago•12 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
45•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
214•dmpetrov•13h ago•106 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
324•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
374•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
479•todsacerdoti•21h ago•238 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
279•eljojo•16h ago•166 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
407•lstoll•19h ago•273 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
17•jesperordrup•3h ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
58•kmm•5d ago•4 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
27•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•193 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
14•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•11h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
143•vmatsiiako•18h ago•65 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1061•cdrnsf•22h ago•438 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
179•limoce•3d ago•96 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
284•surprisetalk•3d ago•38 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
137•SerCe•9h ago•125 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•21h ago•23 comments

Anti-patterns while working with LLMs

https://instavm.io/blog/llm-anti-patterns
76•mkagenius•2mo ago

Comments

sharkjacobs•2mo ago
This post is a mess. The best advice is clear and specific, and this is neither.

The examples are at best loosely related to the points they're supposed to illustrate.

It's honestly so bad that I cynically suspect the post was created solely to promote click3 in the first bullet, and that four more bullets were then generated to make it a "whole" post.

CGMthrowaway•2mo ago
The five anti-patterns (or remedies, rather):

  1. Don't re-send info you've already sent (be resourceful) 
  2. Play to model strengths (e.g. generating an image of text vs generating text in an image, or coding/executing a string counter rather than counting a search string)
  3. Stay aware of declining accuracy as context window fills up
  4. Don't ask for things it doesn't know (e.g. obscure topics or material outside the cutoff window)
  5. Careful with vibe coding
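
Remedy 2 is the classic letter-counting case: instead of asking the model to count characters directly (a known weak spot), ask it to write and run a trivial counter, which is deterministic. A minimal sketch in Python (the function name is mine, not from the article):

```python
# Deterministic string counting: the kind of task to delegate to code
# rather than to the model's token-level "perception" of a word.
def count_occurrences(text: str, target: str) -> int:
    """Count case-insensitive occurrences of a substring."""
    return text.lower().count(target.lower())

print(count_occurrences("strawberry", "r"))  # 3
```
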
meander_water•2mo ago
Yeah I was hoping for a lot more from the title.
QuadrupleA•2mo ago
Yeah, there's not a lot that's actionable here... mostly boils down to "try a lot of stuff yourself and find out what LLMs are good and bad at." Counting the Rs in "strawberry", generating specific text in Nano Banana, probing what it knows, etc. Don't do those specific things because obviously (?) models are bad at them.
clickety_clack•2mo ago
Maybe this guy has been searching so hard for structural issues with his LLM interaction because his writing is just so so bad.
isodev•2mo ago
Perhaps we can add that using LLMs for logical, creative or reasoning tasks (things the technology isn’t capable of doing) is an anti-pattern.
johnfn•2mo ago
I use LLMs for those purposes all the time and they seem to work for me.
qsort•2mo ago
What would be examples of tasks of that type? I hate hype as much as the next guy, but frankly I don't think you can support that assertion.
xgulfie•2mo ago
It is at the very least an anti-pattern in the same way hiring an assistant to do the work for you is an anti-pattern
shermantanktop•2mo ago
I agree, moving into management is definitely an anti-pattern.
pan69•2mo ago
I use LLMs as a sounding board for logical, creative or reasoning tasks all the time, as they can provide different points of view that make ME think about a problem differently.
Yoric•2mo ago
In my experience, LLMs work pretty nicely as rubber ducks, for logical, creative or reasoning tasks. They'll make lots of mistakes, but if you know the field, they are often (not always, though) easy to detect and brush off.

Whether that's worth the environmental or social cost, of course, remains open for debate.

epolanski•2mo ago
> reasoning tasks

Please provide a definition of reasoning.

fluoridation•2mo ago
Reasoning is the manipulation and transformation of symbols (i.e. stand-ins for real objects) by well-defined rules, often with the object of finding equivalences or other classes of relationships between seemingly unrelated things.

For example, logical reasoning is applying common logical transformations to propositions to determine the truth relationship between different statements. Spatial reasoning is applying spatial transformations (rotation, translation, sometimes slight deformation) to shapes to determine their spatial relationship, such as "can I fit this couch through that doorway if I rotate it in some way?"

Reasoning has the property that a valid reasoning applied to true data always produces a correct answer.
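
That closing property (valid reasoning applied to true premises never yields a false conclusion) can be checked mechanically for small cases. A toy sketch, assuming propositional logic with modus ponens as the example rule: enumerate every truth assignment and confirm the conclusion holds wherever both premises hold.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q."""
    return (not p) or q

# Modus ponens: from (p -> q) and p, conclude q. The rule is valid
# because q is true in every assignment where both premises are true.
modus_ponens_valid = all(
    q for p, q in product([True, False], repeat=2) if implies(p, q) and p
)
print(modus_ponens_valid)  # True
```
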

Scotrix•2mo ago
I played a lot with LLMs over the last year and built a multitude of products with them, and it's always the same bottom-line outcome:

  - be specific
  - keep it small
  - be precise when adding context
  - don't expect magic; shift deterministic requirements to deterministic code execution layers

All this is awfully painful to manage with current frameworks and SDKs: somehow a weird mix of over-engineered stuff that misses the actual point of making things traceable and easily changeable once they get complex (my unpopular personal opinion, sorry). So I built something out of my own need and started offering it (quite successfully so far) to family and friends to get a handle on it. Have a look: https://llm-flow-designer.com

pedropaulovc•2mo ago
This post resonates a lot with me. I've been experimenting with Claude Code to write code that interacts with the SolidWorks SDK. It is both extremely complex (method calls with 15+ bools, doubles, strings; abbreviated parameters; etc) and obscure. It has been like pulling teeth. Claude would hallucinate methods, parameters etc.

I tried using Context7 MCP with the SolidWorks docs but the results were not satisfactory. I ended up crawling SW's documentation in HTML, painstakingly translating it to markdown and organizing it to optimize greppability [1]. I then created a Claude skill to instruct CC to consult the docs before writing code [2]. It is still stubborn and sometimes does not obey my instructions but it did improve the quality of the code. Claude would need 5 to 10 rounds of debugging before getting code to compile. Now it gets it in 1 to 2 rounds.

[1] https://github.com/pedropaulovc/offline-solidworks-api-docs

[2] https://github.com/pedropaulovc/harmonic-analyzer/blob/main/...
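
For readers curious what "organizing for greppability" can look like, here is a stripped-down sketch (my guess at the general shape, not the actual pipeline in [1]) that flattens an HTML doc page into markdown-style text using only the standard library; the interface name in the sample is illustrative:

```python
from html.parser import HTMLParser

class DocTextExtractor(HTMLParser):
    """Collect page text, promoting <h1>/<h2> contents to markdown headings
    so API names land on grep-friendly heading lines."""
    def __init__(self):
        super().__init__()
        self.lines = []
        self._heading = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2"):
            self._heading = "#" if tag == "h1" else "##"

    def handle_endtag(self, tag):
        if tag in ("h1", "h2"):
            self._heading = None

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        self.lines.append(f"{self._heading} {text}" if self._heading else text)

parser = DocTextExtractor()
parser.feed("<h1>IModelDoc2.AddDimension2</h1><p>Adds a dimension.</p>")
print("\n".join(parser.lines))  # a markdown heading, then the body text
```
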

willvarfar•2mo ago
I have had both frustration and joy working with AI. Just some anecdata:

I have found it is far better at understanding big sprawling codebases (and, with prodding, determining the root causes of bugs) than it is at writing anything, even in simple codebases.

Recently I asked an AI to compare and contrast two implementations of the same API written in different languages to find differences, and it found some very subtle things and impressed me. It got a lot wrong, but that was because one of the implementations had lots of comments that it took at face value. I then wrote a rough spec of what the API should do and it compared the implementations to the API and found more problems. Was a learning experience for me writing specs too.

I repeated the exercise of comparing two implementations to track down a nasty one-line bug in an objc -> swift port. I wasn't familiar with the codebase and didn't remember much about those languages, so it was a big boon, and I didn't have to track down the people who owned the code until I was fairly sure the bug had been found.

Also recently I asked an AI to compare two sets of parquet files, and it did sensible things like downloading just bits of them and inspecting metadata, and it ended up recommending that I change some settings when authoring one of the sets of parquet files to dramatically improve compression. It needed Esc and prodding at the halfway point, but it still got there. Was great to watch.

And finally I've asked an AI a detailed question about database internals and vectorising predicates and it got talking about 'filter masks' and then, in the middle of the explanation, inserted an image to illustrate. Of 'filter masks' in the PPE sense. Hilariously wrong!

butlike•2mo ago
Can someone espouse some positive LLM use? Correlating data seems useful, but...

...I've been negative on LLM use recently. I've somewhat mentally decided an LLM is a Google search that tries to make you feel good (like you're collaborating with other people), and if you strip that away, you get essentially an (admittedly decent) Wikipedia search on a topic. The data correlation can give new insights, but I'm struggling to see how an LLM creates anything _new_. If the LLM is fed its own correlated data, it gets confused after a while (e.g. context poisoning or whatever).

So if I strip away the platitudes, isn't an LLM just a Wikipedia search that eventually gets confused, to most people, and a research assistant that might lie to you (by also having its context get confused), to researchers?

hamasho•2mo ago

  > For example, the other day, it completely forgot about a database connection URL I had given it and started spitting someone else's database URL in the same session.
Something similar happened to me. Our team managed multiple instances for our development environment. I was working with the instance named <product>-develop-2, and explicitly told Claude Code to use that one.

  $ aws ec2 describe-instances ...
  <product>-develop    # shared develop instance
  <product>-develop-2  # a development instance where a developer can do anything
  <product>-develop-3  # another development instance
  <product>-staging
  <product>-production
Claude used the correct instance for a while and wrote multiple one-off Python scripts to operate on it. But at some point, for no apparent reason, it switched the target to the shared one, <product>-develop. I should have checked the code more carefully, but I didn't pay enough attention to the first few lines of dozens of lines of code, where all the config was written, because they always looked the same and I was mostly focused on the main function.

  import boto3
  import os
  ...
  AWS_REGION=xxx
  AWS_PROJECT=yyy
  EC2_INSTANCE=<product>-develop  # <- at some point this changed without any reason
  S3_BUCKET=zzz
  ...
  def main():  # <- all my attention is here
      # ~100 lines of code
As a result, it modified the shared instance and caused a little confusion for my team. Luckily it wasn't a big issue. But I would have been very scared if it had targeted production, and now I pay more attention to the config part than to the main logic.
mulquin•2mo ago
Is there a way for you to split the configuration out into a separate file? That way you can tell it to explicitly ignore it altogether.
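One hypothetical way to implement that suggestion: move the target into a config file the agent is told not to edit, and have the script fail fast if the loaded value ever drifts from the one instance it is allowed to touch. A sketch (all names below are made up for illustration):

```python
import json

# The only instance this script may operate on. If a generated script
# silently rewrites the config, the guard below refuses to run.
ALLOWED_INSTANCE = "product-develop-2"

def load_instance(path: str) -> str:
    """Read the EC2 instance name from config and verify it hasn't drifted."""
    with open(path) as f:
        instance = json.load(f)["ec2_instance"]
    if instance != ALLOWED_INSTANCE:
        raise RuntimeError(
            f"Refusing to run against {instance!r}; expected {ALLOWED_INSTANCE!r}"
        )
    return instance
```

The same check could live in a wrapper the agent cannot bypass; the point is that the safety rail is code, not a polite instruction in the prompt.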
abenga•2mo ago
It should not be possible to connect to the production database from your local machine, especially if the tool you are using to write and run your code is configured by polite entreaties that can be ignored willy nilly.
hamasho•2mo ago
I mean, yes, our team and I were too lazy to set things up correctly because we were in a hurry to ship an AI product, only for it to be replaced later by OpenAI's much better version.
russfink•2mo ago
“Build me a fishing pole” instead of “catch me a fish” is the best advice this article has (the fishing metaphor is mine).
1970-01-01•2mo ago
The most striking anti-pattern is still overconfident output. This is my biggest concern when working with LLMs. Calling out a mistake is hit and miss. AI doesn't seem to "care" (do a rethink) unless you bark at its glaring error several times.