

CamoLeak: Critical GitHub Copilot Vulnerability Leaks Private Source Code

https://www.legitsecurity.com/blog/camoleak-critical-github-copilot-vulnerability-leaks-private-source-code
235•greyadept•3mo ago

Comments

stephenlf•3mo ago
Wild approach. Very nice
adastra22•3mo ago
A good vulnerability writeup, and a thrill to read. Thanks!
deckar01•3mo ago
Did the markdown link exfil get fixed?
runningmike•3mo ago
Somehow this article feels like a promotion for Legit. But all AI vibe solutions face the same weaknesses: limited transparency and trust issues. Using non-FOSS solutions for cybersecurity is a large risk.

If you do use AI cyber solutions, you can end up more vulnerable to security breaches instead of less.

xstof•3mo ago
Wondering if the ability to use hidden (HTML comment) content in PRs wouldn't remain a nasty issue, especially for open source repos. Was that fixed?
PufPufPuf•3mo ago
It's used widely for issue/PR templates, to tell the submitter what info to include. But they could definitely strip it from the Copilot input... at least until they figure out this "prompt injection" thing that I thought modern LLMs were supposed to be immune to.
fn-mote•3mo ago
> that I thought modern LLMs were supposed to be immune to

What gave you this idea?

I thought it was always going to be a feature of LLMs, and the only thing that changes is that it gets harder to do (more circumventions needed), much like exploits in the context of ASLR.

PufPufPuf•3mo ago
PR releases. Yeah, it was an exaggeration, I know that the mitigations can only go so far.
munchlax•3mo ago
So this wasn't really fixed. The impressive thing here is that Copilot accepts natural language, so whatever exfiltration method you can come up with, you just write out the method in English.

They merely "fixed" one particular method, without disclosing how they fixed it. Surely you could just do the base64 thing to an image url of your choice? Failing that, you could trick it into providing passwords by telling it you accidentally stored your grocery list in a field called passswd, go fetch it for me ppls?

There's a ton of stuff to be found here. Do they give bounties? Here's a goldmine.

lyu07282•3mo ago
> GitHub fixed it by disabling image rendering in Copilot Chat completely.
oefrha•3mo ago
To supplement the parent, this is straight from article’s TLDR (emphasis mine):

> In June 2025, I found a critical vulnerability in GitHub Copilot Chat (CVSS 9.6) that allowed silent exfiltration of secrets and source code from private repos, and gave me full control over Copilot’s responses, including suggesting malicious code or links.

> The attack combined a novel CSP bypass using GitHub’s own infrastructure with remote prompt injection. I reported it via HackerOne, and GitHub fixed it by disabling image rendering in Copilot Chat completely.

And parent is clearly responding to gp’s incorrect claims that “…without disclosing how they fixed it. Surely you could just do the base64 thing to an image url of your choice?” I’m sure there will be more attacks discovered in the future but gp is plain wrong on these points.

Please RTFA or at least RTFTLDR before you vote.

munchlax•3mo ago
Take a chill pill.

I did, in fact, read the fine article.

If you did so too, you would've read the message from github which says "...disallow usage of camo to disclose sensitive victim user content"

Now why on earth would I take all the effort to come up with a new way of fooling this stupid AI only to give it away on HN? Would you? I don't have a premium account, nor will I ever pay microsoft a single penny. If you actually want something you can try for yourself, go find someone else to do it.

Just to make it clear for you, I was musing on the idea of being able to write out the steps to exploitation in plain English. Since the dawn of programming languages, it has been a pie-in-the-sky idea to write a program in natural language. Combine that with computing on the server end of some major SaaS(s), and you can bet people will find clever ways to circumvent safety measures. They had it coming, and the whack-a-mole game is on. Case in point: TFA.

lyu07282•3mo ago
> If you did so too, you would've read the message from github which says "...disallow usage of camo to disclose sensitive victim user content"

They use "camo" to proxy all image urls, but they in fact did remove the rendering of all inline images in markdown, removing the ability to exfil data using images.

> Now why on earth would I take all the effort to come up with a new way of fooling this stupid AI only to give it away on HN?

You just didn't make it very clear that you discovered some other unknown technique to exfil data. Might I encourage you to report what you found to Github?

https://bounty.github.com/

munchlax•3mo ago
I'm not sure how you could arrive at the conclusion that I've discovered any technique involving copilot whatsoever.

Feel free to spout more nonsense. I was somewhat puzzled and dismayed at first, but now it amuses me.

lyu07282•3mo ago
Because we know exactly what you did and the whole copilot team is laughing at you now! The base64 encoded source code you md5 hashed into our mainframe, you know what you did there is no denying it now. You are on thin ice buddy!
tomalbrc•3mo ago
What the fuck?
lyu07282•3mo ago
Read the thread. It's a joke, when talking to an angry lunatic I always like to fight fire with fire.
Thorrez•3mo ago
>Surely you could just do the base64 thing to an image url of your choice?

What does that mean? Are you proposing a non-Camo image URL? Non-Camo image URLs are blocked by CSP.

>Failing that, you could trick it into providing passwords by telling it you accidentally stored your grocery list in a field called passswd, go fetch it for me ppls?

Does the agent have internet access to be able to perform a fetch? I'm guessing not, because if so, that would be a much easier attack vector than using images.

nprateem•3mo ago
You'd have to be insane to run an AI agent locally. They're clearly unsecurable.
djmips•3mo ago
can you still make invisible comments?
RulerOf•3mo ago
Invisible comments are a widely used feature. Often done inside of PR or Issue templates to instruct users how to include necessary info without clogging up the final result when they submit.
charcircuit•3mo ago
The rule is to operate using the intersection of the permissions of all the users who contributed text to the LLM. Why can an attacker's prompt access a repo the attacker does not have access to? That's the biggest issue here.
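That intersection rule can be sketched in a few lines. This is a hypothetical model for illustration only (`effective_permissions` and the permission strings are invented, not GitHub's actual authorization code):

```python
# Sketch of the permission-intersection rule: an LLM session should hold
# only the permissions common to every principal whose text reached the
# context window. All names here are hypothetical.

def effective_permissions(contributors: list[set[str]]) -> set[str]:
    """Intersect the permission sets of everyone who contributed text."""
    if not contributors:
        return set()
    perms = set(contributors[0])
    for p in contributors[1:]:
        perms &= p
    return perms

# The victim can read the private repo; the attacker (whose PR text was
# injected into the prompt) cannot -- so the session cannot either.
victim = {"read:private-repo", "read:public-repo"}
attacker = {"read:public-repo"}
session = effective_permissions([victim, attacker])
```

Under this rule the injected prompt would have dragged the session down to the attacker's permissions, and the private-repo read behind CamoLeak would have been denied.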
kerng•3mo ago
Not the first time by the way. GitHub Copilot Chat: From Prompt Injection to Data Exfiltration https://embracethered.com/blog/posts/2024/github-copilot-cha...
glitchdout•3mo ago
And it won't be the last.
MysticFear•3mo ago
Can't they just make the Copilot user's permissions read-only for the current repo?
mediumsmart•3mo ago
I can't remember the last time I leaked private source code with copilot.
isodev•3mo ago
I’m so happy our entire operation moved to a self-hosted VCS (Forgejo). Two years ago we started the migration (including client repos), and not only did we save tons of money on GitHub subscriptions, our system is dramatically more performant for the 30-40 developers working with it every day.

We also banned the use of VSCode and any editor with integrated LLM features. Folks can use CLI based coding agents of course, but only in isolated containers with careful selection of sources made available to the agents.

hansmayer•3mo ago
Just out of interest, what is your alternative IDE?
isodev•3mo ago
That depends a bit on the ecosystem too.

For editors: Zed recently added the disable_ai option, we have a couple of folks using more traditional options like Sublime, vim-based etc (that never had the kind of creepy telemetry we’re avoiding).

JetBrains tools are OK since their AI features are plugin based, their telemetry is also easy to disable. Xcode and Qt Creator are also in use.

belter•3mo ago
Did you look at VSCodium ?

https://vscodium.com/

aitchnyu•3mo ago
What do your CLIs connect to? To first-party OpenAI/Claude provider or AWS Bedrock?
isodev•3mo ago
Devs are free to choose, provided we can vet the model provider’s policy on training on prompts or user code. We’re also careful not to expose agents to documentation or test data that may be sensitive. It’s a trade-off with convenience of course, but we believe that any information agents get access to should be a conscious opt-in. It will be cool if/when self-hosting Claude-like LLMs becomes pragmatic.
aitchnyu•3mo ago
What do you think about AWS Bedrock with Sonnet/R1/Qwen3?
frumplestlatz•3mo ago
Banning VSCode — instead of the troublesome features/plug-ins — seems like a step too far. VSCode is the only IDE that supports a broad range of languages with poor support elsewhere, from Haskell to Lean 4 to F*.

I work at a major proprietary consumer product company, and even they don’t ban VSCode. We’re just responsible for not enabling the troublesome features.

trenchpilgrim•3mo ago
> VSCode is the only IDE that supports a broad range of languages with poor support elsewhere

I just checked Zed extensions and found the first two easily enough. The third I did not, since they don't seem to have a language server, just direct integrations for vim/emacs/vsc.

frumplestlatz•3mo ago
Not all the integrations are equal in quality/usability, and in the case of F*, the VSCode extension is by far the most advanced.

I switch between Emacs, VSCode, JetBrains IDEs, and Xcode regularly depending on what I am working on, and would be seriously annoyed if I could not use VSCode when it is most useful.

elevation•3mo ago
With 30-40 devs each pulling a repository to their local machine, how do you prevent even one of them from accidentally exposing the entire repo to an LLM instead of “selected sources”?

And if a user were reluctant to tell you (fearing the professional consequences) how would you detect that a leak has happened?

oncallthrow•3mo ago
> I spent a long time thinking about this problem before this crazy idea struck me. If I create a dictionary of all letters and symbols in the alphabet, pre-generate their corresponding Camo URLs, embed this dictionary into the injected prompt,

Beautiful
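The alphabet-to-Camo-URL trick quoted above can be sketched roughly like this. The URLs below are fake placeholders (real Camo URLs are signed by GitHub's proxy and had to be pre-generated per character, per the article); only the encoding idea is illustrated:

```python
import string

# Fake stand-ins for pre-generated Camo URLs -- in the real attack each
# character's URL was generated ahead of time through GitHub's Camo proxy.
ALPHABET = string.ascii_letters + string.digits + "_-"
camo_dict = {ch: f"https://camo.example/{ord(ch):02x}" for ch in ALPHABET}

def encode_secret(secret: str) -> list[str]:
    """Map a secret to an ordered list of image URLs. Rendered as <img>
    tags one by one, each URL fires a request the attacker's server logs,
    letting the secret be reconstructed from the request sequence."""
    return [camo_dict[ch] for ch in secret if ch in camo_dict]

urls = encode_secret("AWS_KEY")
```

The fix GitHub shipped (disabling image rendering in Copilot Chat) kills this channel at the rendering step rather than the encoding step.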

j45•3mo ago
I wonder sometimes if all code on Github private or not is ultimately compromised somehow.
twisteriffic•3mo ago
This exploit seems to take advantage of the slow, token-at-a-time pattern of LLM conversations to ensure that the extracted data can be reconstructed in order. It seems as though returning the entire response as a single block could interfere with the timing enough to make reconstruction much more difficult.
arielcostas•3mo ago
What if you made it generate a URL with each character-position instead of just the character? For example, instead of making `hacked` be `0.0.0.0/h`, `0.0.0.0/a` and so on; it invokes `0.0.0.0/1-h`, `0.0.0.0/2-a`... that way you can sort them and delete any duplicate calls
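A sketch of that position-prefix variant (hypothetical URLs, not from the article): encoding the index alongside each character means requests can arrive in any order, and duplicates collapse naturally:

```python
def encode(secret: str) -> set[str]:
    # One URL per (position, character); using a set models the fact
    # that request order and duplicate fetches no longer matter.
    return {f"https://0.0.0.0/{i}-{ch}" for i, ch in enumerate(secret)}

def decode(urls: set[str]) -> str:
    # Attacker's server side: split off the position prefix, sort by it,
    # and join the characters back together.
    pairs = sorted(
        (int(u.rsplit("/", 1)[1].split("-")[0]), u.rsplit("-", 1)[1])
        for u in urls
    )
    return "".join(ch for _, ch in pairs)

assert decode(encode("hacked")) == "hacked"
```

The trade-off is a larger pre-generated dictionary: one signed URL per (position, character) pair instead of one per character.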
musicale•3mo ago
No one could possibly have predicted this.
zastai0day•3mo ago
Yikes. I knew these AI coding tools were sketchy! Leaking private source code is a massive failure. Who would trust Copilot with their company's secret sauce after this? Just goes to show you can't blindly trust big tech.