
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
611•klaussilveira•12h ago•180 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
915•xnx•17h ago•545 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
28•helloplanets•4d ago•22 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
102•matheusalmeida•1d ago•24 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
35•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
212•isitcontent•12h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
5•kaonwarb•3d ago•1 comment

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
206•dmpetrov•12h ago•101 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
316•vecti•14h ago•140 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
355•aktau•18h ago•181 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
361•ostacke•18h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
471•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
267•eljojo•15h ago•157 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
398•lstoll•18h ago•271 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
82•quibono•4d ago•20 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
54•kmm•4d ago•3 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
9•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
242•i5heu•15h ago•183 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
51•gfortaine•10h ago•16 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
138•vmatsiiako•17h ago•60 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
275•surprisetalk•3d ago•37 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
68•phreda4•11h ago•13 comments

I now assume that all ads on Apple News are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1052•cdrnsf•21h ago•433 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
127•SerCe•8h ago•111 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•7h ago•10 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
173•limoce•3d ago•93 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
7•jesperordrup•2h ago•4 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
61•rescrv•20h ago•22 comments

Zlob.h: 100% POSIX- and glibc-compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
17•neogoose•4h ago•9 comments

xAI dev leaks API key for private SpaceX, Tesla LLMs

https://krebsonsecurity.com/2025/05/xai-dev-leaks-api-key-for-private-spacex-tesla-llms/
244•todsacerdoti•9mo ago

Comments

Cheer2171•9mo ago
What absolute incompetence. Not just on this dev's part: any org with API keys ought to be scanning for leaked keys constantly. A failure of one and a failure of many.

Of course Elon hires only based on 'merit'...

Everdred2dx•9mo ago
How would you scan for your api keys on repos outside of your organization? I assumed this was a dev’s personal repo.
kalkin•9mo ago
https://docs.github.com/en/code-security/secret-scanning/sec... is one option
Everdred2dx•9mo ago
Neat. Thanks!
squigz•9mo ago
Well, one option is the service from TFA.

https://www.gitguardian.com/monitor-internal-repositories-fo...

romellem•9mo ago
The company I work for does this. I recently pushed an update to a personal repo that merely contained a keyword match (the push included a dictionary.txt file that happened to contain the company name), and it flagged a review.
mcdwayne•9mo ago
This was on public GitHub, which anyone can scan for anything. Their API is a firehose you can consume: https://api.github.com/events

GitGuardian's public report on secrets sprawl talks about their methodology of scanning any commit https://www.gitguardian.com/state-of-secrets-sprawl-report-2...
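To give a sense of how cheap that scanning is, here is a rough sketch of consuming the events firehose (illustrative only: the key regex, polling cadence, and lack of error handling are all assumptions, not GitGuardian's actual method, and unauthenticated requests hit GitHub's rate limits quickly):

    import re
    import time
    import requests

    # Illustrative pattern only; real scanners match hundreds of
    # provider-specific token formats.
    KEY_RE = re.compile(r"\b(?:xai-|sk-)[A-Za-z0-9]{32,}\b")

    def scan_once():
        # Public events firehose; unauthenticated calls are rate-limited to
        # 60/hour, so a real scanner authenticates and shards the work.
        events = requests.get("https://api.github.com/events", timeout=10).json()
        for ev in events:
            if ev.get("type") != "PushEvent":
                continue
            for commit in ev["payload"].get("commits", []):
                # The commit API response includes per-file diffs in files[].patch.
                detail = requests.get(commit["url"], timeout=10).json()
                for f in detail.get("files", []):
                    for hit in KEY_RE.findall(f.get("patch", "")):
                        print(f"possible key {hit[:12]}... in {detail.get('html_url')}")

    while True:
        scan_once()
        time.sleep(60)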

_ea1k•9mo ago
Musk has been talking about integrating Grok into Tesla cars and about adding a lot of space- and rocketry-specific training. It is entirely possible that these models were trained on data that would logically be public at some point.

It is also possible that the author's guess is right and that these were to contain sensitive data.

No one really knows, but honestly, these kinds of mistakes happen all the time. Who hasn't accidentally leaked their own .ssh dir on GitHub? lol

hackernewds•9mo ago
Any competent engineer hasn't?
ffsm8•9mo ago
Is that even at the competent level? You need to be particularly special to actually "accidentally" leak the .ssh dir via GitHub. Even incompetent people wouldn't fail to that degree for the most part.

Leaking the directory through other avenues is a different matter though. Almost all package managers provide post install and compile scripts. Hence doing (as an example) "npm install" can potentially leak it. That's something not many people actually pay attention to (you would have to basically jail every command, which sadly isn't the norm today)
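For the npm case specifically, one blunt partial mitigation (a per-machine setting, not a full jail) is to refuse lifecycle scripts by default:

    # never run preinstall/postinstall scripts automatically
    npm config set ignore-scripts true
    # or per project, in .npmrc:
    # ignore-scripts=true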

unsupp0rted•9mo ago
I only use private repos, so that when my .ssh and .env leaks the public doesn’t see it. Probably. Maybe. Well…
CER10TY•9mo ago
Just remember to go through your commit history if you ever plan on making that repo public.
Normal_gaussian•9mo ago
I commonly flatten repos (by copy and create) when I share them. It's rare that the other person needs the commit history.

I have often thought it would be nice to have a good tool to retroactively view and tidy them, but everything I've seen has not quite hit the nail on the head.

unsupp0rted•9mo ago
I use the Pieter Levels commit history strategy of all my commit messages being the single word "commit"

https://x.com/levelsio/status/1590908364393156608

thephyber•9mo ago
Git implemented a `.gitignore` file for this exact purpose. One of the first things to do when you create a new repo is to customize it for the language + OS.
unsupp0rted•9mo ago
And .env is implemented for this exact purpose too, hand-in-hand with .gitignore ;)

But mistakes happen all the time. It's very easy to fat-finger a line in .gitignore: one char off and you're toast.
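To make the fat-finger risk concrete, an illustrative .gitignore fragment (not from the article):

    # intended: keep the secrets file out of every commit
    .env
    .env.*

    # a one-char slip that silently fails:
    # env    <- missing the leading dot; .env still gets committed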

Zanfa•9mo ago
How would you accidentally leak your .ssh dir on Github?
tazjin•9mo ago
People with workflows like `git add .; git commit -m 'fix'` can push wondrous things to public repos.
SilverBirch•9mo ago
Only if you're raw dogging git from your home directory...
everforward•9mo ago
You would have to have a git repo in .ssh or higher up the tree for that to work. Otherwise you’d get one of the “directory is not a repo” messages.
_ea1k•9mo ago
It isn't that uncommon to sync a home dir with git: https://askubuntu.com/questions/1316229/is-it-bad-practice-t...

I'd guess that most of us wouldn't do it by just "git init" in the home directory. There are many safer ways than that.

But we were all newbs once, and often even the newbs have access to various keys and credentials.
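One of those safer ways is the widely used bare-repo trick, sketched below (shell; the `dots` alias name is arbitrary):

    # A bare repo plus an alias: plain `git` in $HOME never sees a repository,
    # so a stray `git add .` can't swallow ~/.ssh.
    git init --bare "$HOME/.dotfiles"
    alias dots='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
    # only show files you explicitly track
    dots config status.showUntrackedFiles no
    dots add ~/.vimrc && dots commit -m "track vimrc"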

_ea1k•9mo ago
It was just an example. It used to be fairly common for people to sync some of their dotfiles via git, and from time to time someone would leak a directory that contained sensitive data without them realizing it. I'd guess things like tokens used by cli tools were more common than whole .ssh directories, but I'm sure both happened.

Not quite the same thing, but also a leak: https://blog.gitguardian.com/github-exposed-private-ssh-key/

I guess all these folks saying professionals would never make a mistake like this will also have insulting names for GitHub engineers. :shrug

Dlemo•9mo ago
I have not.

And at a certain level of criticality, you do not do this at all

You have security measures in place to prevent this.

Not that the ketaman cares about it.

mcs5280•9mo ago
SpaceX data LLM being exposed is likely a recipe for a huge ITAR violation
pavlov•9mo ago
DOGE has probably fired everyone who could pursue those penalties.
foota•9mo ago
Is ITAR like other compliance fields where you have to store data only in compliant places, or is it just based on actual leaks, etc.?
freeone3000•9mo ago
ITAR (the International Traffic in Arms Regulations) is paranoid. Every single specific person that knows even dual-use information, such as composite wing design, must be individually authorized. I’ve been asked to leave the room when my girlfriend, who works for a passenger aircraft manufacturer, was designing a repair for a plane I have literally flown on.

It doesn’t matter how the person got access to dual-use info (and basically everything to do with large rockets qualifies); it’s 100% forbidden.

dismalpedigree•9mo ago
This seems like company policy more than ITAR. Unless you are not a US citizen, in which case it could be ITAR.
freeone3000•9mo ago
We’re in Canada, and I’m a US citizen, but she is not.
foota•9mo ago
That's the people aspect of it, but what about the technical aspect? Can I store ITAR-restricted information in plaintext on a thumb drive if I think it's safe?
NitpickLawyer•9mo ago
If there's actually any proprietary rocketry data, maybe. Without knowing what data went into the fine-tune there's no way to tell. This could be an "internal procedures chatbot" or an "onboarding chatbot" where new people can ask where the coolest watercooler in the company is.

In my experience, post-training mainly deals with "how" the model displays whatever data ("knowledge") it spits out. Having it learn new data (say, the number of screws on the new supersecretengine_v4_final_FINAL (1).pdf) is often hit-and-miss.

You'd get much better results by having some sort of RAG / MCP (tools) integration do the actual digging, with the model just synthesising / summarising the results.
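A minimal sketch of that division of labour (`search` and `complete` are hypothetical stand-ins for whatever vector store and model client you actually use):

    def answer(question: str, search, complete) -> str:
        # Retrieval digs up the facts; no fine-tuning required.
        snippets = search(question, top_k=5)
        context = "\n---\n".join(snippets)
        # The model only synthesises/summarises what retrieval found.
        prompt = (
            "Answer using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}"
        )
        return complete(prompt)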

mewse•9mo ago
Or, since we're apparently playing the game of maybes in this thread, maybe the LLM was only trained on the team's grandmothers' spaghetti recipes, so that new hires can learn to make the best bolognese sauce.
ben_w•9mo ago
This being Musk, it wouldn't surprise me.

I mean, consider that The Boring Company sells a "flamethrower" despite being theoretically about… boring.

tomalbrc•9mo ago
Because Tesla is making.. coils?
jsjohnst•9mo ago
Boring as the noun, not adjective. Also, Tesla was named that before Musk was involved, so it’s not his humor involved in naming both. Nikola Tesla is known for a lot more than just Tesla coils.
lesuorac•9mo ago
I think you missed a lot of the word play. Somebody else has explained Bore[1]-ing vs boring.

But they sold a blowtorch, a.k.a. not a flamethrower. The difference being that a flamethrower actually "throws flames", like 10+ feet.

[1]: https://en.wikipedia.org/wiki/Bore

ben_w•9mo ago
I didn't miss anything in the wordplay*; it was obvious. (As are the initials, an extra pun.)

I put quote marks around "flamethrower" because that's what it was originally sold as, before the obvious and predictable legal issues with real flamethrowers, and because it was obviously mimicking the prop in Spaceballs.

My point is: neither weed burners nor actual flamethrowers have anything to do with digging tunnels nor any adjacent aspect of civil engineering.

* https://en.wikipedia.org/wiki/Boring_(manufacturing)

thejazzman•9mo ago
Tesla sold a surfboard and whiskey and...

Just saying. It's kinda on brand by not being on brand, because the whole network of companies... well, I'm drifting off topic.

KristenDev•9mo ago
You haven't seen the releases labeled as Grok's new studies yet... It's pretty clear.
ActorNightly•9mo ago
It is, but the issue is that the current administration has made it abundantly clear that it doesn't care about legality.
Aurornis•9mo ago
> Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

Having the security team redirect the report to the HackerOne program is wild.

At least someone had enough thought to eventually forward it to someone who could fix it.

fweimer•9mo ago
It's come up before. HackerOne is not intended as a replacement for a PSIRT front desk, but many companies use it as such. It looks like PayPal still does this, for example.
KristenDev•9mo ago
Contacted the support team, the DoD, and the FBI... nothing was done, and that was a month or two ago... it's sad. But when I see that studies are now sci-fi flicks... my heart broke a little while ago. Never mind that this was swept under the radar by the DDoS attacks. A classic Ocean's 15 movie in the making.
rcarmo•9mo ago
One thing that sticks out to me is that there is an incorrect assumption from the journalists that having the API keys to an LLM can lead to injecting data.

People still don’t know how LLMs work and think they can be trained by interacting with them at the API level.

skissane•9mo ago
> People still don’t know how LLMs work and think they can be trained by interacting with them at the API level.

Unless they are logging the interactions via the API, and then training off those logs. They might assume doing so is relatively safe since all the users are trustworthy and unlikely to be deliberately injecting incorrect data. In which case, a leaked API key could be used to inject incorrect data into the logs, and if nobody notices that, there’s a chance that data gets sampled and used in training.

rcarmo•9mo ago
Nobody really trains directly from logs without curation and filtering.
skissane•9mo ago
Sure, but there is a non-zero risk that some malicious data could slip through the curation and filtering processes undetected

I agree that’s unlikely, but not astronomically unlikely

rcarmo•9mo ago
Considering the costs involved in fine-tuning, nobody does it unless they are a very rich corporation. And certainly not for public-facing models…
drilbo•9mo ago
unless I somehow skimmed over it, they only appear to refer to "prompt injection"
AmazingTurtle•9mo ago
Guess who's going to be fired by elon :D
endofreach•9mo ago
> Guess who's going to be fired by elon :D

i know, you probably just meant it as a fun comment. but i don't get how this is funny. this person probably relies on this income, might have a family to feed... and just made a mistake. a type of mistake that is not uncommon. i mean, i have seen corporate projects where senior engineers didn't even understand why committing secrets might be a bad idea.

yes, of course, as an engineer you have responsibilities, and this is clearly an error. but it also says a lot about the revolutionary AIs that will apparently replace all engineers... the companies making that claim are not even using them to catch stuff like this.

and let's keep in mind (i am surely not the only one with this experience): every single time i use an LLM for code generation, i have to remove hardcoded secrets and explicitly show it how to do things properly. but even then, it starts to suggest hardcoding sensitive info here and there. which means: A. troublesome results from these models are presented to inexperienced engineers, and people are conditioned to believe in the superiority of LLM code, given all the claims in the media. but also B: that the models suggest this practice shows just how common this issue is.

yes, this shouldn't happen at any company. but these AI companies with their wild claims should put their money where their mouth is. if your AI is about to replace X many engineers, why is it not at least supervising commits? to public repos? why are your powerful, AGI-agentic autonomous supernatural creations not able to regex the sh outta it? could it be that they don't really believe their own tales? or do they believe, but not think?

of course, an incident like this could lead to attempts to turn it into a PR win, claiming something like "see, this would never have happened with/to our Almighty Intelligence. that's why it should replace your humans." but then: if you truly believe it, have already invested so many resources, and believe you can foresee the future so surely, why ignore the obvious? or is this a silent, implicit testimony that you got caught up in a hype train and got brainwashed into thinking that code generation is what makes a good engineer? (just to be safe: i am not saying LLMs are not useful).

also: that something like this could even happen at a company like that is not the fault of one engineer. it indicates bad architecture or conventions, and/or bad practice and culture... and... a l s o: no (human) code review process in place?

the mistake was made by one engineer, yes. but it's made to seem as though this mistake is the root... it's not. the mistake is a symptom, not the cause.

i honestly hope the engineer does not get fired. and i really don't understand this mentality. if this person is actually good at their job and takes it seriously, it's certain: he or she is not going to leak a secret again. someone who replaces him or her might.

tasuki•9mo ago
> if this person is actually good at their job and takes it seriously, it's certain: he or she is not going to leak a secret again

If they were good at their job, they wouldn't have leaked the secret in the first place. The correct workflow is to:

1. Create commits that only do one thing. That makes it impossible to "forget" there were secrets added alongside another feature.

2. When adding secrets, make sure they're encrypted or added to the project's `.gitignore` equivalent. (A pre-commit scan, sketched below, catches the slips.)
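A sketch of such a hook (the regex is illustrative only; real scanners like gitleaks or GitGuardian's ggshield are far more thorough), saved as .git/hooks/pre-commit:

    #!/usr/bin/env python3
    # Block commits whose staged diff adds key-shaped strings.
    import re
    import subprocess
    import sys

    SECRET_RE = re.compile(r"\b(?:sk-|xai-|ghp_)[A-Za-z0-9_]{20,}")

    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

    hits = [
        line for line in diff.splitlines()
        # added lines only; skip the "+++ b/file" headers
        if line.startswith("+") and not line.startswith("+++")
        and SECRET_RE.search(line)
    ]
    if hits:
        print("Refusing to commit; possible secrets in staged diff:")
        for line in hits:
            print("  " + line[:80])
        sys.exit(1)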

I'm so sorry for a first-world engineer incompetent enough to commit a secret in a GitHub repository. They'll probably have to downsize from their mansion to a regular house. Meanwhile in the third world, many more competent people are starving or working some terrible menial job because they didn't have the right opportunities in life...

Timber-6539•9mo ago
I'll do you one better. Start your .gitignore file with this line

  *
lsaferite•9mo ago
Mine all start with (and .dockerignore has a similar one)

    # Default block all
    /*
    # Specifically allow files and directories
consp•9mo ago
In a vacuum, sure. But in a workplace this workflow is best practice at best, and even best practice gets ignored. I've been able to accidentally add a secret despite scans; I noticed it myself, so it was quickly fixed. It still resulted in a discussion of how to prevent it in the future, because nothing is perfect and you learn from mistakes.

Or you don't learn: you simply fire the engineer and assume everyone in the entire workflow is perfect.

everforward•9mo ago
This sounds like naivety to me. I would bet most people here have committed a secret, even if it was later caught in a code review. If this wasn’t a common issue, all those tools that scan repos for secrets wouldn’t exist.

I once put secrets on a wiki page because I copied log snippets and a third-party library naively dumped HTTP headers into the logs without filtering out its own API key. I shouldn’t have assumed the logs were secret-free, but it’s also not an unreasonable assumption.

7bit•9mo ago
If you ever visit a Bill Burr show, let me know. I wouldn't want to miss it.
endofreach•9mo ago
Big fan of Bill Burr. I don't get how some here don't understand what my comment is about. I assume your implication is that I have no sense of humour or am too snowflaky. I mean, next time you visit a Bill Burr show, let me know if his punchline is as much of a banger as the one I commented on. And if you think this is the same type of humour, please do let me know when you next visit a Bill Burr show!

But my comment was clearly not about making excuses for the engineer's mistake. I wanted to express that it's insane that such a common mistake can happen at a company like that. And I don't get how people let the CEOs and leads off the hook so easily.

But some apparently don't think that way.

In my opinion: mistakes that are common, severe, and very easy to avoid have to be expected, and hence circumvented through industry-standard behaviour. And that is not (solely) the responsibility of the one committing engineer. Any good team has best practices to prevent these types of basic, potentially fatal mistakes from happening, and usually at least a glance-over review process where these mistakes should be caught by another team member on first sight... and when an "AI making devs extinct" type of company can't catch this type of error, that's ridiculous. That an individual can screw up something potentially so critical is an organizational failure.

But anyway, i think my points were clear in the first comment already.

7bit•9mo ago
It was clearly a joke, and this is not the best place to come down with the morality club. It has soapbox vibes, and the person who made the joke hasn't earned that either.
hnthrow90348765•9mo ago
The real mistake is working for Elon
breakingcups•9mo ago
I'm much more interested in what the private model "tweet-rejector" could be used for...
yk•9mo ago

    if "Musk" in tweet and xAi.grok.sentiment(tweet) < .5: 
        reject(tweet)
SillyUsername•9mo ago
You mean ex-AI dev surely?
threecheese•9mo ago
The biggest surprise to me was: “administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda”. I understand that there’s no expectation of privacy at work (especially in govt), and everything you write is “on the record”; however, an employer monitoring comms for what’s essentially thoughtcrime is heinous. Isn’t disagreement healthy?
erulabs•9mo ago
Yes, but it's worth understanding that the executive branch (which according to this administration includes all federal agencies) is, in its constitutional form, more or less just an extension of the president. Conceptually they all "serve at the pleasure of" the person of the president. The "balance" and "disagreement" can happen outside of the executive, in the legislative or judicial branches.

Definitely not how I would run an organization (even a military organization), but it's not _conceptually_ wrong. If you were a general and you had lieutenants expressing "hostility to" your agenda, would you keep them on? Again, I'd probably say yes, up to a limit, but it's not outside a general's purview to concern themselves with this.

threecheese•9mo ago
Fair, and valid. I suppose in the case of a normal organization, leadership’s constraint is keeping enough good employees to be commercially viable. If your goal is to reduce the size and scope of your organization no matter the impact, that constraint is irrelevant.
KristenDev•9mo ago
This has ruined many careers in the making. The DDoS attacks happened while this breach-like hotspot was open. Who do we contact if any of our studies are leaked out like a publicity stunt, day in, day out? x.ai hasn't responded for months after stating concern about rogue-like actions on different AI services. Do we post videos, make statements, or just gather and share tips and insight?