frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
101•theblazehen•2d ago•22 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
654•klaussilveira•13h ago•189 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
944•xnx•19h ago•549 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
119•matheusalmeida•2d ago•29 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
38•helloplanets•4d ago•38 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
48•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
228•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
14•kaonwarb•3d ago•17 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
219•dmpetrov•14h ago•113 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
328•vecti•16h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
378•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
487•todsacerdoti•21h ago•241 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
286•eljojo•16h ago•167 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
409•lstoll•20h ago•276 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
21•jesperordrup•4h ago•12 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
87•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
59•kmm•5d ago•4 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
4•speckx•3d ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
31•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
251•i5heu•16h ago•194 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
15•bikenaga•3d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
56•gfortaine•11h ago•23 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1062•cdrnsf•23h ago•444 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
144•SerCe•9h ago•133 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
180•limoce•3d ago•97 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•41 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
147•vmatsiiako•18h ago•67 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
72•phreda4•13h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•9h ago•12 comments

Republican governors oppose 10-year moratorium on state AI laws in GOP tax bill

https://www.politico.com/live-updates/2025/06/27/congress/gop-govs-urge-thune-to-nix-ai-moratorium-00430083
59•MilnerRoute•7mo ago

Comments

techpineapple•7mo ago
The dichotomy between libertarian and traditional conservative in the Republican Party is wild. You've got both the people who want to automate all the jobs away, and Tucker Carlson saying that if FSD eliminates 2 million trucking jobs then we shouldn't do it.
GuinansEyebrows•7mo ago
'Dark Money' [0] describes how this ideological situation came to be. Pretty interesting stuff.

[0] https://en.wikipedia.org/wiki/Dark_Money_(book)

ETH_start•7mo ago
Libertarians don't believe that automation leads to fewer jobs being available. They look at the past 200 years of automation and see that as more tasks are automated, labor productivity simply increases.
apwell23•7mo ago
They simply don't care about the jobs numbers. Let the chips fall where they may.
techpineapple•7mo ago
I think this has changed. Historically, yes, you're right, but I think modern thinkers either believe productivity will accelerate so we can have UBI (Sam Altman), or in some cases take a very utilitarian view that if we need fewer people, we need fewer people (Peter Thiel).
ETH_start•7mo ago
Economists are still unanimous on the fact that automation does not reduce employment.
leptons•7mo ago
So "states rights" doesn't really matter to these people like they've been saying it does.
thrance•7mo ago
They have no values. The only thing one can find them consistently advocating for is their own selfish interests.
shortrounddev2•7mo ago
Republicans do not have principles, only an unceasing desire for power. Any time they quote some principle at you, they are lying. They are trying to manipulate your sense of fairness to cynically get what they want. They will stab you in the back at the first opportunity. Republicans can not be trusted under any circumstances
siliconc0w•7mo ago
There are about a zillion examples of them citing a principle like 'states' rights' and then immediately abandoning it when it suits them, for things like gun control, abortion access, seizing control of a state's national guard, gender-affirming care, etc.

The problem is that they are directionally correct in that it would be bad to have a patchwork of laws around AI, but the alternative is leaving it to Congress, which has consistently shown an inability to thoughtfully regulate or reform anything - it just passes mega spending bills and raises the debt limit.

peterhadlaw•7mo ago
"care"
mikem170•7mo ago
> it would be bad to have a patchwork of laws around AI

Why would that be bad? And for whom?

Wouldn't it be better to have a variety of laws around something new, and figure out over time what is optimal? Wouldn't this be better than having one set of laws that can be more easily compromised via regulatory capture? Why the common assumption that bigger and more uniform is better? Is that to encourage bigger companies and bigger profits? Has that been a good thing?

siliconc0w•7mo ago
Because if you want to sell an AI product you now need to hire an army of lawyers to do the state-by-state compliance. That dramatically increases costs and slows down critical innovation. Another common argument is that any regulation will let China 'win the AI race,' but I don't entirely agree with that premise - it's not a 'race,' and if China 'wins,' it will be because they largely use their debt to finance effective, high-ROI industrial policy rather than mega tax breaks.
mikem170•7mo ago
So you favor big tech profits over the democratic process and what the local population thinks is best for themselves?

I guess it would be stifled innovation vs. societal impact. Not always a cut-and-dried decision. Depends on priorities.

baby_souffle•7mo ago
> So "states rights" doesn't really matter to these people like they've been saying it does.

Nor does the deficit (and at least a dozen other big issues)

The term "performative bad faith" comes to mind...

FranzFerdiNaN•7mo ago
Yep. Conservatism only cares about one thing: protecting its own in-group while hurting the rest.
hereme888•7mo ago
_
russdill•7mo ago
? Republicans are also the ones trying to prevent states from regulating AI
Finnucane•7mo ago
All those guys lined up behind [expletive deleted] on inauguration day? What do you think they expected to get for the money they were paying out?
chisleu•7mo ago
The way I feel about this: 1. It's going to pass. 2. It's not going to get overturned by this Supreme Court. 3. It's going to have the biggest impact on the world of any law.

Right now, the US and China are in an AI war. The US is doing everything it can to stop China from making progress on AI, as if it were a nuclear bomb. And it just might be that consequential in 10 years.

Where I am now is past the "3 sleepless nights" of 'Co-Intelligence' fame.

If you haven't seen a properly contexted LLM (50k-100k tokens, depending on the size of the project(s)) work in a code repo, then you have no idea why so many of us are terrified. LLMs are already taking jobs. My company laid off 7% of the workforce directly because of LLMs' impact. I say that not because the CEO said it, but because I see it in my day to day. I'm a Principal Engineer and I just don't have a need for juniors anymore. They used to be super useful because they were teachable, and after some training you could offload more and more work to them and free up your time to work on harder problems.

With MCPs, LLMs aren't limited to the editor window anymore. My models update my JIRA tickets for me and rip content from the wiki into its markdown memory bank, which is kept in the repo and accelerates everyone's work. They're connecting to databases to find out schemas and example column data. Shit, as I'm typing this, one is currently deploying a new version of a container to ECR/ECS/Fargate with Terraform for a little project I'm working on.

I believe we are in the very early days of this technology. I believe that over the next ten years, we are going to be inundated with new potential for LLMs. It's impossible to foresee all the changes this is going to bring to society. The use cases are going to explode as each tiny new feature or new mode evolves.

My advice is to get off of the sidelines and level up your skills to include LLM integrations. Understand how they work, how to use them effectively, how to program system integrations for them... agents especially! Agents can be highly effective at many use cases. For instance, an Agent that watches a JIRA board for new tickets which contain prompts to be executed in certain repos, then executes the prompt and creates a PR for the changes. All in a context that is fully aware of your environment, deployment, CI/CD, secrets management, etc.
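
To make that concrete, here is a minimal sketch of what such a ticket-watching agent loop could look like. This is not my actual setup; the search endpoint is standard Jira REST, but the label convention, the credentials, and the run_agent_on_ticket stub are all hypothetical placeholders for whatever agent and PR tooling you use.

    import time
    import requests

    JIRA_URL = "https://example.atlassian.net"          # hypothetical instance
    JQL = 'labels = "llm-task" AND status = "To Do"'    # assumed labeling convention

    def fetch_new_tickets(session):
        # Poll Jira's standard issue-search endpoint for tickets tagged for automation.
        resp = session.get(f"{JIRA_URL}/rest/api/2/search", params={"jql": JQL})
        resp.raise_for_status()
        return resp.json().get("issues", [])

    def run_agent_on_ticket(issue):
        # Placeholder: hand the ticket description to an LLM agent that has
        # repo/CI/secrets context (e.g. via MCP servers) and opens a PR.
        prompt = issue["fields"]["description"] or ""
        print(f"Would run agent for {issue['key']}: {prompt[:80]}...")

    def main():
        session = requests.Session()
        session.auth = ("bot@example.com", "api-token")  # placeholder credentials
        seen = set()
        while True:
            for issue in fetch_new_tickets(session):
                if issue["key"] not in seen:
                    seen.add(issue["key"])
                    run_agent_on_ticket(issue)
            time.sleep(60)  # poll once a minute

    if __name__ == "__main__":
        main()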

Anything will be possible sooner than we expect. It's going to impact the poorest people the most. A really cyberpunk reality could be upon us faster than we expect, including starving masses struggling to get enough to even survive.

greybox•7mo ago
You're a principal engineer who doesn't see the value in training juniors ...
hyperliner•7mo ago
This sounds like “You are a manager who doesn’t see the value in training typists” or “You are a refrigerator seller who doesn’t see the value in training icemen.”
chisleu•7mo ago
I did not say that I don't see the value in training juniors. I said that I don't have a need for them anymore. I can teach Claude in 1 API call what takes a day to walk a junior through.

Furthermore, I think we are going to find less and less work for Juniors to do because Seniors are blasting through code at a faster and faster pace now.

I'm not the only one saying that the entry level market is already getting trashed...

robomartin•7mo ago
I don't think that's what OP is saying at all.

There's a reality to contend with here. We all know that software developers have been coming out of school with decidedly substandard skills (and I am being very kind). In that context, the value they might add to an organization has almost always been negative. Meaning that, without substantial training and coaching --which costs time, money and market opportunity-- they can be detrimental to a business.

Before LLMs, you had no other options. With the advent of capable AI coding tools, the contrast between hiring a person who needs hand-holding and significant training and just using AI is already significant, and it will be nothing less than massive with the passage of time.

Simply put, software development teams who do not embrace a workflow that integrates AI will not be able to compete with those who do. This is a business forcing function. It has nothing to do with not being able to or not wanting to train newcomers (or not seeing value in their training).

People wanting to enter the software development field in the future (which is here now) will likely have to demonstrate a solid software development baseline and equally solid AI co-working capabilities. In other words, everyone will need to be a 5x or 10x developer. AI alone cannot make you that today. You have to know what you are doing.

I mean, I have seen fresh university CS graduates who cannot design a class hierarchy if their life depended on it. One candidate told me that the only data structure he learned in school was linked lists (don't know how that's possible). Pointers? In a world dominated by Python and the like, newbies have no clue what's going on in the machine. Etc.

My conclusion is that schools are finally going to be forced to do a better job. It is amazing to see just how many CS programs are horrible. Sure, the modules/classes they take have the correct titles. What and how they teach is a different matter.

Here's an example:

I'll omit the school name because I just don't want to be the source of (well-deserved, I might add) hatred. When I interviewed someone who graduated from this school, I came to learn that a massive portion of their curriculum is taught using Javascript and the P5js library. This guy had ZERO Linux skills --never saw it in school. His OOP class devoted the entire semester to learning the JUCE library... and nobody walked out of that class knowing how to design object hierarchies, inheritance, polymorphism, etc.

Again, in the context of what education produces as computer scientists, yes, without a doubt, AI will replace them in a microsecond. No doubt about it at all.

Going back to the business argument. There is a parallel:

Companies A, B and C were manufacturing products in, say, Europe. Company A, a long time ago, decides they are brilliant and moves production to China. They can lower their list price, make more money and grab market share from their competitors.

Company B, a year later, having lost 25% of their market share to company A due to pricing pressure, decides to move production to China. To gain market share, they undercut Company A. They have no choice on the matter; they are not competitive.

A year later A and B, having engaged in a price war for market share, are now selling their products at half the original list price (before A went to China). They are also making far less money per unit sold.

Company C now has a decision to make. They lost a significant portion of market share to A and B. Either they exit the market and close the company or follow suit and move production to China.

At this point the only company one could suggest acted based on greed was A during the initial outsourcing push. All decisions after that moment in time were about market survival in an environment caused by the original move.

Company C decides to move production to China. And, of course, wanting to regain market share, they drop their prices. Now A, B and C are in a price war until some form of equilibrium is reached. The market price for the products they sell is now one quarter of what it was before A moved to China. They are making money, but it is a lot tighter than it used to be. All three organizations went through serious reorganizations and reductions in their labor forces.

The AI transition will follow exactly this mechanism. Some companies will be first movers and reap short-term benefits of using AI to various extents. Others will be forced into adoption just to remain competitive. At the limit, companies will integrate AI into every segment of the organization. It will be a do or die scenario.

Universities will have to graduate candidates who will be able to add value in this reality.

Job seekers will have to be excellent candidates in this context, not the status quo ante context.

lazyeye•7mo ago
It's more than this.

You may think your job's not at risk because you're a plumber. But you're not realising that you will be competing with millions of new plumbers fleeing AI-decimated industries, pushing down wages dramatically.

And what if China wins on AI, and now Huawei can produce tech gear that is dramatically superior to and cheaper than global competitors'? Then Chinese tech dominates the globe, giving enormous power and control to the CCP.

chisleu•7mo ago
Absolutely right.
shadowgovt•7mo ago
This is only a problem if we structure our society so that millions of new hands available to handle plumbing jobs make people's lives worse, not better.

As always, it's on us if we don't turn freeing people from work a machine can do into an opportunity to improve everyone's lives.

norir•7mo ago
I would love to see a reverse Atlas Shrugged where all the programmers just stopped working and we could see how much the executive class could do without them through the magic of AI. As it stands, I feel most workers are increasingly facilitating their own dispossession.