
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
475•klaussilveira•7h ago•116 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
813•xnx•12h ago•487 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
33•matheusalmeida•1d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
157•isitcontent•7h ago•17 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
156•dmpetrov•7h ago•67 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
92•jnord•3d ago•12 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
50•quibono•4d ago•6 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
260•vecti•9h ago•123 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
207•eljojo•10h ago•134 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
328•aktau•13h ago•158 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
327•ostacke•13h ago•86 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
411•todsacerdoti•15h ago•219 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
23•kmm•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
337•lstoll•13h ago•242 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
52•phreda4•6h ago•9 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
4•romes•4d ago•0 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
195•i5heu•10h ago•145 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
115•vmatsiiako•12h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
152•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
245•surprisetalk•3d ago•32 comments

I now assume that all ads on Apple News are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
996•cdrnsf•16h ago•420 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
26•gfortaine•5h ago•3 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
46•rescrv•15h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
67•ray__•3h ago•30 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
38•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
30•betamark•14h ago•28 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
41•nwparker•1d ago•11 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
7•gmays•2h ago•2 comments

Evolution of car door handles over the decades

https://newatlas.com/automotive/evolution-car-door-handle/
41•andsoitis•3d ago•62 comments

Enterprises are getting stuck in AI pilot hell, say Chatterbox Labs execs

https://www.theregister.com/2025/06/08/chatterbox_labs_ai_adoption/
84•dijksterhuis•8mo ago

Comments

neepi•8mo ago
One of my former contract outfits is there right now. Two failed projects so far, one of which impacted customers so badly that they ended up in the trade press. The other one wrote off 5% of revenue with nothing to show.

No, you can't solve everything with a chatbot just because your CEO needs an AI proposition, or he's going to look silly down at the golf course with all the other CEOs who aren't talking about how theirs are failing...

tough•8mo ago
this does make sense, but there are infinitely many ways to use AI in the workplace. i gotta wonder how much bad consultants just trying to sell services are to blame here, at least as much as the CEOs trying to shoehorn in products nobody asked for, i guess
neepi•8mo ago
Yes. They don't know what AI actually is or what its capabilities are, and the companies selling integrations are running on hope rather than technical competence and suitability. So it gets applied to unsuitable problem domains and fails.
tough•8mo ago
I hate consultants; their incentives are all whack from the beginning.

Hopefully more companies will encourage their own employees to explore how AI can fit into their current workflows, or improve them, rather than hoping some magical thinking will solve their problems.

SirBomalot•8mo ago
I currently have to deal with such consultants. They want to sell their magical AI black box.

Speaking with the consultants leads me to assume that they too are getting pressure from the top to do AI stuff, maybe because they fear they'll otherwise be replaced by AI. It all seems somewhat desperate.

matt3210•8mo ago
They vibe coded everything, so it's basically a second-year CS student's level of work and security.
delusional•8mo ago
I have seen no consultants directly selling this yet. To me it looks like this is all coming at the CEO "organically", or at least through the same channels that it's coming to the rest of us.

At my job it's been coming through the regular channels, but is empowered by aligning with current trends. It's easier to sell an AI project, even internally, when the whole world is talking about it.

tough•8mo ago
right, it feels like it's more pull than push, but what i meant is that the big consultancies are happy to take customers with -absurd- requests, and not finish the job, cause they get paid regardless.

maybe they're not directly pushing AI (cause they don't need to), but they're happy to accept shitty jobs that make no sense just cause

delusional•8mo ago
> right, it feels like it's more pull than push

I don't think that's the right distinction to draw here. It's definitely being pushed, just not by consultants.

> big consultancies are happy to take customers with -absurd- requests

This is of course always true. Consultants usually don't really care where they make the money; as long as you pay them, they'll find someone stupid enough to take on your task.

That's not what I'm seeing though. We're not hiring outside consultants to do big AI projects; we have people within our organization who have been convinced by the public marketing and are pushing for these projects internally. I'm not seeing big consultancies accepting contracts, I'm seeing normal working people getting consultant brain and taking this as their chance to sell a "cutting edge" project that'll disrupt all those departments whose work they don't understand.

tough•8mo ago
greenfield projects have always been a way to be -seen- in big corps i guess.

AI is now the vector du jour for getting an easy YES from command.

Sad state of affairs i guess. at least put in the effort to know wtf you want to build and, more importantly, WHY, or HOW it's better than current solutions

prmoustache•8mo ago
How is that any different from anything else consultancies are paid for?
csomar•8mo ago
They are roughly as bad as the "blockchain" consultants who want to install a blockchain in your company. The value is in the sale, which is why they have zero technical expertise.
nikanj•8mo ago
It's a match made in heaven, with a buyer who just wants to report to the board that they have successfully invested in $fad, and a seller who knows the buyer is mostly motivated by the opportunity to put money towards $fad.
ben_w•8mo ago
Ah, a monorail project.

(Simpsons kind, I don't know enough about civil engineering to comment on the real one).

steveBK123•8mo ago
Bad consultants exist to facilitate bad CEOs/CTOs.

"I have to do some __ / have a __ strategy / hire a Head Of __ or I look bad"

blitzar•8mo ago
We are selling to willing buyers at the current fair market price.
steveBK123•8mo ago
To a degree, yes.

There are a lot of leaders who are looking for problems for their solutions.

edit: I say this as someone who has been stuck on top-down POCs which I later found out originated from "so my brother-in-law has this startup", where we got management questions that were mostly "so how could we use this here?" rather than "how is it performing / is it a good value / does it solve the problem we want it to solve".

Some tech cannot fail, it can only be failed.

EndsOfnversion•8mo ago
You will never sell anything to any of those people ever again.
steveBK123•8mo ago
This is it! I'm telling you! This is it!
arethuza•8mo ago
But that is spilt milk under the bridge.
blitzar•8mo ago
Please, speak as you might to a young child, or a golden retriever.
matt3210•8mo ago
Who in their right mind would intentionally deploy non-deterministic, unreviewable and unprovable software to critical systems?
lo0dot0•8mo ago
The answers can be recorded and reviewed. The other points are true, though. Or is there a way to make outcomes deterministic compared to previous versions, while still allowing more knowledge to be added in newer versions?
vintermann•8mo ago
It's possible to make any model deterministic. It used to be as simple as saving the seed; I'm not sure it still is, now that everything is distributed. Maybe it takes a little more effort.
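
For a local model, the save-the-seed version looks roughly like this (a minimal sketch, assuming the Hugging Face transformers API; greedy decoding is deterministic on fixed hardware, and the pinned seed covers the sampling path):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    torch.manual_seed(42)  # the "saved seed": sampling becomes repeatable

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=True,   # sampling path; the fixed seed makes it repeatable
        temperature=0.8,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Even then, different hardware or kernel versions can reorder floating-point ops and shift the output, which is the distributed-serving caveat.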
lo0dot0•8mo ago
Part of my question that you didn't go into: can new knowledge be added in a new version without making the answers based on knowledge learned in previous versions non-deterministic?
dijksterhuis•8mo ago
that’s not really how training works.

changing the input (data) means you get a different output (model).

source data has nothing to do with model determinism.

as an end-user of AI products, your perspective might be that the models are non-deterministic, but really it’s just different models returning different results … because they are different models.

“end-user non-determinism” is only really solved by repeatedly using the same version of a trained model (like a normal software dependency), potentially needing a bunch of work to upgrade the (model) dependency version later on.
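
a minimal sketch of the pin-the-model-version idea, assuming an OpenAI-style chat API (the dated snapshot string is illustrative; temperature 0 plus a fixed seed narrows run-to-run variation but doesn't guarantee it away):

    from openai import OpenAI

    client = OpenAI()

    # pin a dated snapshot, not a floating alias, like a lockfile entry
    response = client.chat.completions.create(
        model="gpt-4o-2024-08-06",  # illustrative pinned version
        messages=[{"role": "user", "content": "summarise this contract"}],
        temperature=0,  # minimise sampling variation
        seed=42,        # best-effort reproducibility, not a guarantee
    )
    print(response.choices[0].message.content)

upgrading the pinned string is then a deliberate change you can test, rather than the provider silently swapping the model underneath you.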

dustingetz•8mo ago
determinism isn’t really enough, we want “predictable”. Most of these AI wavefunctions are “chaotic” - tiny changes in state can cause wildly divergent outcomes
Yoric•8mo ago
But that won't survive an upgrade, will it?
kevingadd•8mo ago
This requires an exact lock-down of things like the hardware and driver version, doesn't it? Is that sustainable?
vintermann•8mo ago
It shouldn't. It didn't use to, at least.
smodo•8mo ago
My colleagues at the head of a company. I'm one of four bosses. One of us is pushing for AI every single meeting. Another is ignoring her. The last one is starting to 'see her point.' I'm considering quitting if this goes too far, but I'm unwilling to make that threat yet, as it's a bridge I can only cross once.

Anyway. To me it just speaks to the disdain for semi-intellectual work. People seem to think producing text has some value of its own. They think they can short-circuit the basic assumption that behind every text is an intention that can be relied upon. They think that if they substitute this intention with a prompt, they can create the same value. I expect there to be some kind of bureaucratic collapse because of this, with parties unable to figure out responsibility around these zombie-texts. After that begins the cleanup: legislating, and capturing in policy, what the status of a given text is, etc. Altman & co will have cashed out by then.

dustingetz•8mo ago
the essence of man is blind spots and hubris
mirekrusin•8mo ago
It's interesting to still hear this kind of sentiment.

> People seem to think producing text has some value of its own.

Reading this sentence makes me think that the author has actually never seen agentic work in action? Producing value out of text does work, and one good example is putting the model in a loop with some form of verification of its output. That's easy to do with programming (type checker, tests, linter, etc.), so it can chat by itself with its own results until the problem is solved.
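
A rough sketch of that loop, with a hypothetical llm() standing in for whatever model you call (here the verifier is pytest, but a type checker or linter slots in the same way):

    import subprocess

    # hypothetical model call; returns Python source for solver.py
    def llm(prompt: str) -> str:
        raise NotImplementedError

    def verify() -> tuple[bool, str]:
        # run the test suite; failures become the next prompt
        result = subprocess.run(["pytest", "-x", "-q"],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    prompt = "Implement solve() in solver.py so the tests pass."
    for _ in range(5):  # bound the loop so it can't spin forever
        with open("solver.py", "w") as f:
            f.write(llm(prompt))
        ok, report = verify()
        if ok:
            break
        # the model chats with its own results via the failure output
        prompt = "The tests failed:\n" + report + "\nFix solver.py."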

I also find it strange that discussions so often need a reminder that the rate of change in capabilities is also a big part of "the thing" (as opposed to pure capabilities today). It changes on a weekly/monthly basis, and it changes in one direction only.

dijksterhuis•8mo ago
i think you might have misunderstood the meaning of “producing text” in the parent comment.

the kind of people the parent comment was talking about tend to believe they can send three emails and make millions of pounds suddenly appear in business value (i’m being hyperbolic and grossly unfair but the premise is there).

they think the idea is far more valuable than the implementation - the idea is their bit (or the bit they’ve decided is their bit) and everyone else is there to make their fantastic idea magically appear out of thin air.

they aren’t looking at tests and don’t have a clue what a linter is (they probably think it’s some fancy device to keep lint off their expensive suits).

nikanj•8mo ago
Someone who was ordered by their boss to deploy it, and made sure to get the instructions in writing - with their protests also in writing.
moron4hire•8mo ago
Someone who is really pissed off at how much they have to rely on software developers to run their business. They should not have so much power and direction in the company. I mean, they don't even have memberships at the country club!
christophilus•8mo ago
Anyone who isn’t a software engineer. There is so much hype that non-technical people have bought into.

Their tech teams should know better, but it’s hard to say “no”, when it feels like your salary depends on you saying “yes”.

mathgeek•8mo ago
> Their tech teams should know better, but it’s hard to say “no”, when it feels like your salary depends on you saying “yes”.

There's some truth to the idea that the difference between "short-term profits" and "my salary depends on this" is whether you're the boss or the employee.

red75prime•8mo ago
Anyone who doesn't fully understand current differences between existing non-deterministic, unreviewable and unprovable agents (humans) and the artificial ones.
gamblor956•8mo ago
DOGE would and did. Results were as expected... complete failure.
rurban•8mo ago
If you train it on the right data, there is no security risk. It cannot know what it doesn't see. However, if you train it on internal secrets, they will leak, simple as that. Filtering will not help.

But this interview is only fear-mongering to sell expensive models, ditching the industry leaders.

Garlef•8mo ago
Doesn't this mean: There's room for disruption/land grab?

If the big corporations can't move fast enough and 100 startups gamble on getting there, eventually one of them will be successful.

Pmop•8mo ago
And a lot of them cannot get up to speed, even when they want to. Many big corporations struggle with evolution and innovation due to crippling bureaucracy, created and supported by risk averse leadership. This is usually worse for publicly traded companies.

Unless it is something like Meta, which has a Zuck: someone smart, with enough oversight and power to drain the swamp and make the whole machine move.

owebmaster•8mo ago
Zuckerberg made a genius move from web 2.0 to the current smartphone era we still live in. But I would not bet on his ability to do it again; he failed badly with the metaverse and so far is failing with AI.
cowboylowrez•8mo ago
hehe "drain the swamp" this guy knows how to "trump" the naysayers!
nikanj•8mo ago
A hundred startups also gamble on perpetual motion, and their arguments always come from a place of "perpetual motion would revolutionize markets and there is strong demand", never from a place of "we have figured out how to alter laws of physics and make it possible"
calebkaiser•8mo ago
Before getting too invested in any conclusions drawn from this piece, it's important to recognize this is mostly PR from Chatterbox.

From the Chatterbox site:

> Our patented AIMI platform independently validates your AI models & data, generating quantitative AI risk metrics at scale.

The article's subtitle:

> Security, not model performance, is what's stalling adoption

gsky•8mo ago
I have been using AI models to build 2 projects atm. Yes, it's not perfect (30% wrong), but it solves problems so quickly and cheaply that I'll continue to use it going forward.

As a software engineer I want everything to be perfect, but not as an entrepreneur.

add-sub-mul-div•8mo ago
Temu also solves a problem quickly and cheaply, but I wouldn't make it my wardrobe strategy unless I was too poor to solve the problem a better way.
bsenftner•8mo ago
Chatterbox's PR money is being well spent: this article squarely places them in the center of that trillion-dollar revenue stream.
stopthe•8mo ago
https://www.chatterbox.io/ "Corporate language training powered by marginalised talent" - is that satire? Did I find the wrong Chatterbox?
simonw•8mo ago
That's the wrong one. https://chatterbox.co/
sbarre•8mo ago
AI and vibe coding let you get that rough prototype up and running so much faster than before, and so create that illusion of momentum and completeness more than ever.

How many people here have been subjected to that "looks good, put it in production!" directive after showing off a quick POC for something? And then you have to explain how far away from being production-ready things are, etc...

There's a reason wireframing tools intentionally use messy lines, and why most UX people know better than to put brand colours in wireframes.

bowsamic•8mo ago
Prototypes are very dangerous. Our team made the mistake of having our demo look very nice even though there was still a lot of unseen work to do. Now upper management of course think "this is ready, just send it out". Prototypes live forever; no upper manager will want you to spend time on the real thing. It is unsafe for the project to come across well.
pragmatic•8mo ago
Head of an engineering program told us to always make sure the prototype has at least one glaring bug/flaw.

His background was electrical engineering but it applies doubly in software.

lofaszvanitt•8mo ago
LLMs should be trained on CEOs and middle management and of course politicians. Society would be very grateful.
kevin_thibedeau•8mo ago
They've got to finish their blockchain deployment first. Then it'll all go smoothly.
bowsamic•8mo ago
Slavoj Zizek says that the truly terrifying situation is when the leaders act and know they no longer need to justify their actions. I am currently in this fight with our upper management. I ask why this push for AI, what it will do for our product, why we are making huge cuts to the scope of the project to rebrand it as an AI project. All I receive is a muddled, confused response. Of course it's just none of my business; they are the leaders.
nyarlathotep_•8mo ago
Was on a few of these as a consultant, all major F500 companies. Most recent was a few months ago.

Every instance was some variation of a RAG chat/LangGraph thing. On multiple occasions, I heard "I don't see what value this has over ChatGPT", except they now had 5-6 figure cloud bills to go with it.

Technical users really weren't thrilled with it (they wanted usable insights from their data, something best served by a DB query, but ended up with LLM copypasta of internal docs) and seemed to expect significant functionality and utility on top of "regular" LLM use.

Stakeholders constantly complained (rightfully so) about issues with inaccuracy in responses, or "why is this presented in this fashion", resulting in hours of the data team folks coming up with new prompts and crossing fingers.

pragmatic•8mo ago
So right back to basic data engineering/analytics?

“Why is this dashboard showing this number?”

That’s my concern with any data “insight” magic. How do you debug what it’s telling the users?

asudhakar11•8mo ago
It should be able to tell you what assumptions it made. "Sales is $X because I assumed ARR and calendar year". You're then able to say "great, that's what I wanted" or "no, I want bookings and fiscal year".
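
One way to get that is to bake the assumptions into the response shape itself, so they always travel with the number (a sketch; the field names are purely illustrative):

    from dataclasses import dataclass, field

    @dataclass
    class Answer:
        value: str                    # e.g. "Sales is $4.2M"
        assumptions: list[str] = field(default_factory=list)

    # the model returns its assumptions next to the figure, so the user
    # can correct "ARR, calendar year" to "bookings, fiscal year" and re-ask
    a = Answer(
        value="Sales is $4.2M",
        assumptions=["revenue means ARR", "period is calendar year"],
    )
    print(a.value, "| assuming:", "; ".join(a.assumptions))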
asudhakar11•8mo ago
Why weren’t you able to show usable insights from data?