
France's homegrown open source online office suite

https://github.com/suitenumerique
350•nar001•3h ago•174 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
86•bookofjoe•1h ago•78 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
410•theblazehen•2d ago•151 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
76•AlexeyBrin•4h ago•15 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
10•thelok•1h ago•0 comments

First Proof

https://arxiv.org/abs/2602.05192
31•samasblack•1h ago•18 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
767•klaussilveira•19h ago•240 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
49•onurkanbkrc•4h ago•3 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
24•vinhnx•2h ago•3 comments

Show HN: I'm 15 and built a free tool for reading ancient texts.

https://the-lexicon-project.netlify.app/
5•breadwithjam•32m ago•2 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1019•xnx•1d ago•580 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
154•alainrk•4h ago•187 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
155•jesperordrup•9h ago•56 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
6•marklit•5d ago•0 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
9•mellosouls•2h ago•6 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
15•rbanffy•4d ago•0 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
100•videotopia•4d ago•26 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
7•simonw•1h ago•0 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
152•matheusalmeida•2d ago•41 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
260•isitcontent•19h ago•33 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
34•matt_d•4d ago•9 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
273•dmpetrov•19h ago•145 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
15•sandGorgon•2d ago•3 comments

Google staff call for firm to cut ties with ICE

https://www.bbc.com/news/articles/cvgjg98vmzjo
98•tartoran•1h ago•22 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
544•todsacerdoti•1d ago•262 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
415•ostacke•1d ago•108 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
361•vecti•21h ago•161 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
61•helloplanets•4d ago•63 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
331•eljojo•22h ago•204 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
455•lstoll•1d ago•298 comments

"AI discourse" is a joke

https://purplesyringa.moe/blog/ai-discourse-is-a-joke/
14•bertman•6mo ago

Comments

bn-l•6mo ago
" and hating minorities are all “bad” for the same reason, "

What does this have to do with AI?

Also, why hedge everything you're about to say with a big disclaimer?:

> Disclaimer: I believe that what I’m saying in this post is true to a certain degree, but this sort of logic is often a slippery slope and can miss important details. Take it with a grain of salt, more like a thought-provoking read than a universal claim. Also, it’d be cool if I wasn’t harassed for saying this.

pjc50•6mo ago
> What does this have to do with AI?

The author's general concern about externalization of downsides.

> Also, why hedge everything you're about to say with a big disclaimer?

Because people are extremely rude on the internet. It won't make much of a difference to the actual nitpicking, as I'm sure we'll see; it's more a sad recognition of the problem.

Expurple•6mo ago
> Also, why hedge everything you're about to say with a big disclaimer?:

Because her previous (Telegram-only) post on a similar topic attracted a lot of unfounded negative comments that were largely vague and toxic, rather than engaging with her specific points directly and rationally.

She even mentions it later in this post (the part about “worse is better”). Have you not read that? Ironically, you're acting exactly like those people who complain without having read the post.

> What does this have to do with AI?

It's literally explained in the same sentence, right after the part that you quote. Why don't you engage more specifically with that explanation? What's unclear about it?

VMG•6mo ago
Unfortunately she is not contributing much to the discourse either. She just wants it to shift to the topics she cares about.
emsign•6mo ago
Isn't that what discourse is?
purplesyringa•6mo ago
Treat it as meta-discourse. It's not about shifting goals; it's about finding an indirect way to achieve the same goals via other topics.
Expurple•6mo ago
Technically, it's still "AI discourse". It's just about the underlying ethics, rather than best practices or the underlying tech.
emsign•6mo ago
What annoys me about AI discourse are two things that in the end never seem to be considered:

1. Who's gonna pay back the investors their trillions of dollars and with what?

2. Didn't we have to start thinking about reducing energy consumption like at least a decade ago?

agoose77•6mo ago
I love this angle, and would take it further. I'm starting to think about AI in the same way that we think about food ethics.

Some people are vegan, some people eat meat. Usually, these two parties get on best when they can at least understand each other's perspectives and demonstrate an understanding of the kinds of concerns the other might have.

When talking to people about AI, I feel much more comfortable when people acknowledge the concerns, even if they're still using AI in their day-to-day.

Expurple•6mo ago
As far as I can tell, the AGI fantasy is the usual counter-argument to this. "AI is going to make everything 10x more productive", "AI is going to invent new efficient energy for us", etc. And it's always "just around the corner", to make the problems of the current generation of LLMs seem (soon) irrelevant.
khalic•6mo ago
Food for thought!

Our mental models are inadequate to think about these new tools. We bend and stretch our familiar patterns and try to slap them on these new paradigms to feel a little more secure. It’ll settle down once we spend more time with the tech.

The discourse is crawling with overgeneralizations and tautologies because of this. This won't do us any good.

We need to take a deep breath, observe, theorize, experiment, share our findings, and repeat the cycle.

reedf1•6mo ago
> To me, overusing AI, destroying ecosystems, covering up fuck-ups, and hating minorities are all “bad” for the same reason, which I can mostly sum up as a belief that traumatizing others is “bad”. You cannot prove that AI overuse is “bad” to a person who doesn’t think in this framework, like a nihilist that treats others’ lives like a nuisance.

But you haven't defined the framework. You know a bunch of people for whom AI is bad for a bunch of hand-wavy reasons, not from any underlying philosophical axioms. You are doing what you are accusing others of. In my ethical framework, the other stated things can be shown to be bad; it is not as clear for AI.

If you want to take a principled approach, you need to define _why_ AI is bad. There have been cultures and religions across time that have done this for other emerging technologies: the Luddites, the Amish, etc. They have good ethical arguments for this, and it's possible they are right.

purplesyringa•6mo ago
It's not hard to formulate why "AI" is bad, at least in its current form. It destroys the education system, is dangerous for the environment, drives us further towards post-truth via things like deepfakes, decreases product quality, replaces artists and similar professions (rather than technical ones) without creating new jobs in the same area, increases inequality, and so on.

Of course, none of these are caused by the technology itself, but rather by people who drive this cultural shift. The framework difference comes from people believing in short-term gains (like revenue, abusing the novelty factor, etc.) vs those trying to reasonably minimize harm.

reedf1•6mo ago
> The framework difference comes from people believing in short-term gains (like revenue, abusing the novelty factor, etc.) vs those trying to reasonably minimize harm.

OK, so your framework is "harm minimization". This is kind of a negative utilitarian philosophy. Not everyone thinks this way, and you cannot really expect them to either. But an argument _for_ AI from a negative utilitarian PoV is also easy to construct. What if AI accelerates the discovery of anti-cancer treatments or revolutionizes green tech? What if AI can act as a smart resource allocator and enable small hi-tech sustainable communes? These are not things you can easily prove AI won't enable, even within your framework.

TheAceOfHearts•6mo ago
This post feels a bit meandering.

One point which I consider worth making is that LLMs have enabled a lot of people to solve real-world problems, even if the solutions are sometimes low quality. The reality is that in many cases the only choice is between a low-quality solution and no solution at all. Lots of problems are too small or niche to be able to afford hiring a team of skilled programmers for a high-quality solution.

aziaziazi•6mo ago
> Lots of problems are too small or niche to be able to afford hiring a team of skilled programmers

Let’s stay with the (at minimum) low-quality solution: what would someone do without AI?

- ask on a forum (Facebook, Reddit, Ask HN, specialized forums…)

- ask a neighbor if he knows someone knowledgeable (2 or 3 relations can lead you to many experts)

- go to the library. Time-consuming, but you might learn something else too and improve your knowledge and IQ

- think again about the problem (ask "why?" many times, think outside the box…)

Expurple•6mo ago
Indeed, the choice is actually between:

- "a low quality solution"

- "a low-quality solution, but you also spend extra time (sometimes - other people's time) on learning to solve the problem yourself"

- "a high-quality solution, but you've spent years on becoming an expert is this domain"

It's good that you brought this up.

Often, learning to solve a class of problems is simply not a priority. Low-quality vibe-coded tools are usually a means to an end. And the end goals that they achieve are often not even the most important end goals that you have. Digging so deep into the details of those is not worth it. Those are temporary, ad-hoc things.

In the original post, the author references our previous discussion about "Worse Is Better". It's a very relevant topic! Over there, I actually made a very similar point about priorities. "Worse" software is "better" when it's just a component of a system where the other components are more important. You want to spend as much time as possible on those other components, and not on the current component.

A (translated) example that I gave in that thread:

> In the 1970s, K&R were doing OS research. Not PL research. When they needed to port their OS, they hacked together a portable low-level language for that task. They didn't go deep into "proper" PL research that would take years. They ported their OS, and then returned straight to OS research and achieved breakthroughs in that area. As intended.

> It's very much possible that writing a general, secure-by-design instrument would take way more time than adding concrete hacks on the application level and producing a result that's just as good (secure or whatever) when you look at the end application.

Expurple•6mo ago
To be fair, the post says that "overusing AI [is] bad" and "problems [are] caused by widespread AI use" (emphasis mine).

I believe they aren't against all AI use and aren't against the use that you describe. They are against knowingly cutting corners and pushing the cost onto the users (when you have an option not to). Or onto anything else, be it the environment or the job market.

123yawaworht456•6mo ago
the idea that we're having a discourse at all is a delusion. the sheer volume of capital behind this tech is enough to roll over all the feeble cries of impotent rage.
Expurple•6mo ago
> the sheer volume of capital behind this tech is enough to roll over all the feeble cries of impotent rage.

This part is true. But at the same time, it's fairly easy to filter for only real, established personal blogs and see that the same type of "practical" AI discourse (that the author dislikes) is present (and dominant) there too.

Yizahi•6mo ago
Maybe empathy is a dead-end evolutionary trait and humanity will just self-select people without it altogether? :) Kinda like what's implied in Blindsight.
Expurple•6mo ago
It's easy to argue that this selection is happening in business and politics. But I don't see how it could be relevant to reproduction, on an evolutionary scale.

I haven't read Blindsight, though.

Yizahi•6mo ago
A hypothesis is approximately this, paraphrasing: human society rewards sociopathic humans, who filter to the top of politics, corporate control, figures of influence, etc. Watts goes even further and throws in a hypothesis that even consciousness may be an artifact which humans will evolve out of.

Next are my own thoughts: local short-term selection is not enough to put evolutionary pressure on us, and in general evolution doesn't work like this. But I suspect that a lot of empathy traits and adjacent characteristics are not genetic but a product of education, with some exceptions. So if the whole society (not only CEOs and presidents) starts rewarding sociopathic behavior, parents may educate kids accordingly and the loop will be self-reinforcing.

Some tiny examples we can see today: union busting where unions exist; cheating culture, where cheating is normal and encouraged; extreme competitiveness turning into deathmatches (à la the South Korean university insanity, where everyone studies much longer hours than needed, constantly escalating the situation).
Expurple•6mo ago
> I suspect that a lot of empathy traits and adjacent characteristics are not genetic but a product of education

Ah, I see. The argument makes total sense if that's the case.

I'm just not used to talking about learned behaviors in terms of "evolution".

Expurple•6mo ago
Oh well. I guess I'll have to translate my original Telegram comment.

---

I agree with the connections that you make in this post. I like it.

But I disagree that purely-technical discussions around LLMs are "meaningless" and "miss the point". I think appealing to reason through "it will make your own work more pleasant and productive" (for example, if you don't try to vibecode an app that you'll need to maintain later) is an activity that has a global positive effect too.

Why? Because the industry has plenty of cargo cults that don't benefit you even at someone else's expense! This pisses me off the most. Irrationality. Selfishness is at least something that I can understand.

I'll throw in the idea that cultivating rationality helps cultivate a compassionate society. No matter how you look at it, most people have compassion in them. You don't even need to "activate" it. But I feel like, due to their misunderstanding of a situation, or due to logical fallacies, people's compassion often manifests as actions that only make everything worse. The problem isn't that people don't try to help others. A lot of people try, but do it wrong :(

A simple example: most of the politically active people with a position that's opposite to yours. (The "yours" in this example is relative and applicable to anyone; I don't mean the author specifically.)

In general, you should fight the temptation to perceive people around you as (even temporarily) ill-intentioned egoists. Most of the time, that's not the case. "Giving the benefit of the doubt" is a wonderful rule of thumb. Assume ignorance and circumstances, rather than selfishness. And try to give people tools and opportunities, instead of trying to influence their moral framework.

I'll also throw in another idea. If a problem has an (ethical) selfish solution, we should choose that. Why? Because it doesn't require any sacrifices. This drastically lowers the friction. Sacrifices are a last resort. Sacrifices don't scale well. Try to think more objectively about whether that's the most efficient solution to the injustice that bothers you. Sacrifices allow you to put yourself on a moral pedestal, but they don't always lead to the most humane outcomes. It's not a zero-sum game.