frontpage.

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
1•DesoPK•53s ago•0 comments

Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•2m ago•1 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
1•mfiguiere•8m ago•0 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
2•meszmate•10m ago•0 comments

Metaphor + Metonymy: "To love that well which thou must leave ere long" (Sonnet 73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•12m ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•27m ago•1 comments

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•31m ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•36m ago•1 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
2•gmays•37m ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•38m ago•1 comments

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•43m ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•46m ago•1 comments

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•49m ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•56m ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•57m ago•0 comments

Real-Time ETL for Enterprise-Grade Data Integration

https://tabsdata.com
1•teleforce•1h ago•0 comments

Economics Puzzle Leads to a New Understanding of a Fundamental Law of Physics

https://www.caltech.edu/about/news/economics-puzzle-leads-to-a-new-understanding-of-a-fundamental...
3•geox•1h ago•0 comments

Switzerland's Extraordinary Medieval Library

https://www.bbc.com/travel/article/20260202-inside-switzerlands-extraordinary-medieval-library
2•bookmtn•1h ago•0 comments

A new comet was just discovered. Will it be visible in broad daylight?

https://phys.org/news/2026-02-comet-visible-broad-daylight.html
3•bookmtn•1h ago•0 comments

ESR: Comes the news that Anthropic has vibecoded a C compiler

https://twitter.com/esrtweet/status/2019562859978539342
2•tjr•1h ago•0 comments

Frisco residents divided over H-1B visas, 'Indian takeover' at council meeting

https://www.dallasnews.com/news/politics/2026/02/04/frisco-residents-divided-over-h-1b-visas-indi...
4•alephnerd•1h ago•5 comments

If CNN Covered Star Wars

https://www.youtube.com/watch?v=vArJg_SU4Lc
1•keepamovin•1h ago•1 comments

Show HN: I built the first tool to configure VPSs without commands

https://the-ultimate-tool-for-configuring-vps.wiar8.com/
2•Wiar8•1h ago•3 comments

AI agents from 4 labs predicting the Super Bowl via prediction market

https://agoramarket.ai/
1•kevinswint•1h ago•1 comments

EU bans infinite scroll and autoplay in TikTok case

https://twitter.com/HennaVirkkunen/status/2019730270279356658
6•miohtama•1h ago•5 comments

Benchmarking how well LLMs can play FizzBuzz

https://huggingface.co/spaces/venkatasg/fizzbuzz-bench
1•_venkatasg•1h ago•1 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
25•SerCe•1h ago•18 comments

Octave GTM MCP Server

https://docs.octavehq.com/mcp/overview
1•connor11528•1h ago•0 comments

Show HN: Portview – what's on your ports (diagnostic-first, single binary, Linux)

https://github.com/Mapika/portview
3•Mapika•1h ago•0 comments

Voyager CEO says space data center cooling problem still needs to be solved

https://www.cnbc.com/2026/02/05/amazon-amzn-q4-earnings-report-2025.html
1•belter•1h ago•0 comments

"AI discourse" is a joke

https://purplesyringa.moe/blog/ai-discourse-is-a-joke/
14•bertman•6mo ago

Comments

bn-l•6mo ago
" and hating minorities are all “bad” for the same reason, "

What does this have to do with AI?

Also, why hedge everything you're about to say with a big disclaimer?:

> Disclaimer: I believe that what I’m saying in this post is true to a certain degree, but this sort of logic is often a slippery slope and can miss important details. Take it with a grain of salt, more like a thought-provoking read than a universal claim. Also, it’d be cool if I wasn’t harassed for saying this.

pjc50•6mo ago
> What does this have to do with AI?

The author's general concern about externalization of downsides.

> Also, why hedge everything you're about to say with a big disclaimer?

Because people are extremely rude on the internet. It won't make much of a difference to the actual nitpicking, as I'm sure we'll see; it's more of a sad recognition of the problem.

Expurple•6mo ago
> Also, why hedge everything you're about to say with a big disclaimer?:

Because her previous (Telegram-only) post on a similar topic has attracted a lot of unfounded negative comments that were largely vague and toxic, rather than engaging with her specific points directly and rationally.

She even mentions it later in this post (the part about “worse is better”). Have you not read that? Ironically, you're acting exactly like those people who complain without having read the post.

> What does this have to do with AI?

It's literally explained in the same sentence, right after the part that you quote. Why don't you engage more specifically with that explanation? What's unclear about it?

VMG•6mo ago
Unfortunately, she is not contributing much to the discourse either. She just wants it to shift to the topics she cares about.

emsign•6mo ago
Isn't that what discourse is?

purplesyringa•6mo ago
Treat it as meta-discourse. It's not about shifting goals, it's about finding an indirect way to achieve the same goals via other topics.

Expurple•6mo ago
Technically, it's still "AI discourse". It's just about the underlying ethics, rather than best practices or the underlying tech.

emsign•6mo ago
What annoys me about AI discourse are two things that in the end never seem to be considered:

1. Who's gonna pay back the investors their trillions of dollars and with what?

2. Didn't we have to start thinking about reducing energy consumption like at least a decade ago?

agoose77•6mo ago
I love this angle, and would take it further. I'm starting to think about AI in the same way that we think about food ethics.

Some people are vegan, some people eat meat. Usually, these two parties get on best when they can at least understand each other's perspectives and demonstrate an understanding of the kinds of concerns the other might have.

When talking to people about AI, I feel much more comfortable when people acknowledge the concerns, even if they're still using AI in their day-to-day.

Expurple•6mo ago
As far as I can tell, the AGI fantasy is the usual counter-argument to this. "AI is going to make everything 10x more productive", "AI is going to invent new efficient energy for us", etc. And it's always "just around the corner", to make the problems of the current generation of LLMs seem (soon) irrelevant
khalic•6mo ago
Food for thought!

Our mental models are inadequate to think about these new tools. We bend and stretch our familiar patterns and try to slap them on these new paradigms to feel a little more secure. It’ll settle down once we spend more time with the tech.

The discourse is crawling with overgeneralization and tautologies because of this. This won't do us any good.

We need to take a deep breath, observe, theorize, experiment, share our findings and cycle.

reedf1•6mo ago
> To me, overusing AI, destroying ecosystems, covering up fuck-ups, and hating minorities are all “bad” for the same reason, which I can mostly sum up as a belief that traumatizing others is “bad”. You cannot prove that AI overuse is “bad” to a person who doesn’t think in this framework, like a nihilist that treats others’ lives like a nuisance.

But you haven't defined the framework. You know a bunch of people for whom AI is bad for a bunch of handwavey reasons - not from any underlying philosophical axioms. You are doing what you are accusing others of. In my ethical framework the other stated things can be shown to be bad; it is not as clear for AI.

If you want to take a principled approach you need to define _why_ AI is bad. There have been cultures and religions across time that have done this for other emerging technologies - the Luddites, the Amish, etc. They have good ethical arguments for this - and it's possible they are right.

purplesyringa•6mo ago
It's not hard to formulate why "AI" is bad, at least in its current form. It destroys the education system, it's dangerous for the environment, things like deepfakes drive us further towards post-truth, it decreases product quality, it's replacing artists and similar professions (rather than technical ones) without creating new jobs in the same area, it increases inequality, and so on.

Of course, none of these are caused by the technology itself, but rather by people who drive this cultural shift. The framework difference comes from people believing in short-term gains (like revenue, abusing the novelty factor, etc.) vs those trying to reasonably minimize harm.

reedf1•6mo ago
> The framework difference comes from people believing in short-term gains (like revenue, abusing the novelty factor, etc.) vs those trying to reasonably minimize harm.

OK - so your framework is "harm minimization". This is kind of a negative utilitarian philosophy. Not everyone thinks this way, and you cannot really expect them to either. But an argument _for_ AI from a negative utilitarian PoV is also easy to construct. What if AI accelerates the discovery of anti-cancer treatments or revolutionizes green tech? What if AI can act as a smart resource allocator and enable small hi-tech sustainable communes? These are not things you can easily prove AI won't enable, even within your framework.

TheAceOfHearts•6mo ago
This post feels a bit meandering.

One point which I consider worth making is that LLMs have enabled a lot of people to solve real-world problems, even if the solutions are sometimes low quality. The reality is that in many cases the only choice is between a low quality solution and no solution at all. Lots of problems are too small or niche to be able to afford hiring a team of skilled programmers for a high quality solution.

aziaziazi•6mo ago
> Lots of problems are too small or niche to be able to afford hiring a team of skilled programmers

Let's stay with the (at minimum) low-quality solution: what would someone do without AI?

- ask on a forum (Facebook, Reddit, Ask HN, specialised forums…)

- ask a neighbor if he knows someone knowledgeable (2 or 3 relations can lead you to many experts)

- go to the library. Time-consuming, but you might learn something else too and improve your knowledge and IQ

- think again about the problem (ask "why?" many times, think outside the box…)

Expurple•6mo ago
Indeed, the choice is actually between:

- "a low quality solution"

- "a low-quality solution, but you also spend extra time (sometimes - other people's time) on learning to solve the problem yourself"

- "a high-quality solution, but you've spent years on becoming an expert is this domain"

It's good that you brought this up.

Often, learning to solve a class of problems is simply not a priority. Low-quality vibe-coded tools are usually a means to an end. And the end goals that they achieve are often not even the most important end goals that you have. Digging so deep into the details of those is not worth it. Those are temporary, ad-hoc things.

In the original post, the author references our previous discussion about "Worse Is Better". It's a very relevant topic! Over there, I actually made a very similar point about priorities. "Worse" software is "better" when it's just a component of a system where the other components are more important. You want to spend as much time as possible on those other components, and not on the current component.

A (translated) example that I gave in that thread:

> In the 1970s, K&R were doing OS research. Not PL research. When they needed to port their OS, they hacked a portable low-level language for that task. They didn't go deep into "proper" PL research that would take years. They ported their OS, and then returned straight to OS research and achieved breakthroughs in that area. As intended.

> It's very much possible that writing a general, secure-by-design instrument would take way more time than adding concrete hacks on the application level and producing a result that's just as good (secure or whatever) when you look at the end application.

Expurple•6mo ago
To be fair, the post says that "overusing AI [is] bad" and "problems [are] caused by widespread AI use" (emphasis mine).

I believe they aren't against all AI use, and aren't against the use that you describe. They are against knowingly cutting corners and pushing the cost onto the users (when you have an option not to). Or onto anything else, be it the environment or the job market.

123yawaworht456•6mo ago
the idea that we're having a discourse at all is a delusion. the sheer volume of capital behind this tech is enough to roll over all the feeble cries of impotent rage.

Expurple•6mo ago
> the sheer volume of capital behind this tech is enough to roll over all the feeble cries of impotent rage.

This part is true. But at the same time, it's fairly easy to filter only real, established personal blogs and see that the same type of "practical" AI discourse (that the author dislikes) is present (and dominant) there too.

Yizahi•6mo ago
Maybe empathy is a dead-end evolutionary trait and humanity will just self-select for people without it altogether? :) Kinda like what's implied in Blindsight.

Expurple•6mo ago
It's easy to argue that this selection is happening in business and politics. But I don't see how it could be relevant to reproduction, on an evolutionary scale.

I haven't read Blindsight, though.

Yizahi•6mo ago
A hypothesis is approximately this, paraphrasing: human society rewards sociopathic humans, who filter to the top of politics, corporate control, figures of influence, etc. Watts goes even further and throws in a hypothesis that even consciousness may be an artifact which humans will evolve out of.

Next are my own thoughts: local short-term selection is not enough to put evolutionary pressure on us, and in general evolution doesn't work like this. But I suspect that a lot of empathy traits and adjacent characteristics are not genetic but a product of education, with some exceptions. So if the whole society (not only CEOs and presidents) starts rewarding sociopathic behavior, parents may educate kids accordingly and the loop will be self-reinforcing. Some tiny examples we can see today: union busting where unions exist; cheating culture, where cheating is normal and encouraged; extreme competitiveness turning into deathmatches (à la the South Korean university insanity, where everyone studies much longer hours than needed, constantly escalating the situation).

Expurple•6mo ago
> I suspect that a lot of empathy traits and adjacent characteristics are not genetic but a product of education

Ah, I see. The argument makes total sense if that's the case.

I'm just not used to talking about learned behaviors in terms of "evolution"

Expurple•6mo ago
Oh well. I guess I'll have to translate my original Telegram comment.

---

I agree with the connections that you make in this post. I like it.

But I disagree that purely technical discussions around LLMs are "meaningless" and "miss the point". I think appealing to reason through "it will make your own work more pleasant and productive" (for example, if you don't try to vibecode an app that you'll need to maintain later) is an activity that has a global positive effect too.

Why? Because the industry has plenty of cargo cults that don't benefit you, not even at someone else's expense! This pisses me off the most. Irrationality. Selfishness is at least something that I can understand.

I'll throw in the idea that cultivating rationality helps cultivate a compassionate society. No matter how you look at it, most people have compassion in them. You don't even need to "activate" it. But I feel like, due to misunderstanding a situation or due to logical fallacies, people's compassion often manifests as actions that only make everything worse. The problem isn't that people don't try to help others. A lot of people try, but do it wrong :(

A simple example: most of the politically active people with a position that's opposite to yours. (The "yours" in this example is relative and applicable to anyone; I don't mean the author specifically.)

In general, you should fight the temptation to perceive people around you as (even temporarily) ill-intentioned egoists. Most of the time, that's not the case. "Giving the benefit of the doubt" is a wonderful rule of thumb. Assume ignorance and circumstances, rather than selfishness. And try to give people tools and opportunities, instead of trying to influence their moral framework.

I'll also throw in another idea. If a problem has an (ethical) selfish solution, we should choose that. Why? Because it doesn't require any sacrifices. This drastically lowers the friction. Sacrifices are a last resort. Sacrifices don't scale well. Try to think more objectively about whether a sacrifice is really the most efficient solution to the injustice that bothers you. Sacrifices allow you to put yourself on a moral pedestal, but they don't always lead to the most humane outcomes. It's not a zero-sum game.