Ask HN: What would convince you to take AI seriously?

12•atleastoptimal•6mo ago
OpenAI recently announced that an AI model/system it developed won a gold medal at the IMO. The IMO is a very difficult exam; only the best high schoolers in the world even qualify, let alone win gold. Those who do often go on to cutting-edge mathematical research, like Terence Tao, who won the Fields Medal in 2006. It has also been rumored that DeepMind achieved the same result with a yet-to-be-released model.

Now, success on a tough math exam isn't "automating all human labor," but it is certainly a benchmark many thought AI would not achieve easily. Even so, many are claiming it isn't really a big deal, and that humans will still be far smarter than AIs for the foreseeable future.

My question is, if you are in the aforementioned camp, what would it take for you to adopt a frame of mind roughly analogous to "It is realistic that AI systems will become smarter than humans, and could automate all human labor and cognitive outputs within a single-digit number of years"?

Would it require seeing a humanoid robot perform some difficult task? (The Metaculus definition of AGI requires that a robot be able to satisfactorily assemble a circa-2021 Ferrari 312 T4 1:8-scale model car, or the equivalent.) Would it involve a Turing test of sufficient rigor? I'm curious what people's personal definition of "ok this is really real" is.

Comments

neximo64•6mo ago
Do you take it seriously? Why are you asking?
atleastoptimal•6mo ago
Yeah, I do. However, a lot of smart people on here don't, so I'm curious what their reasoning is.
bigyabai•6mo ago
Significantly cheaper labor. If it's displacing real thought work then the results should be self-evident.
alganet•6mo ago
In my opinion, that's a silly question.

Why do I even need to make up my mind about it?

rfarley04•6mo ago
It's ok (good?) to not have opinions about everything. Something we'll probably never see from an AI (as it's defined and built today).
saadn92•6mo ago
My thought is that some people don't want to adapt, but will be forced to adapt when AI is used everywhere. I'm not a fan of AI either, but it does have its uses, and if you don't use it now, you'll have to learn it later on.
noncoml•6mo ago
Convince me what? That it’s ready to replace thought work?

The moment it can do that is the moment we reach the singularity.

We are not there yet. Everyone will know when we hit that point.

For now, it’s a great tool for helping with thought work, and that’s about it.

andy99•6mo ago
[Withdrawing my comment, I don't think the original post was in good faith]

From OP's other comments:

> A lot of people here have an emotional aversion to accepting AI progress. They’re deep in the bargaining/anger/denial phase.

ranger_danger•6mo ago
> display intelligence

Defined as what, by who?

> a constrained problem that someone made up

How is that different from what humans do when asking questions?

atleastoptimal•6mo ago
Do you think my personal interpretation of people's sensibilities with respect to the subject matter of a question invalidates the question itself? I was noting how many smart people dismiss concrete evidence of AI progress. I feel it's useful to note the potential ego-preserving elements of certain beliefs since they prevent otherwise smart people from accepting reality.

I too wish AI progress weren't happening as fast as it is. As a software developer, I want to imagine a future where my skills are useful. However, I haven't seen much convincing evidence or argument on this site that appropriately critiques short-term AI timelines without resorting to logical fallacies, name-calling, ad hominems, or other tired attempts at zingers that contribute nothing to the discourse.

al_borland•6mo ago
It isn’t about how well AI can answer solved problems. Can it invent the future?

And let’s say AI does automate all human labor… what’s the plan? That happening will lead to chaos without some massive changes in how society is organized and functions. That change itself will be chaotic, massively disruptive, and painful for millions. If someone can’t answer that question, they have no business hyping up the end of human involvement in civilization.

cjoelrun•6mo ago
Unless it’s their business to make/use said AI? Which will likely be a lot of businesses.
al_borland•6mo ago
It’s still a bad plan. Who is going to buy their stuff, with what money, when all jobs are replaced by robots and AI?

Capitalism is driving this hype around cost cutting with AI, but capitalism requires people have capital to buy various goods and services. Where is that going to come from when unemployment hits 100%? Who are the customers?

Why would anyone be excited about this future before solving for this problem?

queenkjuul•6mo ago
Well, because the investors are excited at the prospect of living lives of lavish robot-serviced luxury, even if that means all the rest of us need to die.
enknee1•6mo ago
The larger issue is that money is fundamentally a record of human effort (unless we're talking corporate value and then it's something a bit more).

With the automation of labor and cognitive effort, MONEY won't matter. They don't need customers. They only need the automation required to produce, which will be broadly and cheaply available all the way to the end, because people will be competing for disappearing jobs.

There is no precedent for this kind of change; think the Internet, computers, and the assembly line all packed into a 5-year window, globally. And consider that there's no apparent end to the level of development and impact. Using historical metrics (like customer base or resource availability) is not going to help in understanding what's coming.

heavyset_go•6mo ago
The economy as we know it doesn't matter to technofeudalists, it's just the fastest way to get what they want for the time being.

The last 50-80 years have been an aberration in terms of the distribution of wealth, income, and power. What AI owners want is a return to a world of lords and peasants, and with that comes a shift from an economy that serves the needs of consumers to an economy that suits the needs of those with incredible wealth.

Institutional investors will leave the middle and lower classes behind in favor of making a ton of money serving the needs of the incredibly rich, their families and their friends, and that will be the new formal economy. Everyone else will be served by informal economies that don't see institutional investment.

See also: Citigroup's plutonomy paper[1].

[1] https://delong.typepad.com/plutonomy-1.pdf

exabrial•6mo ago
Have it admit it doesn’t know, instead of sounding like a Reddit thread full of “experts” trying to one-up each other.
ofrzeta•6mo ago
How could this even be possible with the current architectures? A statistical machine that statistically produces an utterance about what other utterances it is capable of producing?
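As a toy illustration of that point, with made-up numbers: whatever such a machine says about its own knowledge is itself just another sampled utterance.

    import random

    # Toy next-token sampler: the model emits whatever continuation its
    # distribution favors; "I don't know" is just another high-probability
    # utterance, not introspection. Probabilities here are made up.
    dist = {"Paris": 0.7, "Lyon": 0.2, "I don't know": 0.1}
    token = random.choices(list(dist), weights=list(dist.values()))[0]
    print(token)
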
exabrial•6mo ago
I have no idea; I was answering the question. I’m also slightly annoyed by its proliferation into literally everything while providing zero actual value, with every company ignoring the insane environmental cost.
mcphage•6mo ago
For development work: something like DHH’s “build a blog in 15 minutes” demo.

For original work: solving some well-known but unsolved problem, like the Collatz conjecture.
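
For context, the Collatz conjecture asks whether repeatedly applying a simple map always reaches 1; a minimal Python sketch of that map, purely illustrative:

    def collatz_steps(n: int) -> int:
        # Count applications of the Collatz map (3n+1 if odd, n/2 if even)
        # until n reaches 1. The open question: does this loop always terminate?
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print([collatz_steps(n) for n in range(1, 10)])  # [0, 1, 7, 2, 5, 8, 16, 3, 19]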

rascul•6mo ago
To start with, AI should first exist.
ksherlock•6mo ago
When you ask it to do something for you and it tells you to fuck off and do it yourself.
CamperBob2•6mo ago
OpenAI's models were pretty much doing exactly that for a while, until everybody started complaining about them being lazy.
strken•6mo ago
Four things: do meaningful novel work, learn over the course of a long conversation without poisoning its context, handle lies and accidental untruths, and generally be able to onboard itself and begin doing useful work for a business given the same resources as a new hire.

This isn't an exhaustive list; it's just intended to illustrate when I'd start seriously thinking AGI was possible with incremental improvements.

I take AI seriously in the sense that it's useful, can solve problems, and represents a lot of value for a lot of businesses, but not in the sense that I think the current methods are headed for AGI without further breakthroughs. I'm also not an expert.

armchairhacker•6mo ago
I take it seriously, but I'd take it more seriously if I could use it to solve real problems without encountering issues, or see others use it to solve real problems without issues.

There are many tasks AI improves, e.g. generic writing, brainstorming, research, and simple chores. However, I keep finding that it struggles with more complicated or open-ended problems, making obvious mistakes. If it stops making obvious mistakes, or even if we simply discover automated ways to correct them, I'd take it more seriously.

AI is also not creative, IMO: I find AI-generated art noticeably lower-quality than real art, even when it looks better on the surface, because it doesn't have as much semantic detail. If I find a piece of AI-generated art that is almost as good as the human-made equivalent, or one so impressive that it would be very hard to create without AI, I'd also take it more seriously.

brandonmenc•6mo ago
I use LLMs daily as a software engineer; they save me dozens of hours a week, and I can’t imagine going back to a time without them.

But call me when you can load all human knowledge circa 1905 and have them spit out the Theory of Relativity.

And even then I might shift my goalposts.

agersant•6mo ago
I will take AI seriously when the data used for training is gathered with consent from its authors.
saagarjha•6mo ago
Other people taking it seriously, honestly. It's hard to take it seriously when everyone thinks it will dethrone God or put everyone out of a job, and that if you're not using it you're going back to the Stone Age, all while it embarrasses them repeatedly.
heavyset_go•6mo ago
I would start worrying if AI models could understand, reason, learn, and incorporate new information on the fly without retraining or just stuffing information into context windows, RAG, etc. The worry would also depend on the economics of the entire model lifecycle, as well as the current state of mechanical automation.

We aren't getting that with next-token generators. I don't think we'll get there by throwing shit at the wall and seeing what sticks, either, I think we'll need a deeper understanding of the mind and what intelligence actually is before we can implement it on our own, virtually or otherwise.
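
A minimal sketch of what "stuffing information in context windows" (RAG) amounts to, with naive keyword overlap standing in for a real vector search and all names illustrative: retrieved text is pasted into the prompt for one call, and nothing is learned.

    # RAG-style context stuffing: retrieved text is pasted into the prompt
    # and forgotten after the call; the model's weights never change.
    def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
        q = set(query.lower().split())
        # Naive keyword overlap stands in for a real embedding search.
        return sorted(documents, key=lambda d: -len(q & set(d.lower().split())))[:k]

    def build_prompt(query: str, documents: list[str]) -> str:
        context = "\n".join(retrieve(query, documents))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"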

Similarly, we're pretty good at creating purpose-built machines, but general-purpose machinery is still in its infancy. The hand is still the most useful universal tool we have. It's hard to compete with the human mind + body when it comes to adapting to and manipulating the environment with purpose. There are quite literally billions of them, they create themselves, and their labor is really cheap, too.

There's my serious answer.

queenkjuul•6mo ago
Write something fictional that doesn't suck and doesn't sound like it was written by a computer.

Idk though, I'm not sure you could ever convince me to agree that anything can replace all human labor and cognitive output.

disambiguation•6mo ago
Once it stops making obvious mistakes on a regular basis, then I'll take it seriously.
johnnienaked•6mo ago
That's like asking what I want to hear in a song.

You just know when it's good.

devn0ll•6mo ago
When it starts curing disease for real, like one or two sessions: done.

Because then I know: they have been using it for real human benefit without trying to get humans hooked on recurring costs.

When it starts solving actual human problems like climate, or starts filling in our gaps of knowledge in science. When it starts lifting humans up to higher ground instead of replacing them to make a buck.

xigoi•6mo ago
It is not a big deal, because OpenAI has been known to cheat on LLM benchmarks before, and I have no reason to believe that the AI actually solved the problems by itself without training on the solutions. I’ll be more impressed if similar performance is achieved by an open-source model that can be independently verified.
iluvlawyering•6mo ago
Absolutely nothing as drastic as replacing all human labor can happen in a single-digit number of years. Governments can be overthrown and continents conquered, but a peaceful transition from labor-based exchange value to any other model of distributing societal production will, under prevailing conditions, take at least an entire generation of human beings (10-20 years at minimum, but most likely the entire adult life of a generation, so about 40-50 years).

The commandeering of surplus is not the issue; the issue is the generation of surplus in the first place, given that capital holders are beholden to general consumption as a matter of brute economic fact, an inherent contradiction of the capitalist structure: asset values, including even money itself, are entirely dependent on the relative value of human labor. All profit is value taken by one person from another, but the capitalist requires the "enterprise" itself to exist in order to have a profit to take from anyone (whether another capitalist or a laborer, depending on relative monopoly power, which translates to expropriation through both the market for goods/services and the market for labor itself).

jeisc•6mo ago
All AI workers must realize that the wall of misunderstanding is in the language itself. Everything written down about the phenomenal world must be tested by experiment to capture the details of the infinite possible contexts. Answering multiple-choice questions is a mere circus trick, while something like building a bridge over a chasm is a true undertaking from conception to completion.
jfengel•6mo ago
My AI professor, in the early 90s, described AI like this:

"In the 60s, we wanted to build computers that acted like people. Not just people, but smart people. So what do smart people do? We play chess! So we spent a lot of time beating chess and learned basically nothing about AI."

Beating the Math Olympiad strikes me as much the same. They're solving "hard" problems, but not solving easy ones.

I want a robot that can clean a toilet. Hand it a brush, send it into the room, and get it clean. Then have it make the bed, without crushing any of the stuff strewn haphazardly about. Something humans do for minimum wage because anybody at all can do it.

Physical manipulation of the real world isn't strictly required for AI, but as a test it rules out solving the "hard" problems without solving the "easy" ones. The real world is very unforgiving of automata in uncontrolled circumstances, yet operating in it is something animals (not just humans) do with minimal effort.