frontpage.

ChromaDB Explorer

https://www.chroma-explorer.com/
1•arsentjev•42s ago•0 comments

US Debt Monitor Live

https://www.us-debt-clock.com/
1•vednig•1m ago•0 comments

Show HN: Smart Tab Suspender – Reduce Chrome memory usage by autosuspending tabs

https://chromewebstore.google.com/detail/smart-tab-suspender/mmhfonkfehekiofpkjiofjeoofloidog
1•yavuzyildirim•4m ago•0 comments

Infinite Collaborative Word Search Game

https://words.zip/
2•bookofjoe•4m ago•0 comments

Crafting Interpreters

https://craftinginterpreters.com/
3•tosh•4m ago•0 comments

OpenAI Forges Multibillion-Dollar Computing Partnership with Cerebras

https://www.wsj.com/tech/ai/openai-forges-multibillion-dollar-computing-partnership-with-cerebras...
3•rbanffy•5m ago•1 comments

Ugandans, Iranians turn to Dorsey's messaging app Bitchat in web crackdowns

https://www.reuters.com/business/media-telecom/ugandans-iranians-turn-dorseys-messaging-app-bitch...
1•_djo_•7m ago•0 comments

Analyzing my own genome with DRAGEN and Claude

https://www.dddiaz.com/post/t1d-genome-analysis-report/
1•dddiaz1•10m ago•0 comments

Training My Smartwatch to Track Intelligence

https://dmvaldman.github.io/rooklift/
1•dmvaldman•11m ago•0 comments

Scaling long-running autonomous coding

https://cursor.com/blog/scaling-agents
6•samwillis•12m ago•0 comments

The novelists who predicted our present

https://www.theguardian.com/books/2026/jan/10/mass-surveillance-the-metaverse-making-america-grea...
4•mooreds•13m ago•1 comments

The Hypocrisy over Iran

https://www.telegraph.co.uk/news/2026/01/14/silence-luvvies-iran-exposes-left-war-on-west-middle-...
4•midlander•13m ago•3 comments

Germany, Other NATO Allies Sending Troops to Greenland Amid Trump Threats

https://www.newsweek.com/greenland-germany-sending-troops-nato-donald-trump-threats-11361535
4•mooreds•14m ago•0 comments

Former NYC Mayor Eric Adams Accused of Crypto Pump and Dump with NYC Token

https://gizmodo.com/former-nyc-mayor-eric-adams-accused-of-crypto-pump-and-dump-with-nyc-token-20...
4•pseudolus•18m ago•1 comments

DoorDash and Uber Eats Cost Delivery Workers Millions of Dollars in Tips, NYC

https://gizmodo.com/doordash-and-uber-eats-cost-delivery-workers-millions-of-dollars-in-tips-nyc-...
2•pseudolus•20m ago•0 comments

Six prosecutors quit over push to investigate ICE shooting victim's widow

https://www.nytimes.com/2026/01/13/us/prosecutors-doj-resignation-ice-shooting.html
14•heavyset_go•22m ago•1 comments

Students aren't asking for help anymore. That could be a good thing

https://practicespace.substack.com/p/students-arent-asking-for-help-anymore
2•rappatic•22m ago•0 comments

Why I Use the GPL and Not Cuck Licenses

https://lukesmith.xyz/articles/why-i-use-the-gpl-and-not-cuck-licenses/
2•soygem•22m ago•0 comments

Poking holes into bytecode with peephole optimisations

https://xnacly.me/posts/2026/purple-garden-first-optimisations/
1•xnacly•22m ago•0 comments

Quantum Automated Theorem Proving

https://arxiv.org/abs/2601.07953
1•7777777phil•24m ago•0 comments

Verizon Outage

https://apnews.com/article/verizon-cellular-outage-85d658a4fb6a6175cae8981d91a809c9
2•zephyreon•24m ago•1 comments

The State of OpenSSL for pyca/cryptography

https://cryptography.io/en/latest/statements/state-of-openssl/
7•SGran•26m ago•0 comments

Show HN: Distribute AI agent test runs across your spare machines via `rr`

https://github.com/rileyhilliard/rr
1•RileyHilliard•30m ago•0 comments

Ui.dev and Fireship Join Forces

https://fireship.dev/uidotdev-and-fireship-join-forces
3•JustSkyfall•31m ago•0 comments

Germany joins European partners with troop deployment to Greenland

https://www.reuters.com/world/europe/germany-send-reconnaissance-troops-greenland-government-says...
14•consumer451•32m ago•0 comments

Our First Public Parks: The Forgotten History of Cemeteries (2011)

https://www.theatlantic.com/national/archive/2011/03/our-first-public-parks-the-forgotten-history...
1•toomuchtodo•34m ago•1 comments

Simple to Ornate and Back Again

https://josem.co/simple-to-ornate-and-back-again/
1•nikodunk•35m ago•0 comments

Show HN: quick-sync. TikTok-esque video switch using WebRTC

https://github.com/pion/webrtc/tree/master/examples/quick-switch
1•Sean-Der•36m ago•1 comments

Distributed SQL engine for ultra-wide tables

2•synsqlbythesea•37m ago•0 comments

Data centers are amazing. Everyone hates them

https://www.technologyreview.com/2026/01/14/1131253/data-centers-are-amazing-everyone-hates-them/
3•rbanffy•37m ago•3 comments

The Influentists: AI hype without proof

https://carette.xyz/posts/influentists/
113•LucidLynx•1h ago

Comments

dcre•1h ago
To me, debunking hype has always felt like arguing with an advertisement. A good read about that: https://www.liberalcurrents.com/deflating-hype-wont-save-us/
irishcoffee•21m ago
Hard to take that seriously. It’s a political hit-piece. Which I guess is most things today, but I don’t take those seriously either.

Masks during Covid and LLMs, used as political pawns. It’s kind of sad.

datsci_est_2015•1h ago
Anecdotally, I'm finding that, at least in the Spark ecosystem, AI-generated ideas and code are far from optimal. Some of this comes from misinterpreting the (sometimes poor) documentation, and some of it probably comes from there not being as many open source examples as there are for the CRUD apps that AI "influentists" (to borrow from TFA) so often hype up.

This matters a lot to us because the difference in performance of our workflows can be the difference between $10/day and $1,000/day in costs.

Just like TFA stresses, it’s the expertise in the team that pushes back against poor AI-generated ideas and code that is keeping our business within reach of cash flow positive. ~”Surely this isn’t the right way to do this?”
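
To make that kind of pushback concrete, here is a toy, hypothetical example (invented for illustration, not from our codebase): a row-at-a-time Python UDF of the sort that often gets suggested, versus the equivalent built-in column expression that stays in the JVM and lets Catalyst optimize it.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.appName("udf-vs-builtin").getOrCreate()
df = spark.range(10_000_000).withColumn("price", F.rand() * 100)

# Pattern frequently suggested: a Python UDF. Every row is serialized between
# the JVM and Python workers, which is exactly the kind of thing that turns a
# $10/day job into a $1,000/day job at scale.
@F.udf(returnType=DoubleType())
def with_tax(price):
    return price * 1.2

slow = df.withColumn("total", with_tax("price"))

# What expert review pushes back with: the same logic as a built-in column
# expression, which Catalyst can optimize and which never leaves the JVM.
fast = df.withColumn("total", F.col("price") * 1.2)

fast.agg(F.sum("total")).show()
```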

jimbo808•56m ago
Most text worth paying for (code, contracts, research) requires:

- accountability

- reliability

- validation

- security

- liability

Humans can reliably produce text with all of these features. LLMs can reliably produce text with none of them.

If it doesn't have all of these, it could still be worth paying for if it's novel and entertaining. IMO, LLMs can't really do that either.

doug_durham•1h ago
I never read the tweet as anything other than that an expert with deep knowledge of their domain was able to produce a PoC. Which I still find to be very exciting and worthy of being promoted. This article didn't really debunk much.
chasd00•26m ago
> expert with deep knowledge of their domain

These are the kinds of people who can use generative AI best, IMO. Deep domain knowledge is needed to spot when the model output is wrong even though it sounds 100% correct. I've seen people take a model's output as correct to a shocking degree, like placing large bets at a horse track after uploading a pic of the schedule to ChatGPT. Many people believe whatever a computer tells them but, in their defense, no one has had to question a large calculation done by a calculator until now.

sleekest•1h ago
I agree, if the benefits are so large, there should be clearer evidence (that isn't, "trust me, just use it").

That said, I use Antigravity with great success for self hosted software. I should publish it.

Why haven't I?

* The software is pretty specific to my requirements.

* Antigravity did the vast majority of the work; it feels unworthy?

* I don't really want a project, but that shouldn't really stop me pushing to a public repo.

* I'm a bit hesitant to "out" myself?

Nonetheless, even if I'm not the person to provide it, I'm surprised there isn't more evidence out there.

nirolo•46m ago
I think this "* The software is pretty specific to my requirements." is the biggest part for me. I built something with Antigravity over the holidays that I can use for myself and it solves my use case. I tried thinking about whether it could be helpful for others and pushed it a bit further into a version that could be hosted. Which does not make that much sense, because it is a computationally intense numerical solver for thermal bridges and just awfully slow on a free hosted platform. But the project took a couple of evenings and would otherwise have taken me half a year to complete (and thus never been done).

https://github.com/schoenenbach/thermal-bridge https://thermal-bridge.streamlit.app/
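
To give a rough sense of why a solver like that is computationally heavy, here is a deliberately tiny, generic 2D heat-conduction sketch (not the linked project's code; the grid size, boundary temperatures and iteration count are invented for illustration). A real thermal-bridge calculation uses much finer meshes, multiple material regions and proper convergence checks, which is where the cost comes from.

```python
import numpy as np

# Toy 2D steady-state heat conduction (Laplace equation) solved by Jacobi iteration.
nx, ny = 200, 200
T = np.zeros((ny, nx))
T[0, :] = 20.0     # "indoor" boundary held at 20 C
T[-1, :] = -5.0    # "outdoor" boundary held at -5 C

for _ in range(5000):  # fixed iteration budget; a real solver would check convergence
    T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                            T[1:-1, :-2] + T[1:-1, 2:])
    T[:, 0] = T[:, 1]      # treat the side boundaries as adiabatic
    T[:, -1] = T[:, -2]

print("mean interior temperature:", round(float(T[1:-1, 1:-1].mean()), 2))
```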

pizzathyme•58m ago
My anxiety about falling behind with AI plummeted after I realized many of these tweets are overblown in this way. I use AI every day; how is everyone getting more spectacular results than me? Turns out: they exaggerate.

Here are several real stories I dug into:

"My brick-and-mortar business wouldn't even exist without AI" --> meant they used Claude to help them search for lawyers in their local area and summarize permits they needed

"I'm now doing the work of 10 product managers" --> actually meant they create draft PRD's. Did not mention firing 10 PMs

"I launched an entire product line this weekend" --> meant they created a website with a sign up, and it shows them a single javascript page, no customers

"I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF

browningstreet•53m ago
Going viral on X is the current replacement for selling courses for {daytrading, Amazon FBA, crypto}.

The content of the tweets isn't the thing... bull-posting or invoking Cunningham's Law is. X is the destination for formulaic posting, and some of those blue checkmarks are getting "reach" rev-share kickbacks.

giancarlostoro•20m ago
Yeah, if you get enough impressions, you get some revenue, so you don't need to sell any courses, just viral content. Which is why some (not ALL) exaggerate as suggested.
cadamsdotcom•48m ago
People say outrageous things when they’re follower farming.
deadbabe•43m ago
“I used AI to make a super profitable stock trading bot” —-> using fake money with historical data

“I used AI to make an entire NES emulator in an afternoon!” —-> a project that has been done hundreds of times and posted all over github with plenty of references

Fazebooking•34m ago
I vibe coded a few ideas I'd had in my mind for a while. My basic stack is HTML, single page, local storage and lightweight JS.

It is really good at doing this.

Those ideas are things like UI experiments or small tools that help me do stuff.

It's also super great at ELI5'ing anything.

cmdtab•27m ago
Pretty much every non-political/celeb X account with 5K+ followers is a paid influencer shill lol.

Welcome to the internet

lostmsu•22m ago
"I used AI to write a GPU-only MoE forward and backward pass to supplement the manual implementation in PyTorch that only supported a few specific GPUs" -> https://github.com/lostmsu/grouped_mm_bf16 100% vibe coded.
giancarlostoro•21m ago
> "I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF

There was a story years ago about someone who put hundreds of novels on Amazon; in aggregate they pulled in decent money. I wonder if someone's doing the same but with ChatGPT instead.

bogwog•16m ago
Afaik, the way people are making money in this space is selling courses that teach you how to sell mass-produced AI slop on Amazon, rather than actually doing it themselves.
Legend2440•55m ago
Idk man, all AI discussion feels like a waste of effort.

“yes it will”, “no it won’t” - nobody really knows, it's just a bunch of extremely opinionated people rehashing the same tired arguments across 800 comments per thread.

There’s no point in talking about it anymore, just wait to see how it all turns out.

asadotzler•23m ago
It's not "yes it will" vs "no it won't", though. The discussion is "yes it does" vs "no it doesn't" (present tense). There's nothing wrong with guessing about the future, but lying about a present that is testable, and refusing to submit to that testing, is wrong.
asdff•11m ago
Even then nothing is learned. Every HN thread on AI coding: "I am using $model for writing software and it's great." "I am using $model for writing software and it sucks and will never be able to do it." 800 comments of that tit for tat, all in the present tense. Still nothing learned.

Doesn't help that no one talks about exactly what they are doing and exactly how they are doing it, because capitalism wins out over the kind of open technology discussion that's meant to uplift the species.

kinduff•52m ago
I think humans are proxying their value through what they can do with AI. It's like a domestication flex.
minimaxir•50m ago
There are two major reasons people don't show proof about the impact of agentic coding:

1) The prompts/pipelines pertain to proprietary IP that may or may not be allowed to be shown publicly.

2) The prompts/pipelines are boring and/or embarrassing, and showing them will dispel the myth that agentic coding is this mysterious magical process and open people up to dunking.

For example, in the case of #2, I recently published the prompts I used to create a terminal MIDI mixer (https://github.com/minimaxir/miditui/blob/main/agent_notes/P...) in the interest of transparency, but those prompts correctly indicate that I barely had an idea how MIDI mixing works, and in hindsight I was surprised I didn't get harassed for it. Given the contentious climate, I'm uncertain how often I will be open-sourcing my prompts going forward.

jacquesm•48m ago
You weren't harassed for it because (1) it is interesting and (2) you were not hiding the AI involvement and passing it off as your own.

The results (for me) are very much hit-and-miss and I still see it as a means of last resort rather than a reliable tool whose upsides and downsides I know. There is a pretty good chance you'll be wasting your time, and every now and then it really moves the needle. It is examples like yours that actually help to properly place the tool amongst the other options.

deng•46m ago
No. The main reasons are that

1) the code AI produces is full of problems, and if you show it, people will make fun of you, or

2) if you actually run the code as a service people can use, you'll immediately get hacked by people to prove that the code is full of problems.

tptacek•32m ago
You should go hack the Cloudflare Workers OAuth stuff then, right?
deng•8m ago
You seem to think I'm an AI coding hater or something. I'm not. I think these tools are incredibly useful and I use them daily. However, as described in the article, I am skeptical about stories where AI writes whole applications, SaaS products or game engines in a few hours and everything "just works". That is not my experience. The Cloudflare people also wrote that they carefully reviewed everything.
Fazebooking•31m ago
1) No one cares, if it works. No one cared before how your code looked, as long as you are not a known and well-used open source project.

2) There are plenty of services which do not require state or login and can't be hacked, so there are still plenty of use cases you can explore. But yes, I do agree that security for live production things is still the biggest worry. Let's be honest, though: if you do not have a real security person on your team, the shit out there is not secure anyway. Small companies do not know how to build securely.

dugidugout•27m ago
How are both of these not simply the second case they provided?
nojito•44m ago
Or 3) it's my competitive advantage to keep my successes close to my chest.
minimaxir•42m ago
That's 1, just reworded.
tobr•42m ago
Could you clarify that last paragraph for me? I’m not sure what ”contentious climate” is here. AI antihype? I don’t understand the connection to not being harassed for something, isn’t that a good thing rather than something that would make you uncertain if you want to share prompts in the future?
minimaxir•29m ago
"AI tech bro creates slop X because they don't understand how X actually works" is a common trope among the anti-AI crowd even on Hacker News that has only been increasing in recent months, and sharing prompts/pipelines provides strong evidence that can be pointed at for dunks. Sharing AI workflows is more likely to illicit this snark if the project breaks out of the AI bubble, though in the case of the AI boosters on X described as in the HN submission that's a feature due to how monetization works that platform. It's not something I want to encourage for my own projects, though.

There's also the lessons on the recent shitstorms in the gaming industry, with Sandfall about Expedition 33's use of GenAI and Larian's comments on GenAI with concept art, where both received massive backlash because they were transparent in interviews about how GenAI was (inconsequentially) used. The most likely consequence of those incidents is that game developers are less likely on their development pipelines.

habinero•12m ago
Counterpoint: If the tech was actually that good, nobody could dunk on it and anyone who tried would be mocked back.

If your hand is good, throw it down and let the haters weep. If you're scared to show your cards, you don't have a good hand and you're bluffing.

Wowfunhappy•21m ago
I'm fundamentally a hobbyist programmer, so I would have no problem sharing my process.

However, I'm not nearly organized enough to save all my prompts! I've tried to do it a few times for my own reference. The thing is, when I use Claude Code, I do a lot of:

- Going back and revising a part of the conversation and trying again—sometimes reverting the code changes, sometimes not.

- Stopping Claude partway through a change so I can make manual edits before I let Claude continue.

- Jumping between entirely different conversation histories with different context.

And so on. I could meticulously document every action, but I find it gets in the way of experimentation. It's not entirely different from trying to write down every intermediate change you make in your code editor (between actual VCS commits). I guess I could record my screen, but (A) I promise you don't actually want to watch me fiddle with Claude for hours and (B) it would make me too self-conscious.

It would be very cool to have a tool that goes through Claude's logs and exports some kind of timeline in a human-readable format, but I would need it to be automated.
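
Something like the sketch below might be a starting point. It assumes Claude Code keeps per-session JSONL logs under ~/.claude/projects/ with timestamp/type/message fields; the actual path and schema may differ between versions, so treat it as a rough outline rather than a finished tool.

```python
import glob
import json
import os

# Walk per-project session logs and print a compact, human-readable timeline.
log_dir = os.path.expanduser("~/.claude/projects")
for path in sorted(glob.glob(os.path.join(log_dir, "*", "*.jsonl"))):
    print(f"\n=== {os.path.basename(path)} ===")
    with open(path) as f:
        for line in f:
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines
            role = entry.get("type", "?")
            ts = entry.get("timestamp", "")
            msg = entry.get("message")
            content = msg.get("content", "") if isinstance(msg, dict) else ""
            if isinstance(content, list):  # content blocks (text, tool use, ...)
                content = " ".join(c.get("text", "") for c in content
                                   if isinstance(c, dict))
            text = " ".join(str(content).split())
            if text:
                print(f"[{ts}] {role}: {text[:120]}")
```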

---

Also, if you can't tell from the above, my use of Claude is very far from "type a prompt, get a finished program." I do a lot of work in order to get useful output. I happen to really enjoy coding this way, and I've gotten great results, but it's not like I'm typing a prompt and then taking a nap.

Hoasi•16m ago
> The prompts/pipelines are boring and/or embarrassing and showing them will dispel the myth that agentic coding is this mysterious magical process

You nailed it. Prompting is dull and self-evident. Sure, you need basic skills to formulate a request. But it's not a science and has nothing to do with engineering.

Wowfunhappy•9m ago
It's really quite frustrating how much "prompting" seems to be random luck. Every time I think I've learned something about what makes for a good/bad prompt, I find it doesn't carry over into other situations.

A roulette wheel can still be useful if enough of the outcomes are wins, but that doesn't make it something you can get better at...

Edit: I don't find it "dull" though, possibly because I like writing. And, I suppose there is some skill to being able to describe what you want precisely enough for the AI to (hopefully) follow your instructions.

caditinpiscinam•42m ago
Doesn't the existence of consumer products like ChatGPT indicate that LLMs aren't able to do human-level work? If OpenAI really had a digital workforce with the capabilities of ~100k programmers/scientists/writers/lawyers/doctors etc., wouldn't the most profitable move be to utilize those "workers" directly, rather than renting out their skills piecemeal?
bluGill•31m ago
That depends on where the real value is. The sure way to get rich is selling pickaxes to gold miners. However, you would be even richer if you figured out where the gold really was and mined that exact location.

Of course you can also get rich selling scams.

HellDunkel•36m ago
This is a strange phenomenon where people get excited by the mere fact that someone else is excited by something that is not directly visible to the spectator. It works well in horror movies and, as it seems, with AI hype.
Fazebooking•36m ago
It's still not just hype; it's still crazy what is possible today, and we still have no clarity at all on whether this progress continues at the current pace, with the implication that, if it does continue, it has major implications.

My wife, who has no clue about coding at all, built a very basic Android app with nothing but ChatGPT's guidance. She would never have been able to do this in 5 hours or so without that guidance. I DID NOT HELP HER at all.

I'm 'vibecoding' small stuff for sure, non-critical things for sure, but let's be honest: I'm transforming a handful of sentences and requirements into real working code, today.

Gemini 3 and Claude Opus 4.5 definitely feel better than their previous versions.

Do they still fail? Yeah, for sure, but that's not the point.

The industry continues to progress on every single aspect of this: tooling like Claude CLI, Gemini CLI, IntelliJ integration, etc., context length, compute, inference time, quality, depth of thinking, and so on. There is no plateau visible at all right now.

And it's not just LLMs, it's the whole ecosystem of machine learning: highly efficient weather models from Google, AlphaFold, AlphaZero, robotics movement, environment detection, image segmentation, ...

And the power of Claude, for example, you will only get by learning how to use it: telling it your coding style, your expectations regarding tests, etc. We often assume that an LLM should just be the magic 10x-programmer colleague, but it's everything and nothing. If you don't communicate well enough, it is not helpful.

And LLMs are not just good at coding; they're great at reformulating emails, analysing error messages, writing basic SVG files, explaining Kubernetes cluster status, being a friend for some people (see character.ai), explaining research papers, finding research, summarizing text; the list is way too long.

In 2026 alone, so many new datacenters will go live, adding so much more compute, that the research will keep getting faster and more efficient.

There is also no current bubble to burst. Google fights against Microsoft, Anthropic and co., while on a global level the USA competes with China and the EU on this technology. The richest companies on the planet are investing in this tech, and they did not do this with bitcoin, because they understood that bitcoin is stupid. But AI is not stupid.

Or rather, machine learning is not stupid.

Do not underestimate the current state of the AI tools we have; do not underestimate the speed, the continuous progress and the potential exponential growth of this.

My timespan expectation for obvious advancements in AI is 5-15 years. Experts in the field already predict 2027-2030.

But to reiterate: a few years ago, no one would have had a good idea of how we could transform basic text into complex code in such a robust way, with such diverse input (different languages, missing specs, ...). No one. Not even 'just generating a website'.

beepbooptheory•13m ago
Even if we assume for a moment everything you are saying is true and/or reasonable, can't you see how comments like these paint your position here in a bad light? It just reads a little desperate!
shermantanktop•29m ago
I’ve taken to calling this (in my mind) the Age of the Sycophants. In politics, in corporate life, in technology and in social media, many people are building a public life around saying things that others want to hear, with demonstrably zero relationship to truth or even credibility.
DotaFan•25m ago
I think this "trend" is due to AI companies paying (in some form) the influencers to promote AI. Simple as that.
kfarr•24m ago
Like everything in LLM land, it's all about the prompt and agent pipeline. As others say below, these people are experts in their domain. Their prompts are essentially a form of codifying their own knowledge, as in Rakyll's and Galen's examples, to achieve specific outcomes based on years, maybe even decades, of work in the problem domain. It's no surprise that their prompts, when ingested by an LLM, yield useful outputs, but it might not tell us much about the true native capability of a given AI system.
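
As a hypothetical illustration of what "codifying knowledge into the prompt" can look like (the constraints below are invented, not taken from Rakyll's or Galen's examples), the hard part is knowing which constraints to write down, not the wrapper itself:

```python
# Invented domain constraints standing in for years of hard-won expertise.
DOMAIN_CONSTRAINTS = [
    "Target PostgreSQL 15; do not use vendor-specific extensions.",
    "All timestamps are stored in UTC; convert at the presentation layer only.",
    "Migrations must stay backward compatible for one release (expand/contract).",
]

def build_prompt(task: str, constraints: list[str] = DOMAIN_CONSTRAINTS) -> str:
    """Wrap a task description with the expert's non-negotiable constraints."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return f"{task}\n\nFollow these non-negotiable constraints:\n{rules}"

print(build_prompt("Write a migration adding a last_seen_at column to users."))
```
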
keyle•18m ago
Great article. This needs to be framed. The whole "trust me bro" shock-and-awe of social media is a non-stop assault these days. You can't open a feed without seeing these claims promoted front and centre, without any proof.

If AI was so good today, why isn't there an explosion of successful products? All we see are these half-baked "zomg so good bro!" examples that are technically impressive but decidedly incomplete, or really just proofs of concept.

I'm not saying LLMs aren't useful, but they're currently completely misrepresented.

Hype sells clicks, not value. But, whatever floats the investors' boat...

tossandthrow•15m ago
Influencers generally don't get to me.

Sitting for two hours with an AI agent developing end-to-end products does.

ankit219•14m ago
It's a strange phenomenon. You want to call out the BS, but then you are just giving them engagement and a boost. You want to stay away, but there is a sort of confluence where these guys tend to ride on each other's posts and boost those posts anyway. If you ask questions, they very rarely answer, and when they do, it takes one question to unearth that it was really the prompt or the skill. E.g., Hugging Face people post about Claude finetuning models. How? They gave it everything in a skill file, and Claude knew what scripts to write. Tinker is trying the same strategy. (Yes, it's impressive that Claude could finetune, but not as impressive as the original claim that made me pay attention to the post.)

It does not matter if they get the details wrong; it just needs to be vague enough, and exciting enough. In fact, vagueness and not sharing the code signals that they are doing something important or are 'in the know' about something they cannot share. The incentives are totally inverted.

arjie•13m ago
If you don't get the results you don't get the results. If someone else can use this tool to get the results, they'll out-compete you. If they can't, then they've wasted time and you'll out-compete them. I see these influencer guys as idea-generators. It's super-cheap to test out some of these theories: e.g. how well Claude can do 3D modeling was an idea I wanted to test and I did and it's pretty good; I wanted to test Claude as a debugging aid and it's a huge help for me.

But I would never sit down to convince a person who is not a friend. If someone wanted me to do that, I'd expect to charge them for it. So the guys who are doing it for free are either peddling bullshit or they have some other unspecified objective and no one likes that.

fasouto•12m ago
The article nails the pattern, but I think it's fundamentally an incentives problem.

We're drowning in tweets, posts, news... (way more than anyone can reasonably consume). So what rises to the top? The most dramatic, attention-grabbing claims. "I built in 1 hour what took a team months" gets 10k retweets. "I used AI to speed up a well-scoped prototype after weeks of architectural thinking" gets...crickets

Social platforms are optimized for engagement, not accuracy. The clarification thread will always get a fraction of the reach of the original hype. And the people posting know this.

The frustrating part is there's no easy fix. Calling it out (like this article does) gets almost no attention. And the nuanced follow-up never catches up with the viral tweet.

fuefhafj•6m ago
A recent favorite of mine is simonw, usually unable to stop hyping LLMs, suddenly forgetting they exist in order to rhetorically "win" an argument:

> If you're confident that you know how to securely configure and use Wireguard across multiple devices then great

https://news.ycombinator.com/item?id=46581183

What happened to your overconfidence in LLMs' ability to help people without previous experience do something they were unable to do before?