frontpage.

South Korean crypto firm accidentally sends $44B in bitcoins to users

https://www.reuters.com/world/asia-pacific/crypto-firm-accidentally-sends-44-billion-bitcoins-use...
1•layer8•36s ago•0 comments

Apache Poison Fountain

https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fcee1d5
1•atomic128•2m ago•0 comments

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•2m ago•1 comment

Google in Your Terminal

https://gogcli.sh/
1•johlo•4m ago•0 comments

Shannon: Claude Code for Pen Testing

https://github.com/KeygraphHQ/shannon
1•hendler•4m ago•0 comments

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
1•Bender•9m ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•9m ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•10m ago•0 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•gmays•10m ago•0 comments

Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•11m ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empty user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•11m ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4 GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•12m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
4•Bender•12m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•14m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•14m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•17m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•19m ago•0 comments

Show HN: Mirror Parliament where users vote on top of politicians and draft laws

https://github.com/fokdelafons/lustra
1•fokdelafons•19m ago•1 comment

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

1•Chance-Device•21m ago•0 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
1•ColinWright•24m ago•0 comments

Jim Fan calls pixels the ultimate motor controller

https://robotsandstartups.substack.com/p/humanoids-platform-urdf-kitchen-nvidias
1•robotlaunch•27m ago•0 comments

Exploring a Modern SMPTE 2110 Broadcast Truck with My Dad

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
1•HotGarbage•27m ago•0 comments

AI UX Playground: Real-world examples of AI interaction design

https://www.aiuxplayground.com/
1•javiercr•28m ago•0 comments

The Field Guide to Design Futures

https://designfutures.guide/
1•andyjohnson0•29m ago•0 comments

The Other Leverage in Software and AI

https://tomtunguz.com/the-other-leverage-in-software-and-ai/
1•gmays•30m ago•0 comments

AUR malware scanner written in Rust

https://github.com/Sohimaster/traur
3•sohimaster•33m ago•1 comment

Free FFmpeg API [video]

https://www.youtube.com/watch?v=6RAuSVa4MLI
3•harshalone•33m ago•1 comment

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
2•PaulHoule•38m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•38m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•39m ago•0 comments

99% of AI Startups Will Be Dead by 2026 – Here's Why

https://skooloflife.medium.com/99-of-ai-startups-will-be-dead-by-2026-heres-why-bfc974edd968
26•georgehill•8mo ago

Comments

yoouareperfect•8mo ago
OpenAI owns the intelligence until it doesn't and the open-source models are good enough.
dvfjsdhgfv•8mo ago
> And they’re charging 50–100/month to do what anyone could replicate for pennies. It’s not just overpriced — it’s dishonest. The entire business model relies on the user not knowing how simple it really is.

But this is the general SaaS model: wrap things that are done by lower-level software such as FFmpeg and expose them in a nice GUI, ready for use by people who are not technical.

So what can change in the example above is the markup going down, not the SaaS service going away entirely.

isoprophlex•8mo ago
If vibe coding works well enough, maybe the entire SaaS industry can be disrupted out of existence.

We'll replace bland, uninspired, rent-seeking but convenient ffmpeg-as-a-service SaaS tools at a 100x markup with automatically generated, vibe-coded tools you can let an AI produce and host in some centralized cloud location at a 10x markup, thus advancing the inexorable process of disintermediation by technology, turning everyone into a consumer and cutting out middlemen everywhere.

Until it's only you and Jensen Huang sitting on top of a pile of your cash, shitting out NVIDIA cards like the sandworms in Dune shit out Spice.

skywhopper•8mo ago
Nah, I expect vibe-coding tooling will soon enough be directing users to use “partner” services and away from free tools. The non-techie users won’t know ffmpeg exists, they’ll just know about the video-oriented SaaS subscriptions the chatbot suggests when they ask.
jopsen•8mo ago
Yeah, you can also make the argument that many SaaS things are just Postgres with some templates :)
isoprophlex•8mo ago
Rant incoming. I know it's bad form to critique anything but the content... but I wish the story weren't padded with those bland GenAI eyesores of images. It's a dumb kneejerk reaction I observe in myself, but the presence of generated graphics anywhere immediately turns me off.

GenAI-padded blog post? Guess your content isn't interesting enough. GenAI album cover? The artist must be equally lazy at making music. GenAI graphics on some flyer someone hands me? Please, you could have just slapped nothing but text on there and let your content, whatever it is, do the talking.

I know it's there to "make things pop" or whatever, but I'm so put off by the ubiquitous blandness, the samey high contrasts, the subtle artifacts... Milking people's attention is the new smoking, or at least it should be, IMO. Especially when it's done in the most aggravatingly bland style, that of the GenAI image generator.

skywhopper•8mo ago
Pretty clear to me the article text itself was largely LLM generated as well. Incredibly repetitive and built on the same basic points over and over. List-heavy. There is a good, if not particularly insightful, article idea here, but this is a very poor version of it.
glimshe•8mo ago
Most of the negative reactions to GenAI graphical content are to images used "as is". I've seen artists using GenAI content who process, compose and enhance what comes out of the AI for truly striking results.

We'll soon have artists whose skills will be more similar to editors than content generators. People who will be good at selecting the good parts of AI content while cutting out the bad ones.

conartist6•8mo ago
Still turns me off. If you can't do art, stop fronting like you can.

Art is about having something to say. If your concern in writing is style over content, that says to me your goal is to hack my brain, not help me think.

dingnuts•8mo ago
> Please, could have just slapped nothing but text on there & let your content, whatever it is, do the talking.

Maybe this explains some of the success of brat by Charli xcx last summer

mosura•8mo ago
What does this guy look at on Instagram to get a feed like that?

It sounds like he sees what he does because that is all he looks for.

skywhopper•8mo ago
Yeah, folks don’t realize they are telling on themselves when they complain about the content of their Insta or TikTok feeds.
louthy•8mo ago
Exactly, my feed on Instagram is nothing but boxer dogs.

I hate Instagram, but love boxer dogs. So when I'm forced to use Instagram, I make sure I click on nothing other than boxer dog videos and pictures.

It's remarkable how quickly the algorithm switches to your preferences. If you engage, it will come back to you tenfold.

sitzkrieg•8mo ago
Sure, but sometimes it will randomly flood you with some garbage topic. Maybe it's an escape hatch for low-engagement accounts, but every time I (accidentally) see the search or Reels list, it's usually some concerted normie theme.
christina97•8mo ago
We might well be at the cusp of a huge bubble caused by investor hubris, but this article hasn’t convinced me.

The difference between the stated podcast app and the dot-com bubble is that one is making serious revenue at almost 100% profit, whereas the other did not even have a revenue model.

Also I think everyone knows at this point that foundation models are a commodity and not a particularly profitable business.

dvfjsdhgfv•8mo ago
> Wrappers rely on OpenAI. OpenAI relies on Microsoft. Microsoft needs NVIDIA. NVIDIA owns the chips that power it all

So this is the model that investors see. The reality is quite different. People and orgs are not stupid and want to avoid vendor lock-in.

So in reality:

* Wrappers don't rely only on OpenAI. In fact, in order to be competitive, they have to avoid OpenAI because it's terribly expensive. If they can get away with other models, the savings can be enormous, as some of these can be 10x cheaper.

* Local models are a thing. You don't need proprietary models and API calls at all for certain uses. And these models get better and better each year.

* Nvidia is still the dominant player and this won't change in the coming years, but AMD is really making huge progress here. I don't mention TPUs as they seem to be largely Google-specific.

* Microsoft is not in any special position here. I've implemented OpenAI API integrations with various API gateways, and it's by no means something tied to Azure only.

* OpenAI's business model is based on faith at this moment. This has been debated ad nauseam, so it makes no sense to repeat all the arguments here, but the fact is that they used to be the only game in town, then the leader, and now are neither, yet still claim to be.

delichon•8mo ago
> Local models are a thing. You don't need proprietary models and API calls at all for certain uses. And these models get better and better each year.

They are getting better so fast that I'm considering building a business that depends on much lower-cost LLM inference, betting years of effort on it.

But the bet is also that the proprietary models won't run away with faster improvements that make local models uncompetitive even while they improve. Can the local models keep up? They seem to be closing the gap now. Is that the rule or an artifact of the early development phase?

The safer plan may be to pass the inference cost through to the user and let them pick premium or budget models according to their need almost per request, as Zed editor does now.

largbae•8mo ago
It might not matter that proprietary models stay ahead of local, as long as the local models are strong enough for your use case.
delichon•8mo ago
The use case is structuring arbitrary natural language, e.g. triple extraction. That seems to benefit from as much context and intelligence as can be applied. "Good enough" remains a case by case judgment.
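For readers unfamiliar with the term, triple extraction turns free text into (subject, predicate, object) tuples. A toy rule-based sketch of the output shape (the verb list and pattern are illustrative placeholders; in practice an LLM would do the extraction, which is why context and intelligence matter):

```python
import re

# Toy triple extractor: pulls (subject, predicate, object) tuples out of
# simple "X <verb> Y" sentences. The verb list is a placeholder; a real
# pipeline would prompt an LLM to emit structured triples instead.
VERBS = r"is|are|owns|needs|relies on"

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    triples = []
    for sentence in re.split(r"[.!?]", text):
        # Match a crude "subject verb object" shape per sentence.
        m = re.match(rf"\s*(\w+(?:\s\w+)?)\s+({VERBS})\s+(.+)", sentence)
        if m:
            triples.append((m.group(1), m.group(2), m.group(3).strip()))
    return triples

print(extract_triples("OpenAI relies on Microsoft. Microsoft needs NVIDIA."))
# → [('OpenAI', 'relies on', 'Microsoft'), ('Microsoft', 'needs', 'NVIDIA')]
```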
hackingonempty•8mo ago
Outside of giant tech companies, there are many researchers with access to little more than a single consumer GPU card. They are highly motivated to reduce the cost of training and inference.
dvfjsdhgfv•8mo ago
> The safer plan may be to pass the inference cost through to the user and let them pick premium or budget models according to their need almost per request, as Zed editor does now.

I'm working on a solution right now that uses a local/cheap model first, does some validation, and if that validation fails, falls back to the expensive SOTA model. This is the most reasonable approach if you have a way to verify the results somehow (which might not be easy depending on the use case).
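The local-first cascade described here can be sketched as a small routing function; the model stubs and validation rule below are hypothetical stand-ins for real API calls:

```python
from typing import Callable

def cascade(prompt: str,
            cheap_model: Callable[[str], str],
            sota_model: Callable[[str], str],
            validate: Callable[[str], bool]) -> tuple[str, str]:
    """Try the cheap/local model first; escalate to the expensive SOTA
    model only when validation of the cheap answer fails."""
    answer = cheap_model(prompt)
    if validate(answer):
        return answer, "cheap"
    return sota_model(prompt), "sota"

# Stubs standing in for real model calls (hypothetical):
cheap = lambda p: "maybe"
sota = lambda p: "definitely"
looks_valid = lambda a: a == "definitely"  # placeholder validation rule

print(cascade("structure this text ...", cheap, sota, looks_valid))
# → ('definitely', 'sota')
```

The whole cost saving hinges on `validate` being cheap and reliable, which is exactly the caveat above.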

stranded22•8mo ago
Whilst I liked some of the article, I got very bored with the structure, and after about halfway I skimmed it.

If you are going to write about AI companies going extinct next year, could you please write it without the use of AI? It turned very formulaic.

And the fact it was calling something a scam because it was packaged up? That's the same as anything that's packaged - you may as well buy six apples and take them home to wash and cut rather than the prepackaged, pre-cut ones.

Some thought-provoking ideas though - spoiled by the link to get early access to a local AI

insane_dreamer•8mo ago
Not gonna sign up for a Medium account just to read this