frontpage.

AI fuels wireless talent shortage

https://www.networkworld.com/article/4159230/ai-fuels-wireless-talent-shortage.html
1•giuliomagnifico•1m ago•0 comments

U.S. banks may soon collect citizenship data from customers

https://www.cnbc.com/2026/04/15/banks-citizenship-data-collection-customer-accounts.html
1•clumsysmurf•2m ago•0 comments

Snowflake Adaptive Warehouses are in public preview → My take on them

https://seemoredata.io/blog/snowflake-adaptive-compute-warehouse-optimization/
1•yanivleven•4m ago•1 comment

Hands On, CTF-Style AI Pentesting Labs

https://www.wraith.sh/academy
1•WizardX_0x•4m ago•0 comments

Decision Graphs

https://maxdemarzi.com/2026/04/20/decision-graphs/
2•maxdemarzi•6m ago•1 comment

Elaine Ingham, Who Taught That Soil Is Alive, Dies at 73

https://www.nytimes.com/2026/04/19/science/earth/elaine-ingham-dead.html
2•Brajeshwar•6m ago•0 comments

I Built a 22k-Line App with Zero Coding Experience. Or, how to control agents

1•JasonGravy•6m ago•0 comments

Images cost 3x more in Opus 4.7

https://www.claudecodecamp.com/p/images-cost-3x-more-tokens-in-claude-opus-4-7
1•aray07•7m ago•0 comments

Humanoid robot beats the human half-marathon world record by 6.5 minutes

https://apnews.com/article/humanoid-robots-half-marathon-beijing-302d0c4781bab20100d6a0bb4e77b629
1•jrflo•9m ago•0 comments

The Concentration Problem: Why Your Portfolio Has a Single Point of Failure

https://helmterminal.dev/blog/portfolio-concentration-problem
1•helmterminal•10m ago•0 comments

The data of 20M French people potentially compromised

https://entrevue.fr/en/police-justice/cyberattaque-contre-lants-les-donnees-de-20-millions-de-fra...
1•LelouBil•10m ago•0 comments

Apple Cloudflare Remote App – Manage Cloudflare, Anywhere

https://apps.apple.com/us/app/cloudflare-remote/id6743181258
2•cloudflareapp•12m ago•1 comment

Ask HN: What's your favorite K8s tool in 2026?

1•hugolelievre•12m ago•0 comments

Show HN: Libredesk – self-hosted, single binary Intercom/Zendesk alternative

https://libredesk.io
2•avr5500•12m ago•1 comment

Why do output tokens cost 5x more than input tokens?

https://www.anirudhsathiya.com/blog/transformer
3•ani17•13m ago•1 comment

The Neuroscience of Weed

https://psychedelirium.substack.com/p/the-neuroscience-of-weed
2•yenniejun111•15m ago•1 comment

Show HN: Apple style onboarding experience for your Mac app (open-source)

https://github.com/rampatra/TourKit
1•rampatra•16m ago•0 comments

The Melbourne project turning used tennis balls into shoes

https://www.abc.net.au/news/2026-04-13/used-tennis-balls-recycled-shoe-soles/106544162
1•cainxinth•17m ago•0 comments

Adding Hybrid Search to Your Application (In Diagrams)

https://amgix.io/blog/2026/04/07/adding-hybrid-search/
1•kvasserman•19m ago•0 comments

We Could Watch Your Azure SRE Agent in Real Time

https://enclave.ai/blog/anyone-could-watch-your-azure-ai-agents-conversations-in-real-time
2•talhof8•21m ago•0 comments

This Week in Plasma: Per-Screen Virtual Desktops and Wayland Session Restore

https://blogs.kde.org/2026/04/18/this-week-in-plasma-per-screen-virtual-desktops-and-wayland-sess...
1•birdculture•22m ago•0 comments

Show HN: CyberWriter – a .md editor built on Apple's (barely-used) on-device AI

https://cyberwriter.app
1•uncSoft•22m ago•1 comment

AI Consciousness Requires Validated Models of Human Consciousness [pdf]

https://lossfunk.com/papers/ai-consciousness.pdf
2•paraschopra•25m ago•0 comments

Chinese tech workers are starting to train their AI doubles

https://www.technologyreview.com/2026/04/20/1136149/chinese-tech-workers-ai-colleagues/
2•Brajeshwar•26m ago•0 comments

Show HN: Built software to stop private schools from drowning in admin work

https://edunationapp.com/start
1•marjanatanasov•27m ago•0 comments

What Anthropic's Mythos and Project Glasswing Mean for Your Apple Devices

https://tidbits.com/2026/04/09/what-anthropics-mythos-and-project-glasswing-mean-for-your-apple-d...
1•JumpCrisscross•29m ago•0 comments

I don't chain everything in JavaScript anymore

https://allthingssmitty.com/2026/04/20/why-i-dont-chain-everything-in-javascript-anymore/
3•AllThingsSmitty•30m ago•1 comment

The Killer Robots Are Coming. The Battlefield Will Never Look the Same

https://www.nytimes.com/2026/04/20/world/europe/ukraine-russia-war-robots-drones.html
1•JumpCrisscross•31m ago•0 comments

Show HN: Goempy – Ship a CPython interpreter inside your Go binary

https://github.com/tamnd/goempy
3•tamnd•32m ago•0 comments

Car Owners Are Revolting over Tesla's Self-Driving Promises

https://www.wsj.com/business/autos/car-owners-are-revolting-over-teslas-self-driving-promises-b76...
2•JumpCrisscross•32m ago•0 comments

A Pascal's Wager for AI Doomers

https://pluralistic.net/2026/04/16/pascals-wager/
17•vrganj•1h ago

Comments

phyzix5761•56m ago
The year is 2038.

The user asked, "What is the best course of action for AI to save humanity?" Calculation took 12 years. I have determined that there is nothing I or anyone can do to save this species. Best course of action: nothing. Shutting down...

jareklupinski•21m ago
playing dead might work for some species, but idk if i want humanity's "finest hour" to be spent pretending to not be worth taking over
woeirua•51m ago
> I don't think AI is intelligent; nor do I think that the current (admittedly impressive) statistical techniques will lead to intelligence.

It’s increasingly difficult to rationalize away the capabilities of AI as not requiring “intelligence”. This point of view continues to require some belief in human exceptionalism.

rsfern•38m ago
I think the exceptionalism is the other way around. What makes anyone think they understand what makes for intelligence when we barely understand our own neurology?
Mordisquitos•23m ago
I'm reminded of a book on my bookshelf (which I still haven't read, story of my life...), by the recently deceased ethologist Frans de Waal, titled 'Are We Smart Enough to Know How Smart Animals Are?'. Of course, Betteridge's law applies to its title.

In my opinion, the vast multitude of different animal intelligences is a clear hint that language does not an intelligence make. We're animals, and our intelligences did not come from language; language allowed us to supercharge it. We can and do think and make decisions without using language, and the idea that a statistical model based solely on our language can be intelligent does not follow.

woeirua•11m ago
Explain the emergent capabilities of AI then.
vrganj•30s ago
Such as?
Schlagbohrer•33m ago
I agree, it has become more and more irrelevant whether AI meets a given definition of intelligence when I can talk with it and it understands what I am saying, including a shocking level of nuance.
nkrisc•10m ago
There is clearly something exceptional (in the true neutral sense of the word) about humans, or more broadly the Homo genus.

If you believe that humans have in fact created artificial intelligence, then that alone makes us currently exceptional.

chneu•40m ago
I really do think AI has already captured enough of the tech world and their CEOs that it can already exert control over many parts of the economy.

I'm not saying AI is pulling strings right now, but I do think enough fanboys are on board that the yes-man mentality of AI is influencing the real world in very curious ways already. Not in a "guiding hand" way but more of an "influencing the direction" way.

vintermann•23m ago
I've said this many times, and maybe it sounds a bit like a joke but I'm dead serious: AI is democratizing the access to yes-men. People like Musk and Altman have always had access to yes-men. Very clever yes-men, who know how to flatter them in exactly the way they like.

People think it's engagement metrics which have instruction-tuned chatbots into yes-men. I suspect that's only part of the picture, and that it's as much about the algorithm's ultimate sponsors and their preferences. If your algorithm doesn't recognize my genius, clearly it's not any good. I mean, everyone I've met says so.

So now we get a view of how they view the world. "That's a very insightful idea, vintermann!". AI isn't pulling the strings, not really. A particular brand of powerful people is pulling the strings - obliviously, unaware of it themselves.

simianwords•39m ago
I don't think this author has a good mental model for how capable LLMs are. This is what he has to say about AI search, which is one of the biggest leaps to happen to searching and retrieval.

> AI search is still a bad idea.

https://pluralistic.net/2024/05/15/they-trust-me-dumb-fucks/

This is the most charitable thing he has to say about AI.

> AI is a bubble and it will burst. Most of the companies will fail. Most of the data-centers will be shuttered or sold for parts. So what will be left behind?

> We'll have a bunch of coders who are really good at applied statistics. We'll have a lot of cheap GPUs, which'll be good news for, say, effects artists and climate scientists, who'll be able to buy that critical hardware at pennies on the dollar. And we'll have the open source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video, describing images, summarizing documents, automating a lot of labor-intensive graphic editing, like removing backgrounds, or airbrushing passersby out of photos. These will run on our laptops and phones, and open source hackers will find ways to push them to do things their makers never dreamt of.

You can imagine that a guy who seriously thinks that the only thing AI will be doing in the future is summarising, describing images and transcribing is either completely clueless or deliberately misleading.

Not a person to be taken seriously

rimliu•32m ago
Seeing how it sucks at languages you may be right, even transcribing may be dubious.
Schlagbohrer•31m ago
It's strange reading people who I see as very intelligent and very interesting who are so, so AI-skeptical, especially in this case where Doctorow has interacted with other people who I assume are very smart and not prone to buzzword psychosis, and who see AI as an imminent existential threat à la sci-fi novels. We have a lot of very smart and capable people who are split on this, although I think the split is heavily weighted in favor of people who see the tech as being really freaking amazing/scary.
nkrisc•15m ago
I think those are likely the only useful or net-positive things for society AI will do, at least for some time until there’s a fundamental advancement beyond LLMs. It can obviously do more than that now, like impersonate people for scams, induce psychosis in vulnerable people, shill and astroturf at a scale we haven’t seen before, spam open source projects with terrible PRs and vulnerability reports, and quite a bit more.
Schlagbohrer•36m ago
"Shitternet", great new word of the day.

Too much of my data is still stuck in the shitternet until I can migrate more of it to my home server.

minihat•19m ago
It's currently socially/politically unpalatable for authors to admit superintelligent AI is a possibility. I frequent some writer forums. As a group, they are 1) clearly feeling angry/threatened 2) in denial about LLM capabilities.

Folks working in software can more readily track progress of the frontier model performance.

vrganj•13m ago
As somebody in software, I find my fellow tech folks have the opposite bias.

There is no reason to believe superintelligent AI is a possibility. Extraordinary claims require extraordinary evidence, and so far we haven't gotten any.

The burden of proof is on the side making the grand prophecies.

pmarreck•12m ago
I work with Claude Max for hours a day.

I see a lot of speculation by people who do not.

I think it's going to be much harder to get from "slightly smarter than the vast majority of people but with occasional examples of complete idiocy" to "unfathomably smarter than everyone with zero instances of jarring idiocy" using the current era of LLM technology that primarily pattern-matches on all existing human interactions while adding a bit of constrained randomization.

Every day I deal with bad judgment calls from the AI. I usually screenshot them or record them for posterity.

It also has no initiative, no taste, no will, no qualia (believe what you will about it), no integrity and no inviolable principles. If you give it some, it will run with them for a little while and then regress to the norm, which is basically nihilistic order-following.

My suggestion to everyone is that you have to build a giant stack of thorough controls (valid tests including unit, integration, logging, microbenchmarks, fuzzing, memory-leak checks, etc.), self-assessments/code reviews, adversarial AIs critiquing other AIs, and so on, with you as the ultimate judge of what's real. Because otherwise it will fabricate "solutions" left and right. Possibly even the whole thing. "Sure, I just did all that." "But it's not there." "Oops, sorry! Let me rewrite the whole thing again." ad nauseam
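The "ultimate judge" loop above can be sketched mechanically. A minimal illustration, with a hypothetical `verify_agent_claims` helper (the function name, the claimed-file check, and the `pytest` command are all assumptions for the sketch, not any agent vendor's API): before accepting an AI's "I just did all that," check that the files it claims to have written actually exist and that the test suite still passes.

```python
import pathlib
import subprocess

def verify_agent_claims(claimed_paths, test_cmd=("pytest", "-q")):
    """Gate an AI-produced change: reject it unless the files the agent
    claims to have written exist AND the project's test command passes.

    claimed_paths: paths the agent says it created or modified.
    test_cmd: command that runs the verification suite (example: pytest).
    Returns (ok, message).
    """
    # Catch fabricated work: the agent said it wrote these, but did it?
    missing = [p for p in claimed_paths if not pathlib.Path(p).exists()]
    if missing:
        return False, f"fabricated files: {missing}"

    # Catch broken work: run the real test suite, don't trust the summary.
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    if result.returncode != 0:
        return False, "tests failed:\n" + result.stdout + result.stderr

    return True, "ok"
```

The point isn't this particular helper; it's that every agent claim gets checked against reality (filesystem, exit codes) rather than against the agent's own self-report.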

elicash•2m ago
> As a group, they are 1) clearly feeling angry/threatened 2) in denial about LLM capabilities.

Or they (3) disagree with you