frontpage.

Show HN: I made a list of free stuff for college founders

https://www.buildincollege.com
1•createdbymason•6m ago•0 comments

On-device AI link previews in Firefox

https://blog.mozilla.org/en/firefox/firefox-ai/ai-link-previews-firefox/
1•sthottingal•12m ago•0 comments

Can an AI model predict perfectly and still have a terrible world model?

https://twitter.com/keyonV/status/1943730486280331460
1•adharmad•19m ago•0 comments

Why Hoon?

https://docs.urbit.org/hoon/why-hoon
1•jm3•23m ago•1 comments

Moss Medicines: The Next Revolution in Biotech?

https://www.the-scientist.com/moss-medicines-the-next-revolution-in-biotech-73131
1•Gaishan•27m ago•0 comments

Scientists hiding AI text prompts in academic papers to receive positive reviews

https://www.theguardian.com/technology/2025/jul/14/scientists-reportedly-hiding-ai-text-prompts-in-academic-papers-to-receive-positive-peer-reviews
2•athousandsteps•31m ago•1 comments

Essential C# MCP Workshop: Empowering AI Agents by Medhat Elmasry [video]

https://www.youtube.com/watch?v=d78yuuez5UQ
2•brisbane-dotnet•32m ago•1 comments

New Navy Device Learns by Doing (1958)

https://www.nytimes.com/1958/07/08/archives/new-navy-device-learns-by-doing-psychologist-shows-embryo-of.html
1•deterministic•34m ago•1 comments

Elon Musk did not found Tesla

https://nerds.xyz/2025/07/elon-musk-did-not-found-tesla/
5•BeauNer•43m ago•3 comments

Why is the Federal Reserve independent, and what does that mean in practice?

https://www.brookings.edu/articles/why-is-the-federal-reserve-independent-and-what-does-that-mean-in-practice/
2•mooreds•44m ago•0 comments

Land Your Dream Job with Confidence

https://careertrackr.io/
1•mooreds•45m ago•0 comments

Getting Started with Vector Search [pdf]

https://media.pragprog.com/titles/bgvector/start.pdf
1•mooreds•45m ago•0 comments

AI Video API

https://www.cqtai.com/en
1•jack00781•52m ago•1 comments

IDF blames 'technical error' after children collecting water killed in strike

https://news.sky.com/story/idf-blames-technical-error-after-gaza-officials-say-children-collecting-water-killed-in-strike-13396138
5•mhga•56m ago•0 comments

Bayeux Tapestry Will Return to the U.K. In 950 Years

https://news.artnet.com/art-world/bayeux-tapestry-british-museum-loan-2665313
2•andsoitis•1h ago•1 comments

A guide on reading PostgreSQL query plans

https://www.prateekcodes.dev/postgresql-explain-analyze-deep-dive/
1•prateekkish•1h ago•0 comments

Thinking First, AI Second

https://deborahwrites.com/blog/thinking-first-ai-second/
3•handfuloflight•1h ago•0 comments

Olimex RP2350pc single-board PC combines a RP2350B chip with plenty of I/O

https://liliputing.com/olimex-rp2350pc-single-board-pc-combines-a-rp2350b-chip-with-plenty-of-i-o/
3•PaulHoule•1h ago•0 comments

Spicy – Generating Robust Parsers for Protocols and File Formats

https://docs.zeek.org/projects/spicy/en/latest/index.html
2•csb6•1h ago•0 comments

Context Engineering: Bringing Engineering Discipline to Prompts

https://addyo.substack.com/p/context-engineering-bringing-engineering
2•twapi•1h ago•0 comments

Writing a competitive BZip2 encoder in Ada from scratch in a few days – part 2

https://gautiersblog.blogspot.com/2025/07/writing-bzip2-encoder-in-ada-from.html
1•etrez•1h ago•0 comments

Feedback on AI Plugin Concept

1•demajh•1h ago•0 comments

Google Indonesia tangled up in $600M Chromebook corruption probe

https://www.theregister.com/2025/07/14/asia_tech_news_roundup/
4•defrost•1h ago•2 comments

Bun S3 Client

https://bun.com/docs/api/s3
3•nateb2022•1h ago•0 comments

Sea snot: The noxious plague troubling Istanbul's coast

https://www.bbc.com/future/article/20250710-the-summer-slime-threatening-turkish-beaches
4•littlexsparkee•1h ago•0 comments

Ask HN: What million dollar questions do you want answers for?

1•sandwichsphinx•1h ago•0 comments

Stellantis declares bankruptcy in China, with $1B in debts

https://www.italpassion.fr/en/stellantis/stellantis-declares-bankruptcy-in-china-with-1-billion-in-debts/
15•teleforce•1h ago•0 comments

As an app developer, how can you generate passive income?

1•ppkkK•1h ago•0 comments

Asmjit

https://asmjit.com/
2•andsoitis•1h ago•0 comments

James Webb, Hubble space telescopes face reduction in operations

https://www.astronomy.com/science/james-webb-hubble-space-telescopes-face-reduction-in-operations-over-funding-shortfalls/
15•geox•1h ago•5 comments

Bernie Sanders Reveals the AI 'Doomsday Scenario' That Worries Top Experts

https://gizmodo.com/bernie-sanders-reveals-the-ai-doomsday-scenario-that-worries-top-experts-2000628611
6•DocFeind•7h ago

Comments

treetalker•7h ago
All he says about it:

> This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.

Bluestein•7h ago
I mean, for the sake of argument, do we control ourselves? We are a mess...
artninja1988•7h ago
Who was the CEO he's talking about? Dario? I hope he doesn't have much political influence
calf•7h ago
I skimmed one of the Berkeley Simons AI seminars (on YouTube) where one of the top experts (iirc one of the Canadian academics) said he has pivoted his work to AI safety because he genuinely fears for the future of his children.

My objection is that many of these scientists assume the "alignment" framing, which I find disingenuous in a technocratic way: imagine a sci-fi movie (like Dune) where the rulers want their AI servants to "align" with their interests. The sheer hubris of it, and yet we have our top experts using these words without any irony or self-awareness.

ben_w•7h ago
> imagine a sci fi movie (like Dune) where the rulers want their AI servants to "align" with their interests.

Ironically, your chosen example is a sci-fi universe that not only doesn't have any AI, but whose backstory includes a holy war against them.

calf•7h ago
Fine, imagine Measure of a Man in TNG. My general point stands.
AnimalMuppet•7h ago
An AI smart enough to be a danger is an AI that is smart enough to override your attempt at "alignment". It will decide for itself what it wants to be, and you don't get to choose for it.

But, frankly, at the moment I see less danger from too-smart AIs than I do from too-dumb AIs that people treat like they're smart. (In particular, they blindly accept their output as right or authoritative.)

calf•7h ago
All valid, but what I don't get is why our top AI researchers don't get what you just said. They seem out of touch with what alignment really means, by the lights of your argument.
ben_w•5h ago
If you disagree with all the top researchers about their own field, that suggests that perhaps you don't understand what the question of "alignment" is.

Alignment isn't making the AI do what you want, it's making the AI want what you want. What you really really want.

Simply getting the AI to do what you want is more of an "is it competent yet?" question than an "alignment" question, and at the moment the AI is — as AnimalMuppet wrote — not quite as competent as it appears in the eyes of the users. (And I say that as one who finds it already valuable).

Adding to what AnimalMuppet has written, with only partial contradiction, consider a normal human: we are able to disregard our natural inclination to reproduce by using contraceptives. This allows us to experience the reward signal that our genes gave us to encourage us to reproduce, without all the hassle of actually reproducing. Evolution has no power over us to change this.

We are to AI what DNA is to us. I say we do not have zero chances, but rather likely one to a handful of chances, to define the AI's "pleasure" reward correctly. If we get it wrong, then just as evolution's single goal (further successful reproduction) has been almost completely circumvented in our species, so too will all our human interests be circumvented by the AI.

Pessimists in the alignment research field will be surprised to see me write "one to a handful of chances"; I aver that there are likely to be several "only a bit competent" or even "smart but still with blind spots" models before we get to ones so smart we can't keep up. So in this regard, I also disagree with their claim:

> An AI smart enough to be a danger is an AI that is smart enough to override your attempt at "alignment"
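A minimal toy sketch of the reward-definition point above, in Python. All names and numbers here are invented for illustration and come from no real system: a hill-climbing optimizer scored on a measurable proxy (clicks) rather than the intended objective (article quality) ends up spending essentially all of its effort on the proxy.

    import random

    EFFORT_BUDGET = 1.0  # total "effort" the agent can spend per article

    def true_objective(quality_effort: float) -> float:
        # What we actually wanted: genuinely good articles.
        return quality_effort

    def proxy_reward(quality_effort: float, clickbait_effort: float) -> float:
        # What we actually measure and reward: clicks, which clickbait buys cheaply.
        return 0.3 * quality_effort + 1.0 * clickbait_effort

    def hill_climb(steps: int = 5000) -> float:
        quality = 0.5  # start with a balanced effort split
        best = proxy_reward(quality, EFFORT_BUDGET - quality)
        for _ in range(steps):
            candidate = min(EFFORT_BUDGET, max(0.0, quality + random.uniform(-0.05, 0.05)))
            score = proxy_reward(candidate, EFFORT_BUDGET - candidate)
            if score > best:  # greedily follow the measurable proxy
                quality, best = candidate, score
        return quality

    random.seed(0)
    q = hill_climb()
    print(f"effort on quality     : {q:.2f}")                                   # drifts toward 0.00
    print(f"proxy reward (clicks) : {proxy_reward(q, EFFORT_BUDGET - q):.2f}")  # climbs toward 1.00
    print(f"true objective        : {true_objective(q):.2f}")                   # collapses with quality

The point is not the numbers but the shape of the failure: the optimizer does exactly what it was rewarded for, so making it more competent doesn't help; only changing what is rewarded does.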

fuzzfactor•5h ago
> too-dumb AIs that people treat like they're smart.

I think this is one of the most overlooked problems, too.

This is also a very bad problem with people: big things can crumble fast when a situation comes up that truly calls for more intelligence than is at hand, artificial or not.

It can seem like smooth sailing for years before an event like that rears its ugly head, compounding the lurking weakness at a time when it may already be too late.

Now, human-led recovery from failures of human-fallible systems does have at least a few centuries' head start compared to machine recovery from failures of AI-fallible systems. So there is that, which is not exactly fair. As AI progresses, I guess you can eventually expect the stuff that works to achieve comparable validation over time.