
Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
2•witnessme•1m ago•1 comments

Effects of Zepbound on Stool Quality

https://twitter.com/ScottHickle/status/2020150085296775300
1•aloukissas•4m ago•0 comments

Show HN: Seedance 2.0 – The Most Powerful AI Video Generator

https://seedance.ai/
1•bigbromaker•7m ago•0 comments

Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•13m ago•1 comments

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
2•alephnerd•16m ago•1 comments

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•16m ago•1 comments

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
1•pbradv•19m ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
3•hasheddan•19m ago•0 comments

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
2•ArtemZ•31m ago•4 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•32m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•34m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
3•duxup•36m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•38m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•50m ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•52m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•53m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•54m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•58m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•1h ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
2•g1raffe•1h ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•1h ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
3•rolph•1h ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•1h ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•1h ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•1h ago•1 comments

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
2•cedel2k1•1h ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
41•chwtutha•1h ago•6 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•1h ago•1 comments

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•1h ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comments

A conservative vision for AI alignment

https://www.lesswrong.com/posts/iJzDm6h5a2CK9etYZ/a-conservative-vision-for-ai-alignment
7•flypunk•5mo ago

Comments

bediger4000•5mo ago
It must be hard for grassroots folks like these two who actually seem to believe in movement conservative principles. They got abandoned by all levels of conservative leadership. Conservative leadership did a U-turn on essentially every issue.
ConceptJunkie•5mo ago
Idealists are always abandoned by political leadership.
flypunk•5mo ago
IMO they win just by making me (and you) think of it in this way. I don't think they are looking at it from a political leadership perspective, but rather from a cultural/research angle. As a father of 3 teenagers I find their take very convincing. And of course, before the kids my views on many issues were much more liberal.
davidmanheim•5mo ago
Yes - I'm fairly culturally conservative, but very much don't support the (so-called) conservative political leaders basically anywhere.
bediger4000•5mo ago
Why do you say "of course"? Having kids pushed me more liberal, school funding and curriculum issues being the wedges, but seeing lots of kids and families sure helped.
cactacea•5mo ago
The article constructs a straw man of liberalism and then goes completely off the rails from there.

> Not suffering for its own sake, or trauma, but the kind of tension that arises from limits, from saying no, from sustaining hard-won compromises. In a reductionist frame, every source of pain looks like a bug to fix, any imposed limit is oppression, and any disappointment is something to try to avoid. But from a holistic view, pain often marks the edge of a meaningful boundary. The child who’s told they can’t stay out late may feel hurt. But the boundary says: someone cares enough to enforce limits. Someone thinks you are part of a system worth preserving. To be unfairly simplistic, values can’t be preserved without allowing this pain, because limits are painful.

Good lord, how much meaningless slop can you spew onto one page?

armchairhacker•5mo ago
Aligning a god-like superintelligence is asking "what do you want when you can have everything?"

The Twilight Zone episode "A Nice Place to Visit"* is about a man who gets whatever wish he desires. Initially he's overjoyed, but after a month he becomes numb and miserable: with no conflict, he has no purpose (it turns out, he's in hell). In reality, a superintelligence that could grant anything could grant more: it could make people not "feel" numb and purposeless even though they have everything. But what would they "feel", would they be conscious, would they be "human"?

This is something that the article sort of addresses: perhaps there's something inherent to conflict and struggle. Also, that people often ask for things that make them sad in the long run: e.g. children asking to eat junk food and stay up late, forming bad habits that hurt them later in life. A near-godlike superintelligence could solve most modern problems (e.g. maintain people's health and sleep/wake states regardless of what/when they eat/sleep), but would those fixes create future problems it can't solve? Basically, giving people whatever they want (the article's definition of "liberalism", which has become a term with many common definitions) has consequences.

Sure, taking this reasoning too far lets you justify any suffering (because "suffering is necessary") and any restriction (because "allowing it would make you unhappy in the long run"). But I think even most liberals can acknowledge it's a fair consideration: at least to prevent the Twilight Zone scenario or a loss of humanity, or at least because solving problems too fast, without thinking through and accommodating the solution, can create larger unsolvable problems later. See: LLMs making people stupid, promoting delusions, increasing the wealth gap, and polluting social discourse even more than now.

My stupid opinion: that's an impossible question, but it's also one we don't need to solve. What we have right now is AI that's far from superintelligent, and lots of problems, including the ones I described above. I think what we should do, and the only thing we can do right now, is keep solving problems; we should try to do so in ways that create the smallest second-order problems, but only avoid solving a problem if every solution is likely to create a larger second-order one.

My politics lean towards "live and let live" largely because it's practical. Restraints based on "moral" and "holistic" principles do benefit some people in the long run, but whenever they're applied on a large scale, they hurt more people. Somebody only knows what's better for somebody else than that person does if they're significantly more competent (in whatever category they claim to know better) and they really understand the person's values and emotions (especially what makes them happy or sad).

* https://en.wikipedia.org/wiki/A_Nice_Place_to_Visit