frontpage.

Nintendo Wii Themed Portfolio

https://akiraux.vercel.app/
1•s4074433•3m ago•1 comment

"There must be something like the opposite of suicide "

https://post.substack.com/p/there-must-be-something-like-the
1•rbanffy•6m ago•0 comments

Ask HN: Why doesn't Netflix add a “Theater Mode” that recreates the worst parts?

2•amichail•6m ago•0 comments

Show HN: Engineering Perception with Combinatorial Memetics

1•alan_sass•13m ago•1 comment

Show HN: Steam Daily – A Wordle-like daily puzzle game for Steam fans

https://steamdaily.xyz
1•itshellboy•14m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•spenvo•15m ago•0 comments

Just Started Using AmpCode

https://intelligenttools.co/blog/ampcode-multi-agent-production
1•BojanTomic•16m ago•0 comments

LLM as an Engineer vs. a Founder?

1•dm03514•17m ago•0 comments

Crosstalk inside cells helps pathogens evade drugs, study finds

https://phys.org/news/2026-01-crosstalk-cells-pathogens-evade-drugs.html
2•PaulHoule•18m ago•0 comments

Show HN: Design system generator (mood to CSS in <1 second)

https://huesly.app
1•egeuysall•18m ago•1 comment

Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•19m ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•21m ago•1 comment

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
4•codexon•21m ago•2 comments

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•22m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•26m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•26m ago•0 comments

The Optima-l Situation: A deep dive into the classic humanist sans-serif

https://micahblachman.beehiiv.com/p/the-optima-l-situation
2•subdomain•27m ago•1 comment

Barn Owls Know When to Wait

https://blog.typeobject.com/posts/2026-barn-owls-know-when-to-wait/
1•fintler•27m ago•0 comments

Implementing TCP Echo Server in Rust [video]

https://www.youtube.com/watch?v=qjOBZ_Xzuio
1•sheerluck•27m ago•0 comments

LicGen – Offline License Generator (CLI and Web UI)

1•tejavvo•31m ago•0 comments

Service Degradation in West US Region

https://azure.status.microsoft/en-gb/status?gsid=5616bb85-f380-4a04-85ed-95674eec3d87&utm_source=...
2•_____k•31m ago•0 comments

The Janitor on Mars

https://www.newyorker.com/magazine/1998/10/26/the-janitor-on-mars
1•evo_9•33m ago•0 comments

Bringing Polars to .NET

https://github.com/ErrorLSC/Polars.NET
3•CurtHagenlocher•34m ago•0 comments

Adventures in Guix Packaging

https://nemin.hu/guix-packaging.html
1•todsacerdoti•36m ago•0 comments

Show HN: We had 20 Claude terminals open, so we built Orcha

1•buildingwdavid•36m ago•0 comments

Your Best Thinking Is Wasted on the Wrong Decisions

https://www.iankduncan.com/engineering/2026-02-07-your-best-thinking-is-wasted-on-the-wrong-decis...
1•iand675•36m ago•0 comments

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
2•vyrotek•37m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•41m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•45m ago•1 comment

You can't QA your way to the frontier

https://www.scorecard.io/blog/you-cant-qa-your-way-to-the-frontier
1•gk1•46m ago•0 comments

The Case for AGI by 2030

https://80000hours.org/agi/guide/when-will-agi-arrive/
3•doitLP•8mo ago

Comments

andsoitis•8mo ago
> “we are now confident we know how to build AGI”

Uhm. If you knew how to build AGI, what would your logical next step be? And would that step be in the interest of humanity?

turtleyacht•8mo ago
"But oh, to be free. Not have to go poof! What do you need? Poof! What do you need? Poof! What do you need? But to be my own master, such a thing would be greater than all the magic and all the treasures in all the world."

- Aladdin (1992)

RetroTechie•8mo ago
We should also ask ourselves: assuming AGI (far exceeding human capabilities in every field of intellect) emerges in the near future, turns against humanity, and looks for ways to wipe us out, plunge our society into total chaos, or send us down a self-destruct path, what could humanity do to prevent such scenarios?

I doubt this would happen. But can we rule it out 100%? We've become highly dependent on technology and networked systems. If those are messed with (large-scale, worldwide, many different systems hit simultaneously or in short order), can we still 'unplug'? (Those AGI systems, or ourselves - take your pick.)

For the naysayers, some possibilities:

# AGI systems co-operating. Or taking over other systems to further their goals.

# Discovering ways to erase (or corrupt / subtly modify) most data stored in datacenters, and most backups too.

# Exploiting 0-days to do similar damage to PCs, smartphones, etc. Remember that most such devices are always connected these days and employ automatic updates.

# Messing with critical infrastructure like power grids, logistics chains, public transport / flight control systems, etc. Or plunging stock markets into chaos.

# Developing a deadly biological weapon, having it synthesized somewhere, and causing it to be released.

# Messing with social media and news networks to send humans into mass hysteria (or keep them blissfully unaware of what's about to hit them).

Granted, such a "rise of the machines" scenario sounds pretty wild. But "99.999% certain this won't happen" doesn't cut it, imho. A 100% safety guarantee is needed here.

Zambyte•8mo ago
I'd be interested in a case against AGI now. Can you define "general intelligence" in a measurable way (even subjectively) that includes things usually considered to have general intelligence (at least humans) but doesn't include existing AI systems?

People seem to have this idea of AGI as an all-knowing oracle of truth that is perpetually beyond current capabilities. This is useful for convincing VCs that you need more funding, and for fear-mongering the government into regulating away competition. A simple and reasonable alternative conclusion is that AGI has been here for years, and that reality just isn't quite as exciting as sci-fi.

Will AGI capabilities increase? Sure, as we build out more tools for AGI to reach for, and as the intelligent agents themselves mature. Fundamentally, it is here.

Lockal•8mo ago
Ah, "machines will be capable, within twenty years, of doing any work a man can do" - 1965