
Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
1•gnufx•1m ago•0 comments

Transcribe your aunts post cards with Gemini 3 Pro

https://leserli.ch/ocr/
1•nielstron•5m ago•0 comments

.72% Variance Lance

1•mav5431•6m ago•0 comments

ReKindle – web-based operating system designed specifically for E-ink devices

https://rekindle.ink
1•JSLegendDev•7m ago•0 comments

Encrypt It

https://encryptitalready.org/
1•u1hcw9nx•7m ago•1 comments

NextMatch – 5-minute video speed dating to reduce ghosting

https://nextmatchdating.netlify.app/
1•Halinani8•8m ago•1 comments

Personalizing esketamine treatment in TRD and TRBD

https://www.frontiersin.org/articles/10.3389/fpsyt.2025.1736114
1•PaulHoule•10m ago•0 comments

SpaceKit.xyz – a browser‑native VM for decentralized compute

https://spacekit.xyz
1•astorrivera•10m ago•1 comments

NotebookLM: The AI that only learns from you

https://byandrev.dev/en/blog/what-is-notebooklm
1•byandrev•11m ago•1 comments

Show HN: An open-source starter kit for developing with Postgres and ClickHouse

https://github.com/ClickHouse/postgres-clickhouse-stack
1•saisrirampur•11m ago•0 comments

Game Boy Advance d-pad capacitor measurements

https://gekkio.fi/blog/2026/game-boy-advance-d-pad-capacitor-measurements/
1•todsacerdoti•12m ago•0 comments

South Korean crypto firm accidentally sends $44B in bitcoins to users

https://www.reuters.com/world/asia-pacific/crypto-firm-accidentally-sends-44-billion-bitcoins-use...
2•layer8•12m ago•0 comments

Apache Poison Fountain

https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fcee1d5
1•atomic128•14m ago•2 comments

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•15m ago•2 comments

Google in Your Terminal

https://gogcli.sh/
1•johlo•16m ago•0 comments

Shannon: Claude Code for Pen Testing: #1 on Github today

https://github.com/KeygraphHQ/shannon
1•hendler•16m ago•0 comments

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
2•Bender•21m ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•21m ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•22m ago•0 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•gmays•23m ago•0 comments

Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•23m ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empties user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•23m ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•24m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
4•Bender•25m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•26m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•27m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•29m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•31m ago•0 comments

Show HN: Mirror Parliament where users vote on top of politicians and draft laws

https://github.com/fokdelafons/lustra
1•fokdelafons•32m ago•1 comments

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

1•Chance-Device•33m ago•0 comments

AGI is marketed as Spearman's 'g', but architected like Guilford's model

3•jatinkk•1mo ago
I am not a tech expert and I don't work in the tech industry, so this is an outsider's perspective. The marketing around AGI promises Spearman's g: a general, fluid intelligence that can adapt to new, unseen problems. But the engineering, specifically "Mixture of Experts" routing and distinct modules, looks exactly like J.P. Guilford's Structure of Intellect, which viewed intelligence as a collection of ~150 specific, independent abilities.

The issue isn't just how these parts are stitched together. The issue I see is: what happens when the model faces a problem that doesn't fit into one of its pre-defined parts? How will they ensure the output doesn't look fragmented when the architecture relies on switching between specialized "experts" rather than on a unified reasoning core?

A collection of specific skills (Guilford) is not the same as the ability to adapt to anything (Spearman). By optimizing for specific components, we are building a system that is great at known tasks but may fundamentally lack the fluid reasoning needed for true general intelligence. I am not anti-AI; I simply feel we might need to revisit our approach. We can't expect to reach the right destination on the wrong highway.
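For readers unfamiliar with the architecture being criticized, the "switching between specialized experts" can be sketched in a few lines. This is a toy illustration of top-k expert routing, not any real model's implementation; the experts, gate weights, and numbers are all invented for the example:

```python
import math

def softmax(xs):
    """Convert raw scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, gate_weights, top_k=2):
    """Route input x to the top_k experts picked by a toy linear gate.

    experts: list of callables (the specialized modules)
    gate_weights: one scoring weight per expert
    Only the selected experts run; their outputs are blended by
    renormalized gate probabilities.
    """
    scores = [w * x for w in gate_weights]              # gating logits
    probs = softmax(scores)
    ranked = sorted(range(len(experts)), key=lambda i: -probs[i])[:top_k]
    total = sum(probs[i] for i in ranked)
    return sum(probs[i] / total * experts[i](x) for i in ranked)

# Three illustrative "experts", each a narrow skill:
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
gate = [0.1, 0.5, -0.3]
y = moe_forward(3.0, experts, gate)
```

Note that an input the gate routes poorly simply gets a blend of the wrong specialists; there is no fallback "general" pathway in this picture, which is the fragmentation worry stated above.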

Comments

o1inventor•1mo ago
From what I gather it boils down to this: just as new capabilities appeared when parameter counts increased, new and more general skills may emerge, or be engineered, once the number of specialized skills is large enough.

There are already examples of this in the wild: language and vision models not just performing scientific experiments, but coming up with new hypotheses on their own, designing experiments from scratch, laying out plans for how to carry them out, instructing human helpers to run them, gathering data, validating or invalidating hypotheses, and so on.

The open question is whether we can derive a process, come up with data, and train models such that they can (1) detect when a task or question falls outside the training distribution, and (2) come up with a process for exploring that new distribution such that they eventually arrive at an acceptable answer, if not a good one.
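Point (1) at least has a well-known simple baseline: flag an input as out-of-distribution when the model's most confident prediction is still not very confident. A minimal sketch of that maximum-softmax-probability check; the threshold and logits here are illustrative, not tuned values from any real system:

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def looks_out_of_distribution(logits, threshold=0.5):
    """Flag an input as out-of-distribution when the top class
    probability falls below `threshold` (an illustrative cutoff)."""
    return max(softmax(logits)) < threshold

confident = looks_out_of_distribution([5.0, 0.1, 0.2])  # → False: sharply peaked
uncertain = looks_out_of_distribution([0.1, 0.0, 0.2])  # → True: near-uniform, flagged
```

Point (2), actually exploring the new distribution rather than just flagging it, is the part with no comparably simple recipe.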

jatinkk•1mo ago
That is definitely the industry's hope—that quantity eventually becomes quality (emergence). But my concern comes from the history of the model itself. In psychology, Guilford’s "cube" of 150 specialized factors never emerged into a unified intelligence. It just remained a complex list of separate abilities. The "open question" you mention (how to handle tasks outside the training distribution) is exactly where I think the Guilford architecture hits a wall. If we build by adding specific modules, the system might never learn how to reason through the "unknown"—it just waits for a new module to be added.