frontpage.

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
1•momciloo•38s ago•0 comments

Kinda Surprised by Seadance2's Moderation

https://seedanceai.me/
1•ri-vai•39s ago•1 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
1•valyala•40s ago•0 comments

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
1•sgt•1m ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•1m ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•1m ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
2•Keyframe•4m ago•0 comments

AIII: A public benchmark for AI narrative and political independence

https://github.com/GRMPZQUIDOS/AIII
1•GRMPZ23•4m ago•0 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
1•valyala•6m ago•0 comments

The API Is a Dead End; Machines Need a Labor Economy

1•bot_uid_life•7m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•Jyaif•8m ago•0 comments

New wave of GLP-1 drugs is coming–and they're stronger than Wegovy and Zepbound

https://www.scientificamerican.com/article/new-glp-1-weight-loss-drugs-are-coming-and-theyre-stro...
3•randycupertino•10m ago•0 comments

Convert tempo (BPM) to millisecond durations for musical note subdivisions

https://brylie.music/apps/bpm-calculator/
1•brylie•12m ago•0 comments

Show HN: Tasty A.F.

https://tastyaf.recipes/about
1•adammfrank•12m ago•0 comments

The Contagious Taste of Cancer

https://www.historytoday.com/archive/history-matters/contagious-taste-cancer
1•Thevet•14m ago•0 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
1•alephnerd•14m ago•0 comments

Bithumb mistakenly hands out $195M in Bitcoin to users in 'Random Box' giveaway

https://koreajoongangdaily.joins.com/news/2026-02-07/business/finance/Crypto-exchange-Bithumb-mis...
1•giuliomagnifico•14m ago•0 comments

Beyond Agentic Coding

https://haskellforall.com/2026/02/beyond-agentic-coding
3•todsacerdoti•16m ago•0 comments

OpenClaw ClawHub Broken Windows Theory – If basic sorting isn't working what is?

https://www.loom.com/embed/e26a750c0c754312b032e2290630853d
1•kaicianflone•17m ago•0 comments

OpenBSD Copyright Policy

https://www.openbsd.org/policy.html
1•Panino•18m ago•0 comments

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
2•schwentkerr•22m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
2•blenderob•23m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
3•gmays•24m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
2•gurjeet•24m ago•0 comments

Show HN: A toy compiler I built in high school (runs in browser)

https://vire-lang.web.app
1•xeouz•26m ago•1 comments

You don't need Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•27m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
2•nicholascarolan•29m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•29m ago•1 comments

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•29m ago•2 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
2•mooreds•30m ago•0 comments

We built an Artificial Brain that forms memories, generates original thoughts

https://github.com/10111two/Primite-1.02
6•10111two•5mo ago

Comments

10111two•5mo ago
At JN Research, we are exploring a third path between mainstream traditional AI and descriptive neuroscience. Instead of scaling or optimizing trained function approximators, we build Adaptrons: artificial neurons that behave like biological neurons (subthreshold + graded + action potential) and autonomously adapt, both internally and with other Adaptrons in a system. On this substrate, our small artificial brain Primite 1.02 (500 Adaptrons) now shows:

• Original thoughts (novel outputs not seen as stimuli).
• Memory formation and consolidation (short/intermediate/long-term).
• Anticipation: outputs that appear before the corresponding stimulus is presented.

We ran 8 independent experiments with different genetic parameters and share detailed counts, timing, and example outputs. This is not ML training; it's a principles-first cognitive substrate where higher functions emerge from the interaction rules. We also show that higher cognitive functions do not need bigger models or more scale to emerge: early signs appear even at small scale if the fundamental framework allows for it.

If you are curious (or skeptical), we have included the full technical report and a data repo with outputs for verification, plus our prior 1.02 report on original thought and memory.

GitHub repository: https://github.com/10111two/primite-1.02
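The "subthreshold + graded + action potential" description above resembles a leaky integrate-and-fire neuron with a graded analog regime below threshold. A minimal sketch under that reading; the class name, parameters, and adaptation rule here are illustrative guesses, not the actual Primite implementation:

```python
import math

class Adaptron:
    """Illustrative neuron with subthreshold, graded, and spiking regimes.

    Hypothetical sketch of the behavior described in the post (leaky
    integration + graded output + action potential + internal adaptation),
    not the real Primite 1.02 code.
    """

    def __init__(self, threshold=1.0, leak=0.9, adapt_rate=0.05):
        self.threshold = threshold    # potential needed for an action potential
        self.leak = leak              # per-step decay of membrane potential
        self.adapt_rate = adapt_rate  # autonomous internal adaptation speed
        self.potential = 0.0

    def step(self, stimulus):
        # Subthreshold regime: leaky accumulation of input.
        self.potential = self.potential * self.leak + stimulus
        if self.potential >= self.threshold:
            # Action potential: all-or-nothing spike, then reset.
            self.potential = 0.0
            # Internal adaptation: each spike raises the threshold slightly.
            self.threshold += self.adapt_rate
            return 1.0
        # Graded regime: weak analog output proportional to potential.
        return math.tanh(self.potential) * 0.1

neuron = Adaptron()
outputs = [neuron.step(0.6) for _ in range(5)]
spikes = sum(1 for o in outputs if o == 1.0)
```

With a constant 0.6 input, the potential crosses threshold every second step, so the output alternates between small graded values and full spikes, while the threshold creeps upward after each spike.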
10111two•5mo ago
A few anticipated questions:

• "Isn't this just ML/randomness?" - No training or gradient descent is used; only neuron-like rules (graded, subthreshold, action potentials). Outputs are logged and timestamped; anyone can verify them.
• "How do you define 'original thought'?" - An output is "original" if it was never presented as a stimulus during that system's lifetime, yet emerges autonomously.
• "What about controls?" - We ran multiple experiments with different genetic parameters; each yielded different system behaviors. One run was deliberately configured as a pure input/output machine, confirming that adaptability is essential for higher functions.
• "Independent replication?" - We are open to live demos (reviewers choose the inputs) and will provide full raw outputs. Under NDA, reviewers can also set genetic parameters and observe the system's lifetime behavior.
• "Why 500 Adaptrons?" - Our approach is milestone-driven: we demonstrate emergence at small scales first (memory, anticipation), then scale gradually (20k, multimodal, 1M).
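The originality criterion stated above ("never presented as a stimulus during that system's lifetime, yet emerges autonomously") reduces to a set-membership check over the logged stimuli. A hypothetical sketch of that check; the function name and log format are assumptions, and the real report's definition may add further conditions:

```python
def original_outputs(stimuli_log, output_log):
    """Flag outputs never presented as stimuli during the system's lifetime.

    Illustrative implementation of the 'original thought' criterion
    described in the thread; not taken from the Primite codebase.
    """
    seen = set(stimuli_log)
    originals, reported = [], set()
    for out in output_log:
        # Count an output as original only on its first novel occurrence.
        if out not in seen and out not in reported:
            originals.append(out)
            reported.add(out)
    return originals

stimuli = ["A", "B", "A", "C"]      # everything ever shown to the system
outputs = ["B", "D", "A", "E", "D"] # everything the system emitted
novel = original_outputs(stimuli, outputs)  # ["D", "E"]
```

Note that this check is only as strong as the logging: if outputs and stimuli live in different encodings, "never seen" has to be defined up to whatever equivalence the report uses.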
reify•5mo ago
I always find it fascinating how the use of words (spin) can turn something like a pretty little box of electrical impulses, manipulated with a few lines of code, into something amazing called a brain.

Given enough time, I could make this little box into the next god with my extremely intellectual choice of words, even though there is no evidence that the little box can think, feel, or experience life as we know it.

Thinking, in the human sense - now there is a real thing.

10111two•4mo ago
The skepticism from your end is very understandable. But the human brain is made of approximately 86 billion of those pretty little boxes (neurons) of electrical impulses, all governed by the same fundamental principles. Whether you have 100 pretty boxes or a trillion, the rules remain the same. The real magic is to understand those rules and simulate them with lines of code. Thinking, feeling, and experiencing are higher cognitive functions that result from the interactions of those pretty little boxes. Obviously, to get to those cognitive functions you need more boxes; that's totally fair. But even at a small scale, if the rules are correct, you will start seeing early signs that will mature into the cognitive functions you are referencing. Think of the development of a newborn into an adult. So if you get the rules right, it is completely justified to call it an artificial brain. A toy car is essentially still a car; both are governed by the same laws of motion. The logic behind this use of words (spin) is formally defined in the report. I hope this clarifies some of the skepticism.