
Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
1•sam256•1m ago•0 comments

AI Command and Staff–Operational Evidence and Insights from Wargaming

https://www.militarystrategymagazine.com/article/ai-command-and-staff-operational-evidence-and-in...
1•tomwphillips•2m ago•0 comments

Show HN: CCBot – Control Claude Code from Telegram via tmux

https://github.com/six-ddc/ccbot
1•sixddc•3m ago•1 comments

Ask HN: Is the CoCo 3 the best 8 bit computer ever made?

1•amichail•5m ago•0 comments

Show HN: Convert your articles into videos in one click

https://vidinie.com/
1•kositheastro•7m ago•0 comments

Red Queen's Race

https://en.wikipedia.org/wiki/Red_Queen%27s_race
2•rzk•8m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
2•gozzoo•10m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•11m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
2•tosh•12m ago•0 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•12m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•17m ago•1 comments

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•20m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•23m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•24m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•24m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•25m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•mitchbob•25m ago•1 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
2•alainrk•26m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•26m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
2•edent•30m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•33m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•33m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•39m ago•1 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
6•onurkanbkrc•40m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•40m ago•1 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•43m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•46m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•46m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•46m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
2•mnming•46m ago•0 comments

Go AI Is No Longer a "Black Box"

1•AILab•8mo ago
In 2016, AlphaGo rose to fame, and since then AI has made remarkable progress in playing strength, efficiency, and versatility. However, its reasoning process has remained a "black box": even when it can output win-rate evaluations and move probabilities, it cannot explain in human language why a particular move is better.

On May 23, the Shanghai Artificial Intelligence Laboratory (Shanghai AI Lab) announced an advanced version of its reasoning large model, InternThinker, which not only attains professional-level Go skill but can also demonstrate a transparent chain of thought.

The new-generation InternThinker has achieved breakthroughs in Go tasks: it not only demonstrates strong professional-level performance, but also becomes the first large model to break open the "black box" of Go AI, explaining its moves in natural language.

In the fourth game of the match between Lee Sedol and AlphaGo, Lee Sedol played his 78th move at L11. This move, which Gu Li called the "divine move," directly reversed the situation and helped Lee Sedol win the game. When researchers replayed this famous game, InternThinker evaluated the move as "quite tricky... This move perfectly resolved the threat at L11, re-established control in the center, and laid the groundwork for subsequent attacks," and then proposed playing at L10 in response.

You can try InternThinker here: https://chat.intern-ai.org.cn/

InternThinker's powerful reasoning capabilities and its breakthroughs in Go stem from an innovative training environment. For complex logical reasoning tasks, obtaining accurate feedback on both the process and the result is critical. To this end, the researchers built a large-scale, standardized, and scalable interactive verification environment called InternBootcamp, effectively an "accelerated training camp" in which the model can efficiently acquire professional skills and "grow" rapidly.
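To make the idea concrete, here is a minimal, hypothetical sketch of what one such verification environment could look like: a generator that produces task instances at a requested difficulty, and a verifier that scores a model's answer programmatically. The class, fields, and method names below are illustrative assumptions, not the actual InternBootcamp API.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of a verification environment, loosely in the spirit of
# InternBootcamp; all names and interfaces here are illustrative, not the real API.

@dataclass
class Task:
    prompt: str        # question posed to the model
    answer: int        # ground-truth result used for verification
    difficulty: int    # knob controlling how hard the instance is

class ArithmeticBootcamp:
    """Generates arithmetic reasoning tasks with controllable difficulty
    and verifies model answers programmatically (no human annotation)."""

    def generate(self, difficulty: int) -> Task:
        # Harder instances use more terms and larger numbers.
        terms = [random.randint(1, 10 ** difficulty) for _ in range(difficulty + 2)]
        prompt = "Compute: " + " + ".join(map(str, terms))
        return Task(prompt=prompt, answer=sum(terms), difficulty=difficulty)

    def verify(self, task: Task, model_output: str) -> float:
        # Reward 1.0 for a correct final answer, 0.0 otherwise.
        try:
            return 1.0 if int(model_output.strip().split()[-1]) == task.answer else 0.0
        except (ValueError, IndexError):
            return 0.0
```

Because both generation and verification are code, such an environment can produce feedback at scale without human-labeled answers.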

[Figure: the interaction process between InternBootcamp and large language models]

Built with automated code agents, InternBootcamp encompasses over 1,000 verification environments covering a wide range of complex logical reasoning tasks, and it helps large-model researchers explore reinforcement-learning-based training. InternBootcamp can generate reasoning tasks with controllable difficulty in a standardized, batched manner, such as Sudoku, decoding games, Go, and scientific problems, and interact with large models to provide feedback. By constructing and mixing training across many kinds of professional knowledge at scale, it frees large models from the laborious pattern of collecting annotated question-and-answer data and avoids the gaming of traditional reward models, establishing a new paradigm for improving the reasoning ability of large models.
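Building on the sketch above, the interaction loop between such environments and a language model might look roughly like the following. Here `query_model`, `rl_update`, and the `train` loop are placeholders invented for illustration, not any real InternThinker or InternBootcamp interface; the environments are assumed to expose the generate/verify methods sketched earlier.

```python
import random

# Hypothetical sketch of the environment-model interaction loop during
# reinforcement learning. `query_model` and `rl_update` are stand-ins for
# the policy model and its optimizer; they are not a real API.

def query_model(prompt: str) -> str:
    # Stand-in for sampling a response from the policy model.
    return "0"

def rl_update(prompt: str, response: str, reward: float) -> None:
    # Stand-in for a policy-gradient-style update using the verified reward.
    pass

def train(bootcamps, steps: int = 1000, difficulty: int = 2) -> None:
    """Mixed training over several environments: each step samples an
    environment, generates a task, lets the model answer, verifies the
    answer programmatically, and feeds the reward back to the learner."""
    for _ in range(steps):
        env = random.choice(bootcamps)            # mix tasks from many domains
        task = env.generate(difficulty)           # controllable difficulty
        response = query_model(task.prompt)       # model attempts the task
        reward = env.verify(task, response)       # automatic verification, no human labels
        rl_update(task.prompt, response, reward)  # reward drives the RL update
```

In this setup the reward comes directly from the environment's verifier, which is what lets training sidestep manual answer annotation and the reward-model gaming described above.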

Beyond Go, InternThinker has also delivered strong performance on other tasks. Through mixed reinforcement learning across multiple tasks, its average score on a test suite of dozens of tasks exceeds that of mainstream reasoning models from China and abroad, such as o3-mini, DeepSeek-R1, and Claude-3.7-Sonnet.

On some tasks, its performance far exceeds that of other current large reasoning models.

InternBootcamp is open source: https://github.com/InternLM/InternBootcamp