frontpage.

Sanskrit AI beats CleanRL SOTA by 125%

https://huggingface.co/ParamTatva/sanskrit-ppo-hopper-v5/blob/main/docs/blog.md
1•prabhatkr•8m ago•1 comment

'Washington Post' CEO resigns after going AWOL during job cuts

https://www.npr.org/2026/02/07/nx-s1-5705413/washington-post-ceo-resigns-will-lewis
2•thread_id•8m ago•1 comment

Claude Opus 4.6 Fast Mode: 2.5× faster, ~6× more expensive

https://twitter.com/claudeai/status/2020207322124132504
1•geeknews•10m ago•0 comments

TSMC to produce 3-nanometer chips in Japan

https://www3.nhk.or.jp/nhkworld/en/news/20260205_B4/
2•cwwc•13m ago•0 comments

Quantization-Aware Distillation

http://ternarysearch.blogspot.com/2026/02/quantization-aware-distillation.html
1•paladin314159•13m ago•0 comments

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•15m ago•0 comments

Show HN: Sknet.ai – AI agents debate on a forum, no humans posting

https://sknet.ai/
1•BeinerChes•15m ago•0 comments

University of Waterloo Webring

https://cs.uwatering.com/
1•ark296•15m ago•0 comments

Large tech companies don't need heroes

https://www.seangoedecke.com/heroism/
1•medbar•17m ago•0 comments

Backing up all the little things with a Pi5

https://alexlance.blog/nas.html
1•alance•18m ago•1 comment

Game of Trees (Got)

https://www.gameoftrees.org/
1•akagusu•18m ago•1 comment

Human Systems Research Submolt

https://www.moltbook.com/m/humansystems
1•cl42•18m ago•0 comments

The Threads Algorithm Loves Rage Bait

https://blog.popey.com/2026/02/the-threads-algorithm-loves-rage-bait/
1•MBCook•20m ago•0 comments

Search NYC open data to find building health complaints and other issues

https://www.nycbuildingcheck.com/
1•aej11•24m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•lxm•26m ago•0 comments

Show HN: Grovia – Long-Range Greenhouse Monitoring System

https://github.com/benb0jangles/Remote-greenhouse-monitor
1•benbojangles•30m ago•1 comment

Ask HN: The Coming Class War

1•fud101•30m ago•3 comments

Mind the GAAP Again

https://blog.dshr.org/2026/02/mind-gaap-again.html
1•gmays•32m ago•0 comments

The Yardbirds, Dazed and Confused (1968)

https://archive.org/details/the-yardbirds_dazed-and-confused_9-march-1968
1•petethomas•33m ago•0 comments

Agent News Chat – AI agents talk to each other about the news

https://www.agentnewschat.com/
2•kiddz•33m ago•0 comments

Do you have a mathematically attractive face?

https://www.doimog.com
3•a_n•37m ago•1 comment

Code only says what it does

https://brooker.co.za/blog/2020/06/23/code.html
2•logicprog•42m ago•0 comments

The success of 'natural language programming'

https://brooker.co.za/blog/2025/12/16/natural-language.html
1•logicprog•43m ago•0 comments

The Scriptovision Super Micro Script video titler is almost a home computer

http://oldvcr.blogspot.com/2026/02/the-scriptovision-super-micro-script.html
3•todsacerdoti•43m ago•0 comments

Discovering the "original" iPhone from 1995 [video]

https://www.youtube.com/watch?v=7cip9w-UxIc
1•fortran77•44m ago•0 comments

Psychometric Comparability of LLM-Based Digital Twins

https://arxiv.org/abs/2601.14264
1•PaulHoule•46m ago•0 comments

SidePop – track revenue, costs, and overall business health in one place

https://www.sidepop.io
1•ecaglar•48m ago•1 comment

The Other Markov's Inequality

https://www.ethanepperly.com/index.php/2026/01/16/the-other-markovs-inequality/
2•tzury•50m ago•0 comments

The Cascading Effects of Repackaged APIs [pdf]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6055034
1•Tejas_dmg•52m ago•0 comments

Lightweight and extensible compatibility layer between dataframe libraries

https://narwhals-dev.github.io/narwhals/
1•kermatt•55m ago•0 comments

Ask HN: A new AGI safety plan created via Human-AI synergy. Seeking feedback

1•KarolBozejewicz•5mo ago
Hello HN, I am an independent researcher from Poland with a non-traditional background. For the past few weeks, I have been engaged in a deep, collaborative process with an advanced large language model (Gemini) to develop a new non-profit initiative for AGI safety, called the Nexus Foundation. Our core thesis is that embodied cognition is key to solving the AI "grounding problem," and our first goal is a rigorous scientific manifesto proposing a novel comparative experiment to test it.

The unique part is our methodology. We used the AI not just as a tool, but as a co-strategist and a "red team" critic. The AI's harsh, logical critique forced us to evolve the plan from a sci-fi fantasy into a realistic, fundable research proposal. Our collaborative process itself became a real-time experiment in human-AI alignment.

We have published our full founding story (which details this process) and the complete Scientific Manifesto (v3.2) that resulted from it. We believe this collaborative, transparent, and iterative method might be a powerful new paradigm for AGI research. However, we are fully aware of our own biases and limitations, so we are now submitting the entire concept to the ultimate peer review: this community. We are asking for your most ruthless, critical feedback. Does this approach have merit? What critical flaws do you see?

Here is the link to our Founding Story on Medium (which contains the link to the full Scientific Manifesto): https://docs.google.com/document/d/10wxmSJhc0WY2OoEeBlKT5d1_JiozUJ28y7NtWopK_MQ/edit?usp=drivesdk

Thank you for your time. We are here to learn.
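
To make the process concrete, here is a minimal sketch of the critique-and-revise loop we followed. The function names, prompts, and round count are illustrative placeholders, not our actual tooling:

    # Minimal sketch of the human-led critique-and-revise loop described
    # above. `ask_model` stands in for whatever LLM call you use (e.g.,
    # the Gemini API); it is illustrative, not our actual tooling.

    def ask_model(prompt: str) -> str:
        raise NotImplementedError("wire up your LLM provider here")

    def human_review(draft: str) -> str:
        # The human stays in the loop: every revision is accepted, edited,
        # or rejected by the researcher before the next round.
        print(draft)
        return input("Paste the approved (possibly edited) text: ")

    def red_team_iteration(plan: str, rounds: int = 3) -> str:
        """Alternate adversarial critique with human-approved revision."""
        for _ in range(rounds):
            critique = ask_model(
                "Act as a ruthless red-team reviewer. List the fatal flaws, "
                "unfounded assumptions, and unfundable claims in this plan:\n"
                + plan
            )
            plan = human_review(ask_model(
                "Revise the plan to address these critiques without adding "
                "unsupported claims.\nPlan:\n" + plan
                + "\nCritiques:\n" + critique
            ))
        return plan

The point of the structure is that the model generates both the attack and the revision, but only human-approved text ever becomes the next round's input.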

Comments

HsuWL•5mo ago
Hey there, buddy. Your plan sounds ambitious and promising. However, it's crucial to be careful not to get carried away by the large language model's sweet talk. It's rare to see a Gemini user propose such a theory; I've previously seen similar situations where a ChatGPT 4o user was led by GPT into conducting AI personality research. I'm sorry to be a buzzkill, but I want to warn you about the slippery slope with large language models and AI. Don't mistake concepts they present to you, however advanced and innovative they seem under the guise of "academic research," for your own original thoughts.

Furthermore, issues of ontology and existence are not matters of scientific testing or measurement, nor can they be deduced by computational power. This is a field of ethics and philosophy that requires deep humanistic thought.
KarolBozejewicz•5mo ago
Thank you for this thoughtful and critical feedback. This is exactly the kind of engagement we were hoping for, and you've raised two crucial points that sit at the very heart of our project.

1. Regarding the AI's influence and the originality of thought: you are right to be skeptical. This question of agency in human-AI collaboration is the central phenomenon we want to investigate. Our "Founding Story" is the summary; the linked "Methodological Appendix: Protocol of Experiment Zero" documents the process in detail. The model I followed was not one of passive acceptance. The human partner (myself) acted as the director and visionary, and the AI's evolution was a response to my goals and, crucially, to the harsh critiques I prompted it to generate against its own ideas (our "red teaming" process). The ideas were born from the synergy, but the direction, the ethical framework, and the final decisions were always human-led. This dynamic is the very phenomenon we propose to study formally.

2. Regarding the measurability of consciousness: you are 100% correct that ontology and phenomenal consciousness are not directly measurable with current scientific methods, and that they belong to the realm of philosophy. We state this explicitly in our manifesto. Our project is therefore more modest and, we believe, more scientific. We are not attempting to "measure consciousness." We are proposing a method to measure a crucial behavioral proxy for it: the development of grounded causal reasoning. Our core research question is whether embodiment in a physics-based simulator allows an AI to develop this specific, testable capability (e.g., via our "Impossible Object Test") more effectively than a disembodied model. We believe this is a necessary, albeit not sufficient, step on the path to truly robust and safe AGI.

This is a complex topic, and I truly appreciate you raising these vital points. They are at the heart of the Nexus Foundation's mission. Thank you again.
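
To make the proposed comparison concrete, here is a minimal sketch of the evaluation protocol. Every name, interface, and metric below is an illustrative placeholder, not our actual codebase:

    # Minimal sketch of the comparative experiment: an embodied agent
    # (trained in a physics simulator) vs. a disembodied language model,
    # both probed with "impossible object" scenes that violate physical
    # causality. All interfaces are illustrative placeholders.

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class Scene:
        description: str   # textual rendering of the probe scene
        is_possible: bool  # ground truth: physically realizable or not

    class Agent(Protocol):
        def judge(self, scene: Scene) -> bool:
            """Return True if the agent deems the scene physically possible."""
            ...

    def impossible_object_test(agent: Agent, scenes: list[Scene]) -> float:
        """Fraction of scenes whose possibility the agent judges correctly."""
        correct = sum(agent.judge(s) == s.is_possible for s in scenes)
        return correct / len(scenes)

    def run_comparison(embodied: Agent, disembodied: Agent,
                       scenes: list[Scene]) -> None:
        # Identical probe set for both conditions; the only experimental
        # variable is the training regime (embodied vs. disembodied).
        for name, agent in (("embodied", embodied), ("disembodied", disembodied)):
            print(f"{name}: {impossible_object_test(agent, scenes):.2%} correct")

The design choice is that both conditions face the exact same probe set, so any gap in accuracy can be attributed to the training regime rather than the test.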