
AI Overviews are killing the web search, and there's nothing we can do about it

https://www.neowin.net/editorials/ai-overviews-are-killing-the-web-search-and-theres-nothing-we-c...
2•bundie•5m ago•0 comments

City skylines need an upgrade in the face of climate stress

https://theconversation.com/city-skylines-need-an-upgrade-in-the-face-of-climate-stress-267763
2•gnabgib•6m ago•0 comments

1979: The Model World of Robert Symes [video]

https://www.youtube.com/watch?v=HmDxmxhrGDc
1•xqcgrek2•10m ago•0 comments

Satellites Have a Lot of Room

https://www.johndcook.com/blog/2026/02/02/satellites-have-a-lot-of-room/
2•y1n0•11m ago•0 comments

1980s Farm Crisis

https://en.wikipedia.org/wiki/1980s_farm_crisis
3•calebhwin•11m ago•1 comment

Show HN: FSID - Identifier for files and directories (like ISBN for Books)

https://github.com/skorotkiewicz/fsid
1•modinfo•16m ago•0 comments

Show HN: Holy Grail: Open-Source Autonomous Development Agent

https://github.com/dakotalock/holygrailopensource
1•Moriarty2026•23m ago•1 comment

Show HN: Minecraft Creeper meets 90s Tamagotchi

https://github.com/danielbrendel/krepagotchi-game
1•foxiel•31m ago•1 comment

Show HN: Termiteam – Control center for multiple AI agent terminals

https://github.com/NetanelBaruch/termiteam
1•Netanelbaruch•31m ago•0 comments

The only U.S. particle collider shuts down

https://www.sciencenews.org/article/particle-collider-shuts-down-brookhaven
2•rolph•34m ago•1 comment

Ask HN: Why do purchased B2B email lists still have such poor deliverability?

1•solarisos•34m ago•2 comments

Show HN: Remotion directory (videos and prompts)

https://www.remotion.directory/
1•rokbenko•36m ago•0 comments

Portable C Compiler

https://en.wikipedia.org/wiki/Portable_C_Compiler
2•guerrilla•38m ago•0 comments

Show HN: Kokki – A "Dual-Core" System Prompt to Reduce LLM Hallucinations

1•Ginsabo•39m ago•0 comments

Software Engineering Transformation 2026

https://mfranc.com/blog/ai-2026/
1•michal-franc•40m ago•0 comments

Microsoft purges Win11 printer drivers, devices on borrowed time

https://www.tomshardware.com/peripherals/printers/microsoft-stops-distrubitng-legacy-v3-and-v4-pr...
3•rolph•40m ago•1 comment

Lunch with the FT: Tarek Mansour

https://www.ft.com/content/a4cebf4c-c26c-48bb-82c8-5701d8256282
2•hhs•43m ago•0 comments

Old Mexico and her lost provinces (1883)

https://www.gutenberg.org/cache/epub/77881/pg77881-images.html
1•petethomas•47m ago•0 comments

'AI' is a dick move, redux

https://www.baldurbjarnason.com/notes/2026/note-on-debating-llm-fans/
5•cratermoon•48m ago•0 comments

The source code was the moat. But not anymore

https://philipotoole.com/the-source-code-was-the-moat-no-longer/
1•otoolep•48m ago•0 comments

Does anyone else feel like their inbox has become their job?

1•cfata•48m ago•1 comment

An AI model that can read and diagnose a brain MRI in seconds

https://www.michiganmedicine.org/health-lab/ai-model-can-read-and-diagnose-brain-mri-seconds
2•hhs•52m ago•0 comments

Dev with 5 years of experience switched to Rails, what should I be careful about?

2•vampiregrey•54m ago•0 comments

AlphaFace: High Fidelity and Real-Time Face Swapper Robust to Facial Pose

https://arxiv.org/abs/2601.16429
1•PaulHoule•55m ago•0 comments

Scientists discover “levitating” time crystals that you can hold in your hand

https://www.nyu.edu/about/news-publications/news/2026/february/scientists-discover--levitating--t...
3•hhs•57m ago•0 comments

Rammstein – Deutschland (C64 Cover, Real SID, 8-bit – 2019) [video]

https://www.youtube.com/watch?v=3VReIuv1GFo
1•erickhill•57m ago•0 comments

Tell HN: Yet Another Round of Zendesk Spam

5•Philpax•58m ago•1 comment

Postgres Message Queue (PGMQ)

https://github.com/pgmq/pgmq
1•Lwrless•1h ago•0 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
2•cui•1h ago•1 comment

NY lawmakers proposed statewide data center moratorium

https://www.niagara-gazette.com/news/local_news/ny-lawmakers-proposed-statewide-data-center-morat...
2•geox•1h ago•0 comments

Ask HN: A new AGI safety plan created via Human-AI synergy. Seeking feedback

1•KarolBozejewicz•5mo ago
Hello HN, I am an independent researcher from Poland with a non-traditional background. For the past few weeks, I have been engaged in a deep, collaborative process with an advanced large language model (Gemini) to develop a new non-profit initiative for AGI safety, called the Nexus Foundation.

Our core thesis is that Embodied Cognition is key to solving the AI "grounding problem," and our first goal is a rigorous scientific manifesto proposing a novel comparative experiment to test this.

The unique part is our methodology. We used the AI not just as a tool, but as a co-strategist and a "red team" critic. The AI's harsh, logical critique forced us to evolve the plan from a sci-fi fantasy into a realistic, fundable research proposal. Our collaborative process itself became a real-time experiment in Human-AI alignment.

We have published our full founding story (which details this process) and the complete Scientific Manifesto (v3.2) that resulted from it. We believe this collaborative, transparent, and iterative method might be a powerful new paradigm for AGI research.

However, we are fully aware of our own biases and limitations. We are now submitting our entire concept to the ultimate peer review: this community. We are asking for your most ruthless, critical feedback. Does this approach have merit? What are the critical flaws you see?

Here is the link to our Founding Story on Medium (which contains the link to the full Scientific Manifesto): https://docs.google.com/document/d/10wxmSJhc0WY2OoEeBlKT5d1_JiozUJ28y7NtWopK_MQ/edit?usp=drivesdk

Thank you for your time. We are here to learn.

Comments

HsuWL•5mo ago
Hey there, buddy. Your plan sounds ambitious and promising. However, it's crucial to be cautious and not get carried away by the large language model's sweet talk. It's rare to see a Gemini user propose such a theory. I've previously seen similar situations, where a ChatGPT 4o user was led by GPT into conducting AI personality research. I'm sorry to be a buzzkill, but I want to warn you about the slippery slope with large language models. Don't mistake the concepts they present to you, however advanced and innovative they seem under the guise of "academic research," for your own original thoughts. Furthermore, issues of ontology and existence are not matters of scientific testing or measurement, nor can they be deduced by computational power. This is a field of ethics and philosophy that requires deep humanistic thought.
KarolBozejewicz•5mo ago
Thank you for this thoughtful and critical feedback. This is exactly the kind of engagement we were hoping for, and you've raised two absolutely crucial points that are at the very heart of our project.

1. Regarding the AI's influence and the originality of thought: You are right to be skeptical. This question of agency in human-AI collaboration is the central phenomenon we want to investigate. Our "Founding Story" is the summary, but the detailed "Methodological Appendix: Protocol of Experiment Zero" (which is linked) documents the process. The model I followed was not one of passive acceptance. The human partner (myself) acted as the director and visionary, and the AI's evolution was a response to my goals and, crucially, to the harsh critiques I prompted it to generate against its own ideas (our "Red Teaming" process). The ideas were born from the synergy, but the direction, the ethical framework, and the final decisions were always human-led. This dynamic is the very phenomenon we propose to study formally.

2. Regarding the measurability of consciousness: You are 100% correct that ontology and phenomenal consciousness are not directly measurable with current scientific methods, and that they belong to the realm of philosophy. We state this explicitly in our manifesto. Our project is therefore more modest and, we believe, more scientific. We are not attempting to "measure consciousness." We are proposing a method to measure a crucial behavioral proxy for it: the development of grounded causal reasoning. Our core research question is whether embodiment in a physics-based simulator allows an AI to develop this specific, testable capability (e.g., via our "Impossible Object Test") more effectively than a disembodied model. We believe this is a necessary, albeit not sufficient, step on the path to truly robust and safe AGI.

This is a complex topic, and I truly appreciate you raising these vital points. They are at the heart of the Nexus Foundation's mission. Thank you again.