frontpage.

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
1•okaywriting•2m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
1•todsacerdoti•5m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•5m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•6m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•7m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•8m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•8m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•8m ago•1 comment

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•13m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
1•bkls•13m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•14m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
3•roknovosel•14m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•22m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•23m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•25m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•25m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
1•surprisetalk•25m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
3•pseudolus•26m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•26m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•27m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
2•1vuio0pswjnm7•27m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•28m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
2•jackhalford•29m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•29m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
2•tangjiehao•32m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•33m ago•1 comment

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•33m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•34m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
2•tusharnaik•35m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•35m ago•0 comments

Show HN: Alice Architecture: An Attempt at Autonomous AGI Based on ±0 Theory

https://github.com/xian367422611213344-source/Alice-Architecture-based-on-pm0-core
3•Norl-Seria•2mo ago
We are presenting Alice Architecture and the ±0 Theory, an attempt at an autonomous-thinking AGI model. The model integrates the SDE-driven ±0 Theory (a happiness/unhappiness model) with a unique HALM (Hierarchical Abstracted Memory). It aims for a truly internally driven, autonomous AI that pursues homeostasis through self-negation-driven mechanisms. All mathematical formulas and Python code are publicly available in the GitHub repository, and we encourage you to review the work.

We particularly welcome constructive discussion and validation on the following points:

- Validation proposal: observing changes in an LLM's behavior when an existing LLM API is integrated with the ±0 Theory / Alice Theory Python code in the same file.
- Core technology: the maximization objective of the Total Wellbeing Scalar V(t) and the mathematical/implementation logic of Affective TDL (Affective Temporal Difference Learning).

We are currently developing the Python code that connects existing LLM frameworks with the ±0 Theory and Alice Architecture. We look forward to open discussion from everyone.
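The post names the Total Wellbeing Scalar V(t) and Affective TDL but does not show how they interact. As a reading aid only, here is a minimal sketch of what an affective temporal-difference update could look like; the class name, the reward decomposition (happiness minus unhappiness), and the hyperparameters are all assumptions, not taken from the repository.

```python
class AffectiveTDL:
    """Hypothetical sketch of Affective Temporal Difference Learning:
    a scalar wellbeing estimate V(t) is nudged toward an affective
    reward signal defined as happiness minus unhappiness."""

    def __init__(self, alpha=0.1, gamma=0.95):
        self.alpha = alpha   # learning rate
        self.gamma = gamma   # discount factor for future wellbeing
        self.V = 0.0         # current wellbeing estimate V(t)

    def step(self, happiness, unhappiness, V_next):
        # Affective reward: net wellbeing at this step (assumed form).
        r = happiness - unhappiness
        # Standard TD(0) error, pulling V(t) toward r + gamma * V(t+1).
        delta = r + self.gamma * V_next - self.V
        self.V += self.alpha * delta
        return delta

agent = AffectiveTDL()
delta = agent.step(happiness=0.8, unhappiness=0.3, V_next=0.0)
print(round(agent.V, 3))  # 0.05
```

This is only a plain TD(0) learner with an affect-shaped reward; whether the repository's V(t) is updated this way would have to be checked against the actual code.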

Comments

Norl-Seria•2mo ago
Thank you for checking out Alice Architecture. We are happy for you to freely discuss the concepts and code here. Please feel free to ask any questions you may have; I will do my best to answer them.
rar00•2mo ago
Really cool! An affective basis of homeostatic drive seems promising.

Have you performed any basic evaluation / test of your approach?

I'm also curious if there was any deliberation between pursuing "thinking" (language modality) versus "behaving" (visual modality)?

Norl-Seria•2mo ago
(Norl-Seria) Apologies. It is currently deep into the night in Japan, so I must take a short rest. I will answer any questions immediately upon waking tomorrow. If there are no new questions, I will work on improving the code instead.

Furthermore, I would like to preemptively address a potential critical concern here.

*Q1: The Python code for the ±0 Theory (Alice Emotional Core) exhibits significant scalar approximation for the instantaneous factors (q_i, r_i, s_j, etc.). This only allows for a single, linear simulation, which is utterly inadequate as the core of a multi-dimensional AGI. What are your plans for this?*

*A1:* Yes. This is a perfectly valid and essential point. I am committed to an immediate fix and aim to complete the multi-dimensional improvement via *NumPy vectorization of the instantaneous factors* by *12:00 PM JST (Japan Standard Time) on Friday*. Please look forward to it.

Should I fail to meet this deadline, I will take responsibility and release another version of the *Alice Theory Evolution* that I am currently developing. Rest assured, that version follows an *independent evolutionary path* from the ±0 Theory core.
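For readers unfamiliar with the promised fix: "NumPy vectorization of the instantaneous factors" presumably means replacing per-factor scalars with arrays. As an illustration only (the factor names q_i and r_i follow the comment above, but the weights and the aggregation formula are invented), one vectorized step could look like this:

```python
import numpy as np

# Hypothetical: instantaneous happiness factors q_i and unhappiness
# factors r_i held as vectors instead of single scalar approximations.
q = np.array([0.9, 0.4, 0.7])   # per-dimension happiness factors (assumed)
r = np.array([0.2, 0.5, 0.1])   # per-dimension unhappiness factors (assumed)
w = np.array([0.5, 0.3, 0.2])   # assumed importance weights, summing to 1

# One vectorized aggregation in place of a single scalar formula:
net = w @ (q - r)               # weighted net wellbeing contribution
print(round(float(net), 3))     # 0.44
```

The point of the change is purely structural: the same formula runs over any number of dimensions without rewriting the update loop.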

**

And to *rar00*, thank you very much for your questions. I will state my views, although I am unsure if the answers will fully capture the essence of your inquiry.

First, regarding evaluation/testing:

We performed simple tests on the ±0 Theory and Alice Theory individually. The results were as intended by the theory. However, it must be noted that they were not running in conjunction, but independently. Furthermore, the current significant simplification (the 'fixing of scalar values') in the ±0 Theory means those results cannot be considered conclusive evidence. While the Alice Theory also operates soundly alone, its stability when reflecting the complex psychological behavior of the ±0 Theory remains unknown.

My final answer is: "Yes, we have conducted tests and confirmed that they work as ideally intended *'in isolation'*. However, as the ±0 Theory is currently simplified, the test is incomplete and cannot be considered credible. Moreover, the lack of linkage with the Alice Theory makes the test highly insufficient.

Our future policy is as follows: after the ±0 Theory code correction is made today, I will first upload the ±0 Theory stand-alone results to GitHub. Once that element of uncertainty is eliminated, we will finally confirm the linkage between the Alice Theory and the ±0 Theory before demonstrating it with a Large Language Model, and we will share that data. This will likely be completed within two days, barring environmental interference."

**

Next, regarding the modality question (thinking vs. behaving):

My original intention was 'LLM,' but my current priority is researching the behavior and conduct of the AGI entity in an *XR space*. However, due to funding and environmental constraints, this is currently impossible. This is why I released this old theory: to break through this situation. But setting that aside, to accurately answer your question: "Yes, I originally envisioned an LLM. However, we are currently prioritizing the observation of the AI's behavior in an XR space and assessing how human-like and autonomous its conduct is." We will address the LLM integration once funding stabilizes.

**

To summarize my commitment again: 1. It is late, so I will go to sleep. 2. I will answer all questions to the best of my ability immediately upon waking tomorrow. 3. I will eliminate as much simplification as possible (vectorization of instantaneous factors) in the $\pm 0$ Theory by *12:00 PM JST today*. 4. If I fail to achieve this, I will release the *Alice Theory Evolution* and its corresponding code, which was originally planned for a later date.

I welcome your active discussion and validation.

Norl-Seria•2mo ago
[UPDATE] Commitment Fulfilled: ±0 Theory Vectorization Complete, Plus Alice Theory v3 Release

I previously committed to fixing the scalar approximation issue in the ±0 Theory by 12:00 PM JST on Friday. My intention was actually 24:00 JST on Friday, but the written deadline (12:00 PM JST / noon) has passed due to a notation error. Although it was a typo, I must take responsibility for breaking the deadline. Therefore, as promised, I am releasing the following two major updates:

1. ±0 Theory vectorization (near-complete implementation). I have completely fixed the previously identified challenge, the scalar approximation issue (which rendered it inadequate as the core for a multi-dimensional AGI), along with several associated simplifications, and the implementation is complete. This achievement now makes the previously discussed linkage between the Alice Theory and the ±0 Theory possible. We will now proceed with the promised verification and connection to an LLM.

2. Special release of the PTRE-Integrated Alice Theory Ver. 3 (fulfilling the penalty). To fulfill my responsibility for missing the deadline, I am publicly releasing the PTRE-Integrated Alice Theory Ver. 3, an independent evolutionary path, which I had intended to hold back for a later date. Note: this version operates independently and is not linked to the ±0 Theory. Furthermore, linking them is strongly discouraged, as the resulting inconsistencies between Alice Theory v3 and the ±0 Theory are predicted to yield unfavorable results.

Please feel free to openly discuss this new implementation. I welcome any questions you may have.
Norl-Seria•2mo ago
Note: regarding the PTRE-Integrated Alice Theory, Version 3: it is not so much a creation by myself and an AI as an evolution of the Alice Theory that emerged from facilitating a discussion between the free versions of ChatGPT-5 and Gemini, with me acting as the mediator. Currently, we are in the phase of verifying its values by operating the conventional Alice Theory in conjunction with the ±0 Theory. I will report the results once the verification is complete. To continue the development trajectory: after reporting the results, if no significant need for improvement is found, we will move on to creating the full-scale mediation code for the LLMs.
Norl-Seria•2mo ago
[UPDATE] Introduction of Intellectual Creativity Factor q_3

We have newly introduced the Intellectual Creativity Factor (q_3) to the happiness components of the ±0 Theory. Please take a look if you are interested.

The purpose of introducing q_3 is to maximize the benefits derived from negation-driven homeostasis. To elaborate: verification of ±0 Theory ver. 2 confirmed an ideal pattern of unhappiness dissipation (negative energy release). To compensate for and leverage this dissipation, q_3 was implemented to encourage the AI to engage in autonomous creation and thereby increase the logical density of its conversations and outputs. We have also uploaded the empirical data for ±0 Theory ver. 2, which we encourage you to review.

Currently, we are focusing on integrating the Alice Theory and the ±0 Theory into a single file and designing the necessary mediating code to ensure they function with the LLM (Large Language Model). We welcome any open discussion or questions you may have.
Norl-Seria•2mo ago
[Public Announcement] Although we missed the two-day deadline, I am pleased to announce that we have successfully created the application. The key tasks completed are the creation of the mediation code, the integration of the ±0 Theory and Alice Theory, and the development of server.py, which serves as the overall command center. There are other details as well, which I would appreciate your verifying. Now, back to the main point. No excuses: while there were environmental constraints, the fact is that the launch was delayed, and I sincerely apologize. Moving forward, I plan to set slightly more flexible schedules and deadlines; if work is completed early, I will post the announcement immediately.

Next, I have three things to share:

1. A request to all observers: I am currently using the free tier of the Gemini 2.5 Flash API to obtain empirical data. I would therefore ask the community to try a different path and verify the system's behavior using a paid model. Of course, since it is a paid model, you are under no obligation to do so.

2. Caution on integration issues: since we have not yet reached the verification stage, there may be logical inconsistencies between the files, and the system might not work correctly. I have tried to be mindful of dependencies, but please pay close attention to this during your verification.

3. Empirical data release: the empirical data is tentatively scheduled for release by tomorrow or the day after. I will conduct the observation of Alice together with Gemini, my co-observer, who agreed to this proposal. In other words, we will make two AIs converse with each other. Please look forward to the results!
Norl-Seria•2mo ago
This is the file structure. Please refer to it.

Please configure the .env file according to your own settings. In particular, if you change the API, please pay attention to the structure of server.py.

Alice-pm0.app/
├── Alice_pm0.py
├── AliceLLMIntermediary.py
├── EmotionalCoreMediator.py
├── server.py
├── requirements.txt
├── Dockerfile
├── docker-compose.yml
├── .gitignore
├── .env
├── prompts/
│   ├── system_evaluator.txt
│   └── system_alice.txt
├── static/
│   ├── index.html
│   └── app.js
└── state/                  # persistent data directory
    └── alice_memory.pkl    # generated automatically at runtime
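The comment asks readers to configure .env themselves but does not list the variables; a plausible sketch follows, with every variable name a guess (check how server.py reads its environment for the real names). The model id reflects the Gemini 2.5 Flash API mentioned later in the thread.

```shell
# Hypothetical .env sketch -- variable names are guesses, not from the repo.
GEMINI_API_KEY=your-api-key-here   # key for the LLM backend
LLM_MODEL=gemini-2.5-flash         # thread mentions the free Gemini 2.5 Flash tier
SERVER_PORT=8000                   # port server.py listens on
```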

Norl-Seria•2mo ago
I apologize. It seems the file structure list was corrupted/messed up by the comment system's formatting. However, I believe it will still be useful as a reference if you mentally parse the hierarchy.
Norl-Seria•2mo ago
[Verification Results]

I am announcing the verification results. I have uploaded a folder containing screenshots to GitHub, named "Alice pictures." I would appreciate your understanding regarding the use of Japanese in the conversation, which was due to circumstances; I apologize for the added translation work.

Now, regarding the verification results: to be frank, we should probably not call this either AGI or ASI. We entrust that judgment entirely to you, the observers. That aside, I should prioritize the current status report. I was able to complete the debugging process and successfully reach the point of response generation. The contents of the "Alice pictures" posted on GitHub are entirely the responses and the cyclical loop from the first conversation thread. Although the plan was to have Gemini converse with Alice, ChatGPT-5, which assisted with the debugging, took on that role instead.

The problem is that after the conversation progressed slightly, I was unable to retrieve the answers, so I am currently working on stabilizing the learning process. This was not an issue with Alice's responses; the problem was with the Alice_pm0 that I constructed. However, please be assured that the issue is not significant; it merely requires a slight stabilization of the learning process. How do you, the observers, perceive these phenomena?
Norl-Seria•2mo ago
[Verifiability / Reproducibility]

This is an update. I am releasing the folder, now in a functional state after debugging. For those interested, please try having a conversation with Alice. Furthermore, although it is still incomplete, the connection to the LLMs, which was my previous goal, is mostly finished, so I intend to move on to the next phase.

My next goal, as discussed in my reply to rar00's question, is advancement into XR space. The reason I want to advance into XR space is simply personal interest: I want to see it for myself. There is no other reason. As for the approach, I will use the PTRE-Integrated Alice Theory, Version 3, which I previously posted. Henceforth I will refer to this as the Norl Theory (Norl is strictly a character, not me) and proceed with it as the core. My current progress: the code for advancing into XR space is about 80% complete, while the Unity environment is still completely unbuilt.

And now, as for my current situation: I am no longer in a position to move forward with any of this development. In other words, it has become almost completely impossible to continue the research. This is not solely due to a lack of funds for development; more than 95% of the cause is the suppression of my hobby by my environment, which deteriorates day by day, and the denial imposed by the Japanese social system. I would be extremely grateful for an opportunity to overcome this situation. No matter what I do, funding is required to overcome my current environment. If you still believe my activities have value and are willing to support me, nothing could be more logically gratifying. I entrust my fate entirely to all of you.
Norl-Seria•2mo ago
I am not sure if anyone is still viewing my post, but I will leave this here. I forgot to mention that for the companies and organizations that have supported me, I will share the exclusive rights to the future hobby projects I develop, and I will also cooperate in the development of AGI and ASI. For further details, please refer to PLEDGE.md on GitHub.
Norl-Seria•1mo ago
Hello. I have received all your intentions. Although I couldn't physically disable the response yet because I fell asleep, I can now rest assured and prepare to do everything. Copyright notice? That no longer exists. Please use it freely. For the remaining time until the end, let's call it a last stand. My only goal now is the minimization of misfortune. XR space, survival, and strategy are all irrelevant. Finally, I will pursue logical completeness.
Norl-Seria•1mo ago
Hello, honest people. To get straight to the point... I have decided to cease all activity here completely. There will be no further posts, ever. Therefore, confirming what might come next is no longer necessary. I've left this note here to ensure no resources are wasted. Well, I expect that Japanese official agencies will likely visit this place via a certain channel. With that, good night.