frontpage.

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
1•goranmoomin•2m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

1•throwaw12•3m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•4m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•7m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•9m ago•3 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•10m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
1•1vuio0pswjnm7•12m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•14m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•16m ago•1 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•19m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•24m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•25m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•29m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•41m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•43m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•43m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•56m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•59m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
4•throwaw12•1h ago•2 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1h ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•1h ago•0 comments

MARM Protocol: Enhancing LLM Memory and Mitigating Hallucinations

https://github.com/Lyellr88/MARM-Protocol
1•CogniFlow•8mo ago

Comments

CogniFlow•8mo ago
If you’ve worked with large language models, you’ve probably faced two persistent issues: memory loss and hallucinations. These aren’t just minor inconveniences; they’re major obstacles to building reliable long-term AI workflows.

MARM Protocol (Memory Accurate Response Mode) is a structured, prompt-based approach designed to address these challenges. It’s not a new model, but a protocol for interacting with existing LLMs to encourage more disciplined, consistent, and accurate behavior. MARM was developed based on feedback from over 150 advanced AI users.

The Problem: Why LLMs Forget and Fabricate:

Modern LLMs are powerful, but they have real limitations. They tend to lose context in longer conversations because they’re mostly stateless, and while they generate convincing text, that doesn’t always mean it’s accurate. This leads to hallucinations, which undermine trust and force users to constantly double-check results. For developers and power users, this means extra work to re-contextualize and verify information.

How MARM Protocol Brings Discipline to Your AI:

MARM Protocol helps by embedding a strict job description and self-management layer directly into the conversation flow. It’s not just about longer prompts; it’s about replacing default AI behaviors with a more reliable protocol.

At its core, MARM has two components: a session memory kernel and accuracy guardrails.

The session memory kernel tracks user inputs, intent, and history to maintain context. It organizes information into named sessions for easy recall and enforces honest memory reporting: if the AI can’t remember something, it says so (e.g., "I don’t have that context, can you restate?"). Sessions can be resumed, archived, or started fresh.

The accuracy guardrails perform internal self-checks to keep responses consistent with the established context and logic. They flag uncertainty when needed (e.g., “Confidence: Low – I’m unsure on [X]. Would you like me to retry or clarify?”) and provide reasoning trails for transparency and debugging (e.g., “My logic: [recall/synthesis]. Correct me if I am off.”).
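
To make the session memory kernel idea concrete, here is a rough client-side sketch in Python: named sessions, an append-only log, and an explicit "no context" fallback instead of guessing. The names (SessionKernel, log_entry, build_context) and the log-line format are invented for this illustration; the actual MARM commands and syntax are defined in the repository.

    # Illustrative sketch only: MARM itself is a copy-paste prompt protocol, not a library.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class SessionKernel:
        """Client-side log mirroring MARM's named-session, honest-recall idea."""
        name: str
        entries: list[str] = field(default_factory=list)

        def log_entry(self, topic: str, summary: str) -> None:
            # One possible log-line format; the real MARM syntax lives in the repo.
            self.entries.append(f"{date.today()} | {topic} | {summary}")

        def build_context(self) -> str:
            # Prepend the logged history to the next prompt so the model grounds
            # its answer in the recorded session instead of inventing details.
            if not self.entries:
                return ("No session log yet. If asked to recall, say so "
                        "instead of fabricating an answer.")
            return f"Session log for {self.name}:\n" + "\n".join(self.entries)

    session = SessionKernel(name="api-migration")
    session.log_entry("auth", "Keep the OAuth flow; rotate client secrets quarterly")
    prompt = session.build_context() + "\n\nQuestion: what did we decide about auth?"
    print(prompt)  # paste this (or send it via an API) as your next message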

Practical Impact for Developers and Power Users:

By using MARM, you can expect better continuity across complex, multi-session projects, fewer hallucinations, and an AI that communicates its limitations transparently. This makes it a more trustworthy tool for critical tasks.

Getting Started: Activate MARM in Seconds:

Getting started is simple: copy the entire initiation prompt from the MARM GitHub repository and paste it as the first message in a new AI chat. The AI will confirm activation (e.g., "MARM activated. Ready to log context."), and you can begin working under the protocol.
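
If you drive a model through an API instead of a chat UI, activation is the same idea: send the initiation prompt as the first message and keep it in the running context. Below is a hypothetical sketch using the openai Python client; the file name, model choice, and follow-up message are placeholders, and the real prompt text comes from the repository.

    # Hypothetical sketch: MARM is normally pasted into a chat UI as the first message.
    from openai import OpenAI  # assumes the official openai package (v1+)

    # Placeholder: save the initiation prompt from the MARM repo into this file.
    MARM_INITIATION_PROMPT = open("marm_prompt.txt").read()

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    messages = [{"role": "user", "content": MARM_INITIATION_PROMPT}]

    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(reply.choices[0].message.content)  # expect an activation acknowledgement

    # Keep appending turns so the protocol text stays in the model's context window.
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": "Start a session named api-migration."})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)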

Limitations and Nuances:

Keep in mind, MARM is a prompt-based protocol, not a change to the underlying LLM architecture. It can’t execute code or access live external data, and its effectiveness is limited to the current chat session. For best results, engage consistently within sessions. While these are current LLM limitations, MARM provides a robust framework to manage and maximize capabilities within those boundaries.

Contribute and Collaborate:

MARM Protocol is evolving, and feedback or contributions are welcome. See the repository for details:

https://github.com/Lyellr88/MARM-Protocol