frontpage.

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•5m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•7m ago•1 comments

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•11m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•13m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
1•tosh•19m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•23m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•23m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•27m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•28m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•30m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•32m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•35m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•36m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•38m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•39m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•41m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•44m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•49m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•51m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•54m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

MARM Protocol: Enhancing LLM Memory and Mitigating Hallucinations

https://github.com/Lyellr88/MARM-Protocol
1•CogniFlow•8mo ago

Comments

CogniFlow•8mo ago
If you’ve worked with large language models, you’ve probably faced two persistent issues: memory loss and hallucinations. These aren’t just minor inconveniences; they’re major obstacles to building reliable, long-term AI workflows.

MARM Protocol (Memory Accurate Response Mode) is a structured, prompt-based approach designed to address these challenges. It’s not a new model, but a protocol for interacting with existing LLMs to encourage more disciplined, consistent, and accurate behavior. MARM was developed based on feedback from over 150 advanced AI users.

The Problem: Why LLMs Forget and Fabricate:

Modern LLMs are powerful, but they have real limitations. They tend to lose context in longer conversations because they’re mostly stateless, and while they generate convincing text, that doesn’t always mean it’s accurate. This leads to hallucinations, which undermine trust and force users to constantly double-check results. For developers and power users, this means extra work to re-contextualize and verify information.

How MARM Protocol Brings Discipline to Your AI:

MARM Protocol helps by embedding a strict job description and self-management layer directly into the conversation flow. It’s not just about longer prompts; it’s about replacing default AI behaviors with a more reliable protocol.

At its core, MARM features a session memory kernel and accuracy guardrails. The session memory kernel tracks user inputs, intent, and history to maintain context. It organizes information into named sessions for easy recall and enforces honest memory reporting: if the AI can’t remember, it says so (e.g., "I don’t have that context, can you restate?"). It also makes it easy to resume, archive, or start fresh sessions. The accuracy guardrails perform internal self-checks to ensure responses are consistent with context and logic. They flag uncertainty when needed (e.g., “Confidence: Low – I’m unsure on [X]. Would you like me to retry or clarify?”), and provide reasoning trails for transparency and debugging (e.g., “My logic: [recall/synthesis]. Correct me if I am off.”).
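
To make the moving parts concrete, here is a rough Python sketch of the structure the protocol asks the model to maintain. This is purely illustrative: MARM is prompt-based and keeps this "data structure" in natural language inside the chat, and the class and field names below are hypothetical, not taken from the repository.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only. MARM itself runs entirely inside the conversation;
# these names (Session, LogEntry, GuardedResponse) are hypothetical.

@dataclass
class LogEntry:
    when: date
    topic: str
    summary: str  # user input, intent, and outcome, as MARM logs them

@dataclass
class Session:
    name: str                                        # named session for easy recall
    entries: list[LogEntry] = field(default_factory=list)

    def log(self, when: date, topic: str, summary: str) -> None:
        self.entries.append(LogEntry(when, topic, summary))

    def recall(self, topic: str) -> list[LogEntry]:
        # Honest memory reporting: only return what was actually logged,
        # never invent history that isn't there.
        return [e for e in self.entries if e.topic == topic]

@dataclass
class GuardedResponse:
    text: str
    confidence: str       # e.g. "Low – I’m unsure on [X]."
    reasoning_trail: str  # e.g. "My logic: [recall/synthesis]."
```

The guardrails correspond to the extra fields on each response: every answer carries a confidence flag and a reasoning trail, so the user can see when the model is synthesizing rather than recalling.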

Practical Impact for Developers and Power Users:

By using MARM, you can expect better continuity across complex, multi-session projects, fewer hallucinations, and an AI that communicates its limitations transparently. This makes it a more trustworthy tool for critical tasks.

Getting Started: Activate MARM in Seconds:

Getting started is simple: copy the entire initiation prompt from the MARM GitHub repository and paste it as the first message in a new AI chat. The AI will confirm activation (e.g., "MARM activated. Ready to log context."), and you can begin working under the protocol.
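
For example, here is a minimal activation sketch assuming the OpenAI Python SDK; MARM is model-agnostic, so any chat interface works the same way. The file path and model name are placeholders, and the prompt text itself must be copied from the MARM repository.

```python
# Minimal activation sketch, assuming the OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical local file containing the initiation prompt copied from the repo.
marm_prompt = open("marm_initiation_prompt.txt").read()

# The initiation prompt must be the first message of a fresh chat.
chat = [{"role": "user", "content": marm_prompt}]
reply = client.chat.completions.create(model="gpt-4o", messages=chat)
print(reply.choices[0].message.content)  # expect a confirmation like "MARM activated."

# Keep appending to the same `chat` list so the protocol governs the whole session.
chat.append({"role": "assistant", "content": reply.choices[0].message.content})
chat.append({"role": "user", "content": "Log session: project-alpha"})
```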

Limitations and Nuances:

Keep in mind, MARM is a prompt-based protocol, not a change to the underlying LLM architecture. It can’t execute code or access live external data, and its effectiveness is limited to the current chat session. For best results, engage consistently within sessions. These are limitations of current LLMs rather than of MARM; the protocol provides a framework for getting the most out of the model within those boundaries.

Contribute and Collaborate:

MARM Protocol is evolving, and feedback or contributions are welcome. See the repository for details:

https://github.com/Lyellr88/MARM-Protocol