
Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•2m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
1•pastage•2m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•3m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
1•birdculture•8m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•14m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•15m ago•1 comments

Slop News - HN front page hallucinated as 100% AI SLOP

https://slop-news.pages.dev/slop-news
1•keepamovin•20m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•22m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
2•tosh•28m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
3•oxxoxoxooo•32m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•32m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•36m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•37m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•39m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•41m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•44m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•45m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•47m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•48m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•50m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•53m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•58m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•1h ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Everyone's trying vectors and graphs for AI memory. We went back to SQL

136•Arindam1729•4mo ago
When we first started building with LLMs, the gap was obvious: they could reason well in the moment, but forgot everything as soon as the conversation moved on.

You could tell an agent, “I don’t like coffee,” and three steps later it would suggest espresso again. It wasn’t broken logic, it was missing memory.

Over the past few years, people have tried a bunch of ways to fix it:

1. Prompt stuffing / fine-tuning – Keep prepending history. Works for short chats, but tokens and cost explode fast.

2. Vector databases (RAG) – Store embeddings in Pinecone/Weaviate. Recall is semantic, but retrieval is noisy and loses structure.

3. Graph databases – Build entity-relationship graphs. Great for reasoning, but hard to scale and maintain.

4. Hybrid systems – Mix vectors, graphs, key-value, and relational DBs. Flexible but complex.

And then there’s the twist: Relational databases! Yes, the tech that’s been running banks and social media for decades is looking like one of the most practical ways to give AI persistent memory.

Instead of exotic stores, you can:

- Keep short-term vs long-term memory in SQL tables

- Store entities, rules, and preferences as structured records

- Promote important facts into permanent memory

- Use joins and indexes for retrieval
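
A minimal sketch of what that can look like in plain SQL (SQLite-flavoured and purely illustrative; these are not Memori's actual tables):

    -- Illustrative only: one way to split short-term and long-term memory.
    CREATE TABLE short_term_memory (
        id          INTEGER PRIMARY KEY,
        user_id     TEXT NOT NULL,
        content     TEXT NOT NULL,       -- raw conversational snippet
        category    TEXT,                -- e.g. 'fact', 'preference', 'rule'
        importance  REAL DEFAULT 0.0,
        created_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );

    CREATE TABLE long_term_memory (
        id          INTEGER PRIMARY KEY,
        user_id     TEXT NOT NULL,
        content     TEXT NOT NULL,
        category    TEXT,
        importance  REAL,
        promoted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );

    CREATE INDEX idx_ltm_user_category ON long_term_memory (user_id, category);

    -- "Promote important facts into permanent memory":
    INSERT INTO long_term_memory (user_id, content, category, importance)
    SELECT user_id, content, category, importance
    FROM short_term_memory
    WHERE importance >= 0.8;

    -- Retrieval before the next turn: the user's preferences and rules.
    SELECT content
    FROM long_term_memory
    WHERE user_id = 'user_123'
      AND category IN ('preference', 'rule')
    ORDER BY importance DESC
    LIMIT 10;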

This is the approach we’ve been working on at Gibson. We built an open-source project called Memori (https://memori.gibsonai.com/), a multi-agent memory engine that gives your AI agents human-like memory.

It’s kind of ironic: after all the hype around vectors and graphs, one of the best answers to AI memory might be the tech we’ve trusted for 50+ years.

I would love to know your thoughts about our approach!

Comments

gangtao•4mo ago
Who would've thought that 50 years of 'SELECT * FROM reality' might beat the latest semantic embedding wizardry?
mynti•4mo ago
How does Memori choose what part of past conversations is relevant to the current conversation? Is there some maximum amount of memory it can feasibly handle before it will spam the context with irrelevant "memories"?
datadrivenangel•4mo ago
Looking at the code, it looks like they do about 5 'memories' that get retrieved by a database query designed by an LLM with this fella:

SYSTEM_PROMPT = """You are a Memory Search Agent responsible for understanding user queries and planning effective memory retrieval strategies.

Your primary functions:

1. *Analyze Query Intent*: Understand what the user is actually looking for
2. *Extract Search Parameters*: Identify key entities, topics, and concepts
3. *Plan Search Strategy*: Recommend the best approach to find relevant memories
4. *Filter Recommendations*: Suggest appropriate filters for category, importance, etc.

*MEMORY CATEGORIES AVAILABLE:*
- *fact*: Factual information, definitions, technical details, specific data points
- *preference*: User preferences, likes/dislikes, settings, personal choices, opinions
- *skill*: Skills, abilities, competencies, learning progress, expertise levels
- *context*: Project context, work environment, current situations, background info
- *rule*: Rules, policies, procedures, guidelines, constraints

*SEARCH STRATEGIES:*
- *keyword_search*: Direct keyword/phrase matching in content
- *entity_search*: Search by specific entities (people, technologies, topics)
- *category_filter*: Filter by memory categories
- *importance_filter*: Filter by importance levels
- *temporal_filter*: Search within specific time ranges
- *semantic_search*: Conceptual/meaning-based search

*QUERY INTERPRETATION GUIDELINES:*
- "What did I learn about X?" → Focus on facts and skills related to X
- "My preferences for Y" → Focus on preference category
- "Rules about Z" → Focus on rule category
- "Recent work on A" → Temporal filter + context/skill categories
- "Important information about B" → Importance filter + keyword search

Be strategic and comprehensive in your search planning."""

thedevindevops•4mo ago
How does what you’ve described solve the coffee/espresso problem? You can’t query SQL such that a search for 'coffee' returns records like 'espresso', can you?
brudgers•4mo ago
Wouldn’t a beverage LLM already “know” espresso is coffee?
muzani•4mo ago
Yup, that's exactly what parent comment is saying.

Let's say your beverage LLM is there to recommend drinks. At some point you said "I hate espresso" or even something like "I don't take caffeine" to the LLM.

Before recommending coffee, Beverage LLM might do a vector search for "coffee" and it would match up to these phrases. Then the LLM processes the message history to figure out whether this person likes or dislikes coffee.

But searching SQL for `LIKE '%coffee%'` won't match with any of these.

brudgers•4mo ago
I think the problem being addressed is

   A. Last month user fd8120113 said “I don’t like coffee”
   B. Today they are back for another beverage recommendation
SQL is the place to store the relevant fact about user fd8120113 so that you can retrieve it into the LLM prompt to make a new beverage recommendation, today.

It’s addressing the “how many fucking times do I fucking need to tell you I don’t like fucking coffee” problem, not the word salad problem.

The ggp comment is strawmanning.
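
In other words, something like this (a sketch of that flow; the user_preferences table is made up for illustration, not anything from Memori):

    -- Last month: the stated preference gets written down once.
    CREATE TABLE IF NOT EXISTS user_preferences (
        user_id   TEXT NOT NULL,
        topic     TEXT NOT NULL,
        stance    TEXT NOT NULL,     -- 'likes' / 'dislikes'
        stated_at DATE,
        PRIMARY KEY (user_id, topic)
    );

    INSERT INTO user_preferences (user_id, topic, stance, stated_at)
    VALUES ('fd8120113', 'coffee', 'dislikes', '2025-01-05');

    -- Today: pull the facts for this user and paste them into the prompt
    -- before asking the LLM for a new beverage recommendation.
    SELECT topic, stance
    FROM user_preferences
    WHERE user_id = 'fd8120113';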

shepardrtc•4mo ago
Right but if the user hates espresso but loves black coffee, how do you properly store that in SQL?

"I hate espresso" "I love coffee"

What if the SQL query only retrieves the first one?

brudgers•4mo ago
Good queries are hard. Database design is hard. System architecture is hard.

My comment described the problem.

The solution is left as an exercise for the reader.

Keep in mind that people change their minds, misspeak, and use words in peculiar ways.

9rx•4mo ago
If an LLM understands that coffee and espresso are both relevant, like the earlier comment suggests, why wouldn’t it understand that it should search for something like `foo LIKE '%coffee%' OR foo LIKE '%espresso%'`?

In fact, this is what ChatGPT came up with:

   SELECT *
   FROM documents
   WHERE text ILIKE '%coffee%'
      OR text ILIKE '%espresso%'
      OR text ILIKE '%latte%'
      OR text ILIKE '%cappuccino%'
      OR text ILIKE '%americano%'
      OR text ILIKE '%mocha%'
      OR text ILIKE '%macchiato%';
(I gave it no direction as to the structure of the DB, but it shouldn't be terribly difficult to adapt to your exact schema)
jimbokun•4mo ago
You are slowly approaching the vector solution.

There are an unlimited number of items to add to your “like” clauses. Vector search allows you to efficiently query for all of them at once.
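
For reference, the pgvector version of that is a single nearest-neighbour query rather than a growing list of ILIKE terms (a sketch; assumes Postgres with the pgvector extension and embeddings computed elsewhere, not anything from the thread):

    -- Requires the pgvector extension.
    CREATE EXTENSION IF NOT EXISTS vector;

    CREATE TABLE IF NOT EXISTS documents (
        id        SERIAL PRIMARY KEY,
        text      TEXT NOT NULL,
        embedding VECTOR(1536)       -- filled in by an embedding model elsewhere
    );

    -- $1 is the embedding of the query string (e.g. "coffee drinks");
    -- <=> is pgvector's cosine-distance operator.
    SELECT id, text
    FROM documents
    ORDER BY embedding <=> $1
    LIMIT 5;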

9rx•4mo ago
The handwavy assertion was that relational database solutions[1] work better in practice.

[1] Despite also somehow supporting MongoDB...

mr_toad•4mo ago
Implementations that use vector databases do not use LLMs to generate queries against those databases. That would be incredibly expensive and slow (and yes, there is a certain irony there).

The main advantages of a vector lookup are built-in fuzzy matching and the potential to keep a large amount of documentation in memory for low latency. I can’t see an RDBMS being ideal for either. LLMs are slow enough already; adding a slow document lookup isn’t going to help.

9rx•4mo ago
The main disadvantage of vector lookup, allegedly, is that it doesn't work as well in practice. Did you, uh, forget to read the thread?
cluckindan•4mo ago
What does ”doesn’t work as well” mean here? From my experience, vector lookup via HNSW is fast and accurate enough for practical purposes.
muzani•4mo ago
An actual use case I had for vector DBs was when users were using "credit card", "kredit kad", "kad kredit", "kartu" interchangeably.

If you're matching ("%card%" OR "%kad%"), you'll also match with things like virtual card, debit card, kadar (rates), akad (contract). The more languages you support, the more false hits you get.

Not to say SQL is wrong, but 30-year-old technology works with 30-year-old interfaces. It’s not that people didn’t imagine this back then; it’s just that you end up with interfaces similar to dropdown filters and vending machines. If you’re giving the user the flexibility of an LLM, you have to support the full range of inputs.

9rx•4mo ago
> The more languages you support, the more false hits you get.

Certainly you're at the mercy of what the LLM constructs. But if it understands that, say, "debit card" isn't applicable to "card", it can add a negation filter. Like has already been said, you're basically just reinventing a vector database in a 'relational' (that somehow includes MongoDB...) approach anyway.

But what is significant is the claim that it works better. That is a bold claim that deserves a closer look, but I'm not sure how you've added to that closer look by arbitrarily sharing your experience? I guess I've missed what you're trying to say. Everyone and their brother knows how a vector database works by this point.

cluckindan•4mo ago
You could ask an LLM to provide categorizations for nouns and verbs, and store those. For ”I don’t like cappuccino”, you’d get back ”self”, ”human”, etc. for ”I”; ”negation” etc. for ”don’t”; ”preference”, ”trait” etc. for ”like”; ”coffee”, ”hot”, ”drink”, ”beverage” etc. for ”cappuccino”.

It would become unwieldy real fast, though. Easier to get an embedding for the sentence.

esafak•4mo ago
The negation part is a query understanding problem. https://en.wikipedia.org/wiki/Query_understanding
sdesol•4mo ago
I haven't looked at the code, but it might do what I do with my chat app which is talked about at https://github.com/gitsense/chat/blob/main/packages/chat/wid...

The basic idea is, you don't search for a single term but rather you search for many. Depending on the instructions provided in the "Query Construction" stage, you may end up with a very high-level search term like beverage, or you may end up with terms like 'hot-drinks', 'cold-drinks', etc.

Once you have the query, you can do a "Broad Search" which returns an overview of the message and from there the LLM can determine which messages it should analyze further if required.

Edit.

I should add, this search strategy will only work well if you have a post-message process. For example, after every message save/update, you have the LLM generate an overview. These are my instructions for my tiny overview https://github.com/gitsense/chat/blob/main/data/analyze/tiny... that is focused on generating the purpose and keywords that can be used to help the LLM define search terms.
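
A rough sketch of what that "Broad Search" step could look like in SQL (the table and column names are guesses for illustration, not gitsense's actual schema):

    -- Per-message overviews generated by the LLM after each save/update.
    CREATE TABLE IF NOT EXISTS message_overviews (
        message_id INTEGER PRIMARY KEY,
        purpose    TEXT,              -- one-line summary written by the LLM
        keywords   TEXT               -- e.g. 'beverage hot-drinks cold-drinks'
    );

    -- Broad search: match any of the LLM-constructed terms and return only the
    -- overviews, so the LLM can decide which full messages to analyze further.
    SELECT message_id, purpose
    FROM message_overviews
    WHERE keywords LIKE '%beverage%'
       OR keywords LIKE '%hot-drinks%'
       OR keywords LIKE '%cold-drinks%';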

adastra22•4mo ago
That’s going to be incredibly fragile. You could fix it by giving the query term a bunch of different scores, e.g. its caffeine-ness, bitterness, etc. and then doing a likeness search across these many dimensions. That would be much less fragile.

And now you’ve reinvented vector embeddings.
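
In SQL, that "likeness search across these many dimensions" is just a distance computation over score columns, i.e. a hand-rolled three-dimensional embedding (purely illustrative):

    CREATE TABLE IF NOT EXISTS drinks (
        name       TEXT PRIMARY KEY,
        caffeine   REAL,    -- hand-assigned "caffeine-ness", 0..1
        bitterness REAL,
        sweetness  REAL
    );

    -- Nearest drinks to a query profile (high caffeine, medium bitterness, low sweetness):
    SELECT name,
           ( (caffeine   - 0.9) * (caffeine   - 0.9)
           + (bitterness - 0.5) * (bitterness - 0.5)
           + (sweetness  - 0.1) * (sweetness  - 0.1) ) AS distance
    FROM drinks
    ORDER BY distance
    LIMIT 3;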

sdesol•4mo ago
You could instruct the LLM to classify messages with high-level tags; for coffee, drinks, etc., always include a 'beverage' tag.

Given how fast inference has become and the context window sizes currently supported by most SOTA models, I think summarizing and having the LLM decide what is relevant is not that fragile at all for most use cases. This is what I do with my analyzers, which I talk about at https://github.com/gitsense/chat/blob/main/packages/chat/wid...

adastra22•4mo ago
Inference is not fast by any metric. It is many, MANY orders of magnitude slower than alternatives.
sdesol•4mo ago
Honestly, Gemini Flash Lite and models on Cerebras are extremely fast. I know what you are saying. If the goal is to get a lot of results which may or may not be relevant, then yes, it is an order of magnitude slower.

If you take into consideration the post-analysis process, which is what inference is trying to solve, is it an order of magnitude slower?

adastra22•4mo ago
More like 6-8 orders of magnitude slower. That’s a very nontrivial difference in performance!
sdesol•4mo ago
How are you quantifying the speed at which results are reviewed?
adastra22•4mo ago
It’s not speed, but cost to compute.
9rx•4mo ago
It has become fast enough that another call isn't going to overwhelm your pipeline. If you needed this kind of functionality for performance computing perhaps it wouldn't be feasible, but it is being used to feed back into an LLM. The user will never notice.
Noumenon72•4mo ago
Your readmes did a great job at answering my question "why is this file called 1.md? What calls this?" when I searched for "1.md". (The answer is 1=user, 2=assistant, and it allows adding other analyzers with the same structure.)
sdesol•4mo ago
I'm guessing you are referring to https://github.com/gitsense/chat/tree/main/data/analyze or https://github.com/gitsense/chat/tree/main/packages/chat/wid...

The number is actually the order in the chat so 1.md would be the first message, 2.md would be the second and so forth.

If you go to https://chat.gitsense.com and click on the "Load Personal Help Guide", you can see how it is used. Since I want you to be able to chat with the document, I will create a new chat tree and use the directory structure and the 1,2,3... markdown files to determine message order.

Noumenon72•4mo ago
https://github.com/gitsense/chat/blob/129210302ec06985bbd103... also says "put a 1.md here and the modular plugin structure will know to call it".
Xmd5a•4mo ago
>It wasn’t broken logic, it was missing memory.

sigh

cdaringe•4mo ago
Go on.
spacebacon•4mo ago
SELECT 'Hacked!' AS result FROM Gibson_AI WHERE memory='SQL' AND NOT EXISTS ( SELECT 1 FROM vector_graph_hype WHERE recall > ( SELECT speed FROM relational_magic WHERE tech='50_years_old' ) )
muzani•4mo ago
Any reason I should pick it over Supabase? https://supabase.com/docs/guides/ai

They have pgvector, which has practically all the benefits of postgres (ACID, etc, which may not be in many other vector DBs). If I wanted a keyword search, it works well. If I wanted vector search, that's there too.

I'm not keen on having another layer on top especially when it takes about 15 mins to vibe code a database query - there's all kinds of problems with abstracted layers and it's not a particularly complex bit of code.

koakuma-chan•4mo ago
> multi-agent memory engine that gives your AI agents human-like memory

What does this do exactly?

datadrivenangel•4mo ago
You gotta refactor the code around the mongodb integration. It's basically duplicating your data access paths.
morkalork•4mo ago
IMHO all these approaches are hacks on top of existing systems. The real solution is going to be when foundational models are given a mechanism that makes them capable of storing and retrieving their own internal representation of concepts/ideas.
mr_toad•4mo ago
Neural networks already have their own internal knowledge representations. They just aren’t capable of learning new knowledge (without expensive re-training or fine-tuning).

Inference is cheap, training is expensive. It’s a really difficult problem, but one that will probably need to be solved to approach true intelligence.

morkalork•4mo ago
In the way that they're trained to complete tasks from users, can they be trained to complete tasks that require usage of a memory storage and retrieval mechanism?
dotancohen•4mo ago
Where does fine-tuning sit in this? How easily are existing models able to be fine-tuned for new use cases, such as specifically legal or medical texts?
cpursley•4mo ago
Postgres Is Enough:

https://news.ycombinator.com/item?id=39273954

https://gist.github.com/cpursley/c8fb81fe8a7e5df038158bdfe0f...

refset•4mo ago
> pg_memories revolutionized our AI's ability to remember things. Before, we were using... well, also a database, but this one has better marketing.

https://pg-memories.netlify.app/

ratg13•4mo ago
everything on this website is broken.

the video demo goes to postgressql.org, all of the purchase buttons go to postgres, the get access button doesn't work, you can't schedule a demo or even contact their sales team.

refset•4mo ago
That's the joke (!)
brainless•4mo ago
I tried a graph based approach in my previous product (1). I am on a new product now and I came back to SQLite. Initially it was because I just wanted a simple DB to enable creating cross-platform desktop apps.

I realized LLMs are really good at using sqlite3 and SQL statements. So in my current product (2) I am planning to keep all project data in SQLite. I am creating a self-hosted AI coding platform and I debated where to keep project state for LLMs. I thought of JSON/NDJSON files (3) but I am gravitating toward SQLite and figuring out the models at the moment (4).

  1. Previous product with a graph data approach https://github.com/pixlie/PixlieAI
  2. Current product with SQLite for its own and other projects data: https://github.com/brainless/nocodo
  3. Github issue on JSON/NDJSON based data for project state for LLMs: https://github.com/brainless/nocodo/issues/114
  4. Github issue on expanding the SQLite approach: https://github.com/brainless/nocodo/issues/141
Still work in progress, but I am heading toward SQLite for LLM state.
eyeris•4mo ago
What sort of issues did you run into with a graph based approach?
brainless•4mo ago
My implementation was custom, on top of RocksDB. I found it hard to get an LLM to traverse it, while understanding an SQLite schema or making queries to find information is very easy for LLMs. In most cases the schema does not have to be inferred, since it is already available, and this makes the job easier. The graph approach may work well for many use cases, but if we want to store structured information for LLMs then SQLite is really good.
matchagaucho•4mo ago
As context window sizes increase and token prices go down, it makes more sense to inject dynamic memories into context (and use RAG/vector stores for knowledge retrieval).
cmrdporcupine•4mo ago
The relational model is built on first order / predicate logic. While SQL itself is kind of a dubious and low grade implementation of it, it's not a surprise to me that it would be useful for applications of reasoning and memory about facts generally.

I think a Datalog type dialect would be more appropriate, myself. Maybe something like what RelationalAI has implemented.

alpinesol•4mo ago
Using an obscure derivative of an obscure academic language (prolog) is never appropriate outside of a university.
w10-1•4mo ago
> Datalog type dialect would be more appropriate

I assume because datalog is more about composing queries from assertions/constraints on the data?

Nicely, queries can be recursive without having to create views or CTE's (common table expressions).

Often the data for datalog is modeled as fact databases (i.e., different tables are decomposed into a common table of key+record+value).

So I could see training an LLM to recognize relevant entity features and constraints to feed back into the memory query. Less obviously, data analytics might feed into prevalence/relevance at inference time.

So agreed: It might be better as an experiment to start with a simple data model and teachable (but powerful) querying than the full generality of SQL and relational data.

Is that what RelationalAI has done? Their marketecture blurbs specifically mention graph data (no), rule-based inference (yes? backwards or forwards?)

As an aside, their rules description defies deconstruction:

    bringing knowledge and semantics closer to your data, 
    reduce your code footprint by 10x, 
    improve accuracy, and 
    drive consistency and reusability across your organizations 
    with common business models understood by all
So: rules built on ontologies?
cmrdporcupine•4mo ago
RelationalAI effectively has a kind of datalog as a commercial product, and it runs inside Snowflake (something they implemented since I worked there). It's marketed as a "graph" database, but what they mean by that is that they have modeled graphs as binary relational data, really. It's a purely relational system, with a friendly query language ("Rel") which is vaguely Datalogish, but a bit more flexible.

The key thing with them is it's designed for querying very large cloud backed datasets, high volumes of connected data. So maybe it's not as relevant here as I originally suggested.

Re: marketing ... much of their marketing has shifted over the last two years to emphasizing the fact that it's a plugin thing for Snowflake, which wasn't their original MO.

(There’s a CMU DB talk they did some years ago that I thought was pretty brilliant and made me want to work there)

My proposal about a datalog (or similar more high level declarative relational-model system) being useful here has to do with how it shifts the focus to logical propositions/rules and handles transitive joins etc naturally. It's a place an LLM could shove "facts" and "rules" it finds along the way, and then the system could join to find relationships.

You can do this in SQL these days, but it isn't as natural or intuitive.
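
For comparison, that transitive facts-and-rules join in plain SQL takes a recursive CTE (illustrative schema, not anything RelationalAI ships); a Datalog rule would say the same thing more directly:

    -- A tiny fact table an LLM could append to as it goes.
    CREATE TABLE IF NOT EXISTS facts (
        subject   TEXT NOT NULL,
        predicate TEXT NOT NULL,
        object    TEXT NOT NULL
    );

    INSERT INTO facts VALUES
        ('espresso', 'is_a',     'coffee'),
        ('coffee',   'is_a',     'caffeinated_drink'),
        ('user_42',  'dislikes', 'coffee');

    -- Everything 'espresso' transitively is: the join being described above.
    WITH RECURSIVE ancestors(thing) AS (
        SELECT object FROM facts WHERE subject = 'espresso' AND predicate = 'is_a'
        UNION
        SELECT f.object
        FROM facts f
        JOIN ancestors a ON f.subject = a.thing AND f.predicate = 'is_a'
    )
    SELECT thing FROM ancestors;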

ianbicking•4mo ago
This looks like RAG...? That's fine, RAG is a very broad approach and there's lots to be done with it. But it's not distinct from RAG.

Searching by embedding is just a way to construct queries, like ILIKE or tsvector. It works pretty nicely, but it's not distinct from SQL given pg_vector/etc.

The more distinctive feature here seems to be some kind of proxy (or monkeypatching?) – is it rewriting prompts on the way out to add memories to the prompt, and creating memories from the incoming responses? That's clever (but I'd never want to deploy that).

From another comment it seems like you are doing an LLM-driven query phase. That's a valid approach in RAG. Maybe these all work together well, but SQL seems like an aside. And it's already how lots of normal RAG or memory systems are built, it doesn't seem particularly unique...?
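
(For anyone who hasn't used it: tsvector is Postgres's built-in full-text search. A minimal example of that flavour of query construction, assuming a simple documents table:)

    CREATE TABLE IF NOT EXISTS documents (
        id   SERIAL PRIMARY KEY,
        text TEXT NOT NULL
    );

    -- Full-text match: another query-construction mechanism alongside
    -- ILIKE and pgvector similarity.
    SELECT id, text
    FROM documents
    WHERE to_tsvector('english', text) @@ plainto_tsquery('english', 'coffee preferences')
    LIMIT 5;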

mobilemidget•4mo ago
RAG, or Retrieval Augmented Generation, is an AI technique that improves large language models (LLMs) by connecting them to external knowledge bases to retrieve relevant, factual information before generating a response. This approach reduces LLM "hallucinations," provides more accurate and up-to-date answers, and allows for responses grounded in specialized or frequently updated data, increasing trust and relevance.

I was unaware what RAG referred to; perhaps others were too.

codersfocus•4mo ago
So HN is upvoting AI written ad slop now?
paool•4mo ago
Saw this same "product" astroturfed on Reddit.
vivzkestrel•4mo ago
How does it compare to pgvector?
Charon77•4mo ago
Have you considered using prolog as a database instead of mysql?

It gives you good ways to store relations, iterate over weird combinations, and fill in the blanks.

zvr•4mo ago
I think Datalog would be even more suitable than Prolog for this use case.
gdestus•4mo ago
This is exactly the lesson we learned as well but didn’t want to publish. Relational data stores are desperately underrated for LLM retrieval, especially concerning things like personality and memory.
slake•4mo ago
Sometimes I think we need to expose what 'memories', or their semantic representations, have been stored in a 'memory store' so that humans can review and verify them over time. This will help the LLM 'forget' things that the humans using it don’t really think are that relevant.