
Show HN: Ferrite – Markdown editor in Rust with native Mermaid diagram rendering

https://github.com/OlaProeis/Ferrite
120•OlaProis•6h ago•44 comments

Show HN: I used Claude Code to discover connections between 100 books

https://trails.pieterma.es/
311•pmaze•14h ago•87 comments

Show HN: VAM Seek – 2D video navigation grid, 15KB, zero server load

https://github.com/unhaya/vam-seek
19•haasiy•4h ago•0 comments

Show HN: Librario, a book metadata API that aggregates G Books, ISBNDB, and more

92•jamesponddotco•8h ago•29 comments

Show HN: Play poker with LLMs, or watch them play against each other

https://llmholdem.com/
103•projectyang•12h ago•48 comments

Show HN: mcpc – Universal command-line client for Model Context Protocol (MCP)

https://github.com/apify/mcp-cli
32•jancurn•4d ago•3 comments

Show HN: GlyphLang – An AI-first programming language

25•goose0004•8h ago•16 comments

Show HN: WinBorg, a beautiful alternative to Vorta for BorgBackup

https://github.com/robotnikz/WinBorg
2•robotnikz•2h ago•0 comments

Show HN: Umaro – An interactive music theory suite for guitarists

https://www.umaro.app/
7•SnowingXIV•4h ago•1 comment

Show HN: Marten – Elegant Go web framework (nothing in the way)

https://github.com/gomarten/marten
10•jackprescott•10h ago•5 comments

Show HN: Hashing Go Functions Using SSA and Scalar Evolution

https://github.com/BlackVectorOps/semantic_firewall
2•BlackVectorOps•5h ago•1 comment

Show HN: I made a memory game to teach you to play piano by ear

https://lend-me-your-ears.specr.net
530•vunderba•1d ago•167 comments

Show HN: HAPI - Vibe Coding Anytime, Anywhere

https://github.com/tiann/hapi
2•weishu•5h ago•0 comments

Show HN: Various shape regularization algorithms

https://github.com/nickponline/shreg
72•nickponline•2d ago•5 comments

Show HN: I made 25 tech predictions and mass-published them

2•JoseOSAF•7h ago•2 comments

Show HN: Executable Markdown files with Unix pipes

120•jedwhite•2d ago•98 comments

Show HN: Yuanzai World – LLM RPGs with branching world-lines

https://www.yuanzai.world/
29•yuanzaiworld•19h ago•5 comments

Show HN: Rocket Launch and Orbit Simulator

https://www.donutthejedi.com/
159•donutthejedi•1d ago•37 comments

Show HN: A website that auctions itself daily

https://www.thedailyauction.com/
41•nsomani•2d ago•18 comments

Show HN: EuConform – Offline-first EU AI Act compliance tool (open source)

https://github.com/Hiepler/EuConform
70•hiepler•1d ago•44 comments

Show HN: Symfreq – Analyse symbol frequencies in code (Rust)

https://github.com/vaskort/symfreq
2•vaskort•8h ago•0 comments

Show HN: Scroll Wikipedia like TikTok

https://quack.sdan.io
321•sdan•1d ago•84 comments

Show HN: Miditui – A terminal app/UI for MIDI composing, mixing, and playback

https://github.com/minimaxir/miditui
64•minimaxir•2d ago•13 comments

Show HN: MCP Server for Job Search

https://github.com/jobswithgpt/mcp
5•sp1982•9h ago•0 comments

Show HN: Airboard – $1 voice dictation for Mac local

https://dhruvian473.gumroad.com/l/pgcjbc
3•mehrad_1•3h ago•0 comments

Show HN: Horizon Engine – C++20 3D FPS Game Engine with ECS and Modern Renderer

https://github.com/jackthepunished/horizon-engine
2•bhdr26k•9h ago•1 comment

Show HN: Human or AI-made song detector and 100% Private Audio Mastering

https://kliga.com
2•aswinsilvadasan•9h ago•2 comments

Show HN: macOS menu bar app to track Claude usage in real time

https://github.com/richhickson/claudecodeusage
157•RichHickson•2d ago•48 comments

Show HN: buse – automate your browser from the terminal

https://github.com/rinvii/buse
2•rinvi•9h ago•0 comments

Show HN: Similarity = cosine(your_GitHub_stars, Karpathy) Client-side

https://puzer.github.io/github_recommender/
167•puzer•4d ago•39 comments

Show HN: I used Claude Code to discover connections between 100 books

https://trails.pieterma.es/
309•pmaze•14h ago
I think LLMs are overused to summarise and underused to help us read deeper.

I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them.

I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insight that I was baking into the prompts, and the results weren't particularly surprising.

On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave actually interesting results and required very little orchestration in comparison.

One of my favourite trails of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass movement charlatans (https://trails.pieterma.es/trail/useful-lies/). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.

Details:

* The books are picked from HN’s favourites (which I collected before: https://hnbooks.pieterma.es/).

* Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10.

* Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes.

* There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics cooccurring within a chunk window.

* Everything is stored in SQLite and manipulated using a set of CLI tools (a rough sketch of what a co-occurrence query could look like follows below).
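For illustration, a minimal sketch of such a co-occurrence query, assuming a simplified chunk_topics(book_id, chunk_id, topic) table - the real schema and CLI tools are more involved:

    import sqlite3

    WINDOW = 3  # chunks within +/-3 positions of each other count as co-occurring

    conn = sqlite3.connect("library.db")
    rows = conn.execute(
        """
        SELECT a.topic, b.topic, COUNT(*) AS n
        FROM chunk_topics a
        JOIN chunk_topics b
          ON a.book_id = b.book_id                -- stay within one book
         AND ABS(a.chunk_id - b.chunk_id) <= ?    -- the chunk window
         AND a.topic < b.topic                    -- count each pair once
        GROUP BY a.topic, b.topic
        ORDER BY n DESC
        LIMIT 20
        """,
        (WINDOW,),
    ).fetchall()
    for t1, t2, n in rows:
        print(f"{t1} <-> {t2}: {n}")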

I wrote more about the process here: https://pieterma.es/syntopic-reading-claude/

I’m curious if this way of reading resonates for anyone else - LLM-mediated or not.

Comments

Aurornis•11h ago
It’s interesting how many of the descriptions have a distinct LLM-style voice. Even if you hadn’t posted how it was generated I would have immediately recognized many of the motifs and patterns as LLM writing style.

The visual style of linking phrases from one section to the next looks neat, but the connections don’t seem correct. There’s a link from “fictions” to “internal motives” near the top of the first link and several other links are not really obviously correct.

pmaze•10h ago
The names & descriptions definitely have that distinct LLM flavour to them, regardless of which model I used. I decided to keep them, but as short as possible. In general, I find the recombination of human-written text to be the main interest.

There are two stages to the linking: first juxtaposing the excerpts, then finding and linking key phrases within them. I find the excerpts themselves often have interesting connections between them, but the key phrases can be a bit out there. The "fictions" to "internal motives" one does gel for me, given the theme of deceiving ourselves about our own motivations.

reedf1•10h ago
Well even the post itself reads to me as AI generated
wormpilled•10h ago
>A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems

Interesting... seems like it wants the keys on your system! ;)

napolux•10h ago
Monetize it!
joe_the_user•10h ago
A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.

It's all fun and games 'till someone loses an eye/mind/even-tenuous-connection-to-reality.

Edit: I'd mention that the themes Claude finds qualify as important stuff imo. But they're all pretty grim and it's a bit problematic focusing on them for a long period. Also, they are often the grimmest spin on things that are well known.

drakeballew•7h ago
Don't believe Claude, let's put it that way.
theturtletalks•10h ago
In a similar vein, I've been using Claude Code to "read" Github projects I have no business understanding. I found this one trending on Github with everything in Russian and went down the rabbit hole of deep packet inspection[0].

0. https://github.com/ValdikSS/GoodbyeDPI

dinkleberg•10h ago
That's a cool idea. There are so many interesting projects on GitHub that are incomprehensible without a ton of domain context.
theturtletalks•10h ago
I got the idea from an old post on here called The Story of Mel[0] where OP talks about the beauty of Mel's intricate machine code on an RPC-4000.

This is the part that always stuck with me:

I have often felt that programming is an art form, whose real value can only be appreciated by another versed in the same arcane art; there are lovely gems and brilliant coups hidden from human view and admiration, sometimes forever, by the very nature of the process. You can learn a lot about an individual just by reading through his code, even in hexadecimal. Mel was, I think, an unsung genius.

0. http://catb.org/esr/jargon/html/story-of-mel.html

noname120•8h ago
ValdikSS is the guy behind the SBC XQ patches for Android (that alas were never merged by G). I didn’t expect to see him here with another project!

https://habr.com/en/articles/456476/

https://android-review.googlesource.com/c/platform/system/bt...

smusamashah•10h ago
I don't understand the lines connecting two pieces of text. In most cases, the connected words have absolutely zero connection with each other.

In "Father wound" the words "abandoned at birth" are connected to "did not". Which makes it look like those visual connections are just a stylistic choice and don't carry any meaning at all.

Oras•10h ago
I had the exact same impression.
hecanjog•2h ago
Yes, they look really good but they're being connected by an LLM.
pxc•10h ago
I read a book maybe a decade ago on the "digital humanities". I wish now I could remember the title and author. :(

Anyway, it introduced me to the idea of using computational methods in the humanities, including literature. I found it really interesting at the time!

One of the terms it introduced me to is "distant reading", whose name mirrors that of a technique you may have studied in your gen eds if you went to university ("close reading"). The idea is that rather than zooming in on some tiny piece of text to examine very subtle or nuanced meanings, you zoom out to hundreds or thousands of texts, using computers to search them for insights that only emerge from large bodies of work as wholes. The book argued that there are likely some questions that it is only feasible to ask this way.

An old friend of mine used techniques like this for her dissertation in rhetoric, learning enough Python along the way to write the code needed for the analyses she wanted to do. I thought it was pretty cool!

I imagine LLMs are probably positioned now to push distant reading forward in a number of ways: enabling new techniques, allowing old techniques to be used without writing code, and helping novices get started with writing some code. (A lot of the maintainability issues that come with LLM code generation happily don't apply to research projects like this.)

Anyway, if you're interested in other computational techniques you can use to enrich this kind of reading, you might enjoy looking into "distant reading": https://en.wikipedia.org/wiki/Distant_reading

plutokras•9h ago
> I wish now I could remember the title and author.

LLMs are great at finding media by vague descriptions. ;)

ako•9h ago
According to Claude (easy guess from the wikipedia link?):

The book is almost certainly by *Franco Moretti*, who coined the term "distant reading." Given the timeframe ("maybe a decade ago") and the description, it's most likely one of these two:

1. *"Distant Reading"* (2013) — A collection of Moretti's essays that directly takes the concept as its title. This would fit well with "about a decade ago."

2. *"Graphs, Maps, Trees: Abstract Models for Literary History"* (2005) — His earlier and very influential work that laid out the quantitative, computational approach to literary analysis, even if it didn't use "distant reading" as prominently in the title.

Moretti, who founded the Stanford Literary Lab, was the major proponent of the idea that we should analyze literature not just through careful reading of individual canonical texts, but through large-scale computational analysis of hundreds or thousands of works—looking at trends in genre evolution, plot structures, title lengths, and other patterns that only emerge at scale.

Given that the commenter specifically remembers learning the term "distant reading" from the book, my best guess is *"Distant Reading" (2013)*, though "Graphs, Maps, Trees" is also a strong possibility if their memory of "a decade" is approximate.

pxc•5h ago
After some digging, I think it was likely this one: https://direct.mit.edu/books/book/5346/Digital-Humanities
dangoodmanUT•9h ago
The UI animations are so fun
hising•9h ago
Yeah, I had a similar idea. I used the OpenAI API to break down movies into the 3-act structure, narrative, pacing, character arcs etc., and then tried to find similar movies using PostgreSQL with pgvector. The idea was to have another way to find movies I would like to watch next, based on more than the "similar movies" in IMDb. Threw some hours at it, but I guess it is a system that needs a lot of data, a lot of tokens and an enormous amount of tweaking to be useful. I love your idea! I agree with you that we could use LLMs for this kind of stuff that we as humans are quite bad at.
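The nearest-neighbour lookup itself is compact with pgvector; a rough sketch (table and column names invented for illustration, embeddings assumed already stored):

    import psycopg2

    conn = psycopg2.connect("dbname=movies")
    cur = conn.cursor()
    # movies(id, title, embedding vector(1536)), populated from the LLM breakdowns
    cur.execute(
        """
        SELECT m.title, m.embedding <=> t.embedding AS cosine_distance
        FROM movies m, movies t
        WHERE t.title = %s AND m.id <> t.id
        ORDER BY cosine_distance
        LIMIT 10
        """,
        ("Heat",),
    )
    for title, dist in cur.fetchall():
        print(f"{title}: {dist:.3f}")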
lkbm•9h ago
Earlier today, I was thinking about doing something somewhat similar to this.

I was recently trying to remember a portal fantasy I read as a kid. Goodreads has some impressive lists, not just "Portal Fantasies"[0], but "Portal Fantasies where the portal is on water"[1], and seven more "where/what's the portal" categories like that.

But the portal fantasy I was seeking is on the water and not on the list.

LLMs have failed me so far, as has browsing the larger portal fantasy list. So, I thought, what if I had an LLM look through a list of kids' books published in the 1990s and categorize "is this a portal fantasy?" and "which category is the portal?"

I would 1. possibly find my book and 2. possibly find dozens of books I could add to the lists. (And potentially help augment other Goodread-like sites.)

Haven't done it, but I still might.

Anyway, thanks for making this. It's a really cool project!

[0] https://www.goodreads.com/list/show/103552.Portal_Fantasy_Bo...

[1] https://www.goodreads.com/list/show/172393.Fiction_Portal_is...
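The categorization loop itself would be short; a sketch using the Anthropic SDK (model name and prompt wording are placeholders):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def classify(title, author):
        prompt = (
            f"Is the children's book '{title}' by {author} a portal fantasy? "
            "If so, where/what is the portal (door, water, mirror, ...)? "
            "Answer in one line: yes/no, portal type."
        )
        msg = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder; any capable model
            max_tokens=50,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

    print(classify("The Lion, the Witch and the Wardrobe", "C.S. Lewis"))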

amadeuswoo•9h ago
The feedback loop you describe—watching Claude's logs, then just asking it what functionality it wished it had—feels like an underexplored pattern. Did you find its suggestions converged toward a stable toolset, or did it keep wanting new capabilities as the trails got more sophisticated?
pmaze•9h ago
I ended up judging where to draw the line. Its initial suggestions were genuinely useful and focused on making the basic tool use more efficient. e.g. complaining about a missing CLI parameter that I'd neglected to add for a specific command, requesting to let it navigate the topic tree in ways I hadn't considered, or new definitions for related topics. After a couple iterations the low hanging fruit was exhausted, and its suggestions started spiralling out beyond what I thought would pay off (like training custom embeddings). As long as I kept asking it for new ideas, it would come up with something, but with rapidly diminishing returns.
samuelknight•9h ago
I do this all the time in my Claude Code workflow:

- Claude will stumble a few times before figuring out how to do part of a complex task
- I will ask it to explain what it was trying to do, how it eventually solved it, and what was missing from its environment
- Trivial pointers go into the CLAUDE.md; complex tasks go into a new project skill or a helper script

This is the best way to reinforce a copilot because models are pretty smart most of the time and I can correct the cases where it stumbles with minimal cognitive effort. Over time I find more and more tasks are solved by agent intelligence or these happy path hints. As primitive as it is, CLAUDE.md is the best we have for long-term adaptive memory.

timoth3y•9h ago
What meaningful connections did it uncover?

You have an interesting idea here, but looking over the LLM output, it's not clear what these "connections" actually mean, or if they mean anything at all.

Feeding a dataset into an LLM and getting it to output something is rather trivial. How is this particular output insightful or helpful? What specific connections gave you, the author, new insight into these works?

You correctly, and importantly point out that "LLMs are overused to summarise and underused to help us read deeper", but you published the LLM summary without explaining how the LLM helped you read deeper.

rjh29•9h ago
100 books is too small a dataset - particularly given it's a set of HN recommendations (i.e. a very narrow and specific subset of books). A larger set would probably yield more surprising and interesting groupings.
DyslexicAtheist•8h ago
> 100 books is too small a dataset

this to me sounds off. I read the same 8 to 10 books over and over, and with every read discover new things. The idea of more books being more useful stands against the same books on repeat. And while I'm not religious, how about the dudes only reading one book (the Bible, or the Koran), and claiming that they're getting all their wisdom from it for a thousand years?

If I have a library of 100+ books and they are not enough, then isn't the quality of those books the problem, and not the number of books in the library?

pmaze•8h ago
The connections are meaningful to me in so far as they get me thinking about the topics, another lens to look at these books through. It's a fine balance between being trivial and being so out there that it seems arbitrary.

A trail that hits that balance well IMO is https://trails.pieterma.es/trail/pacemaker-principle/. I find the system theory topics the most interesting. In this one, I like how it pulled in a section from Kitchen Confidential in between oil trade bottlenecks and software team constraints to illustrate the general principle.

timoth3y•8h ago
Can you walk me through some of the insights you gained? I've read several of those books, including Kitchen Confidential and Confessions of an Economic Hit Man, and I don't see the connection that the LLM (or you) is trying to draw. What is the deeper insight into these works that I am missing?

I'm not familiar with the term "Pacemaker Principle" and Google search was unhelpful. What does it mean in this context? What else does this general principle apply to?

I'm perfectly willing to believe that I am missing something here. But reading through many of the supportive comments, it seems more likely that this is an LLM Rorschach test where we are given random connections and asked to do the mental work of inventing meaning in them.

I love reading. These are great books. I would be excited if this tool actually helps point out connections that have been overlooked. However, it does not seem to do so.

gchamonlive•7h ago
> we are given random connections and asked to do the mental work of inventing meaning in them

How is that different from having an insight yourself and later doing the work to see if it holds on closer inspection?

delusional•7h ago
Don't ask me to elaborate on this, because it's kinda nebulous in my mind. I think there's a difference between arriving at an insight and interrogating it on your own initiative, and being given the same insight.
gchamonlive•7h ago
I don't doubt there is a difference in the mechanism of arriving at a given connection. What I think is not possible is to distinguish the connection that someone made intuitively after reading many sources from the one that the AI makes, because both will have to undergo scrutiny before being accepted as relevant. We can argue there could be a difference in quality, depth and search space, maybe, but I don't think there is an ontological difference.
fwip•5h ago
The one that you thought of in the shower has a much greater chance of being right, and also of being relevant to you.
varenc•3h ago
> Can you walk me though some of the insights you gained?

This made me realize that so many influential figures have either absent fathers, or fathers that berated them or didn't give them their full trust/love. I think there's something to the idea that this commonality is more than coincidence. (that's the only topic of the site I've read through yet, and I ignored the highlighted word connections)

Aurornis•7h ago
I like the design that highlights words in one summary and links them to highlights in the next. It's a cool idea.

But so many of the links just don't make sense, as several comments have pointed out. Are these actually supposed to represent connections between books, or is it just a random visual effect that's supposed to imply they're connected?

I clicked on one category and it has "Us/Them" linked to "fictions" in the next summary. I get that it's supposed to imply some relationship but I can't parse the relationships

8organicbits•9h ago
Can someone break this down for me?

I'm seeing "Thanos committing fraud" in a section about "useful lies". Given that the founder is currently in prison, it seems odd to consider the lie useful instead of harmful. It kinda seems like the AI found a bunch of loosely related things and mislabeled the group.

If you've read these books I'm not seeing what value this adds.

Closi•9h ago
I guess the lies were useful until she got caught?
irishcoffee•7h ago
Why lie if it isn’t useful? Lying is generally bad, why do a generally bad thing if there isn’t at least a justification, a “use” if you will.
Terretta•5h ago
Thanos is the comic book villain snapping his fingers.

Theranos is the fraud mentioned in the piece.

urbandw311er•9h ago
This feels like a nice idea but the connection between the theme and the overarching arc of each book seems tenuous at best. In some cases it just seems to have found one paragraph from thousands and extrapolated a theme that doesn’t really thread through the greater piece.

I do like the idea though — perhaps there is a way to refine the prompting to do a second pass or even multiple passes to iteratively extract themes before the linking step.

amelius•9h ago
Makes me wonder, how well could an LLM-based solution score on the Netflix prize?

https://en.wikipedia.org/wiki/Netflix_Prize

(Are people still trying to improve upon the original winning solution?)

bonkusbingus•9h ago
"There are, you see, two ways of reading a book: you either see it as a box with something inside and start looking for what it signifies, and then if you're even more perverse or depraved you set off after signifiers. And you treat the next book like a box contained in the first or containing it. And you annotate and interpret and question, and write a book about the book, and so on and on. Or there's the other way: you see the book as a little non-signifying machine, and the only question is "Does it work, and how does it work?" How does it work for you? If it doesn't work, if nothing comes through, you try another book. This second way of reading's intensive: something comes through or it doesn't. There's nothing to explain, nothing to understand, nothing to interpret." — Gilles Deleuze
drakeballew•7h ago
I am not familiar with the source of this quote, but I don't disagree, it is just incredibly reductive. Gilles Deleuze him-/her-self was not born and did not live in a vacuum. They were influenced and mimetically reproduced ideas they were exposed to, like we all do. I don't find the point of this project meaningless myself. The opposite in fact. But the results are not accurate for anyone who has actually read any of these texts.
tolerance•9h ago
I don’t like this product as a service to readers (i.e., people who read as a cognitive/philosophical exploit) but I do think that somewhere embedded in its backend there are things of benefit.

I think that this sucks the discreet joy out of reading and learning. Having the ways that the topics within a certain book can cross over and lead into another book of a different topic externalized is hollowing and I don't find it useful.

On the other hand I feel like seeing this process externalized gives us a glimpse at how “the algorithms” (read: recommender systems) suggest seemingly disjunctive content to users. So as a technical achievement I can’t knock what you’ve done and I’m satisfied to see that you’re the guy behind the HN Book map that I thought was nice too.

At its core this looks like a representation of the advantages that LLMs can afford to the humanities. Most of us know how Rob Pike feels about them. I wonder if his senior former colleague feels the same: https://www.cs.princeton.edu/~bwk/hum307/index.html. That’s a digression, but I’d like to see some people think in public about how to reasonably use these tools in that domain.

mathgeek•8h ago
> Having the ways that the topics within a certain book can cross over and lead into another book of a different topic externalized is hollowing and I don't find it useful.

Intuitively, I agree. This feels like the difference between being a creator (of your own thoughts as inspired by another person's) and a consumer (although in a somewhat educational sense). There would need to be a big advantage to being taught those initial thoughts, analogous to why we teach folks algebra/calculus via formulas rather than having every student figure out proofs for themselves.

sciences44•9h ago
Love the originality here - makes you curious to explore more.

Solid technical execution too. Well done!

jereees•8h ago
now do this for research papers! fun stuff :)
miracoli•8h ago
wow I hope the bubble pops soon.. now that you discovered books with AI that was illegally trained on them, how about reading them?
nephihaha•7h ago
I'm not sure I understand what the connections are exactly, or whether they go much deeper than certain words and phrases.
only-one1701•7h ago
I'm really not trying to be mean, but one of the things we learn in the humanities is that basically any two texts can be connected via extremely broad statements (e.g. "Perfect is the enemy of the good"). This is like the joke on twitter about how every couple of years someone in tech invents the concept of public transportation.
johnwatson11218•8h ago
I did something similar whereby I used pdfplumber to extract text from my PDF book collection. I dumped it into PostgreSQL, then chunked the text into 100 char chunks w/ a 10 char overlap. These chunks were directly embedded into a 384D space using python sentence_transformers. Then I simply averaged all chunks for a doc and wrote that single vector back to PostgreSQL. Then I used UMAP + HDBSCAN to perform dimensionality reduction and clustering. I ended up with a 2D data set that I can plot with plotly to see my clusters. It is very cool to play with this. It takes hours to import 100 PDF files but I can take one folder that contains a mix of programming titles, self-help, math, science fiction etc. After the fully automated analysis you can clearly see the different topic clusters.

I just spent time getting it all running on docker compose and moved my web UI from Express to Flask. I want to get the code cleaned up and open source it at some point.
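A condensed sketch of the pipeline (model name and clustering parameters here are illustrative, and the PostgreSQL round-trips are skipped):

    import numpy as np
    import umap      # umap-learn
    import hdbscan
    from sentence_transformers import SentenceTransformer

    def chunks(text, size=100, overlap=10):
        # 100-char chunks with a 10-char overlap, as described above
        step = size - overlap
        return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # a common 384-dim model

    def doc_vector(text):
        emb = model.encode(chunks(text))  # (n_chunks, 384)
        return emb.mean(axis=0)           # average chunk vectors into one doc vector

    # docs: {filename: text}, extracted upstream with pdfplumber
    docs = {"example.pdf": "some extracted text ..."}
    vectors = np.stack([doc_vector(t) for t in docs.values()])
    coords = umap.UMAP(n_components=2).fit_transform(vectors)         # 384D -> 2D
    labels = hdbscan.HDBSCAN(min_cluster_size=3).fit_predict(coords)  # clusters to plot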

ct0•7h ago
This sounds amazing, totally interested in seeing the approach and repo.
hellisad•6h ago
Sounds a lot like BERTopic. Great library to use.
mannanj•8h ago
Seems like a lot of successful leaders have a history of or normalize deception and lying for some benefit.
itsangaris•8h ago
surprised that "Seeing Like a State" didn't get included in the "legibility tax" category
JimmyJamesJames•8h ago
Like this initial step and its findings.

#1: Would a larger dataset increase the depth and breadth of insight? (go to #2)

#2: With the initial top 100, are there key 'super node' books that stand out as ones to read due to the breadth they offer? Would a larger dataset identify further 'super node' books?

only-one1701•7h ago
This is an IQ test lol
chromanoid•7h ago
> A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.

I really appreciate you mentioning this. I think this is the nature of LLMs in general. Any symbol it processes can affect its reasoning capabilities.

dev_l1x_be•7h ago
Claude Code is good for arranging random things into categories; with code, configuration and documentation files it barely goes into random rabbit holes or hallucinates categories for me.
lisdexan•7h ago
Finally, Schizophrenia as a Service (SaaS).
drakeballew•7h ago
This is a beautiful piece of work. The actual data or outputs seem to be more or less...trash? Maybe too strong a word. But perhaps you are outsourcing too much critical thought to a statistical model. We are all guilty of it. But some of these are egregious, obviously referential LLM dog. The world has more going on than whatever these models seem to believe.

Edit/update: if you are looking for the phantom thread between texts, believe me that an LLM cannot achieve it. I have interrogated the most advanced models for hours, and they cannot do the task to any sort of satisfactory end that even a smoked-out, half-asleep college freshman could reach. The models don't have sufficient capacity...yet.

what-the-grump•5h ago
Build a RAG with a significant amount of text, extract it by keyword, topic, place, date, name, etc.

… realize that it’s nonsense and the LLM is not smart enough to figure out much without a reranker and a ton of technology that tells it what to do with the data.

You can run any vector query against a RAG and you are guaranteed a response, even with chunks that are unrelated in any way.

electroglyph•1h ago
unrelated in any way? that's not normal. have you tested the model to make sure you have sane output? unless you're using sentence-transformers (which is pretty foolproof) you have to be careful about how you pool the raw output vectors
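for reference, the pitfall being described: mean-pooling raw transformer outputs has to mask out the padding tokens, roughly like this (model name is just an example):

    import torch
    from transformers import AutoTokenizer, AutoModel

    name = "sentence-transformers/all-MiniLM-L6-v2"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)

    batch = tok(["a sentence", "another, much longer sentence"],
                padding=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch).last_hidden_state    # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)  # zeros over padding tokens
    emb = (out * mask).sum(1) / mask.sum(1)       # masked mean pooling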
liqilin1567•4h ago
When I saw that the trail goes through just one word, like "Us/Them" or "fictions", I thought it might be more useful if the trail went through concepts.
rtgfhyuj•48m ago
give it a more thorough look maybe?

https://trails.pieterma.es/trail/collective-brain/ is great

eloisius•27m ago
It's an interesting thread for sure, but while reading through this I couldn't help but think that the point of these ideas is for a person to read and consider them deeply. What is the point of having a machine do this "thinking" for us? The thinking is the point.
jgalt212•6h ago
What did it say about who wrote To Kill a Mockingbird?
adsharma•5h ago
This is GraphRAG using SQLite.

Wouldn't it be good if recursive Leiden and Cypher were built into an embedded DB?

That's what I'm looking into with mcp-server-ladybug.
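For anyone unfamiliar, recursive Leiden just re-partitions each community into sub-communities; a sketch with igraph/leidenalg (depth and size cutoffs are arbitrary):

    import igraph as ig
    import leidenalg as la

    def recursive_leiden(g, depth=0, max_depth=3, min_size=5):
        # partition, then recurse into each community that is big enough to split
        part = la.find_partition(g, la.ModularityVertexPartition)
        tree = []
        for comm in part:
            node = {"members": [g.vs[i]["name"] for i in comm]}
            if depth < max_depth and len(comm) > min_size:
                node["children"] = recursive_leiden(
                    g.subgraph(comm), depth + 1, max_depth, min_size)
            tree.append(node)
        return tree

    g = ig.Graph.Famous("Zachary")                      # toy graph
    g.vs["name"] = [str(i) for i in range(g.vcount())]  # subgraphs keep attributes
    print(recursive_leiden(g))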

hecanjog•5h ago
You really know what a good interface should be like; this is really inspiring. So is the design of everything I've seen on your website!

I won't pile on to what everyone else has said about the book connections / AI part of this (though I agree that part is not the really interesting or useful thing about your project) but I think a walk-through of how you approach UI design would be very interesting!

threecheese•4h ago
Where did you come across Leiden partitioning? I’m facing a similar use case and wonder what you’re reading.
guidoism•3h ago
Nice! I've been using Claude Code and ChatGPT for something similar. My inspiration is Adler's concept of The Great Conversation and Adler's Propædia. I've been able to jump between books to read about the same concept from different authors' perspectives.
typon•2h ago
The website design and content are much nicer than the "ideas" here. Just standard LLM slop once you've actually read some of these books.
pharrington•1h ago
Please don't give yourself LLM-induced psychosis.
chrisgd•48m ago
Really great work but have to agree with others that I don’t see the threads.

The one I found most connected that the LLM didn't was a connection between Jobs and The Elephant in the Brain

The Elephant in the Brain: The less we know of our own ugly motives, the easier it is to hide them from others. Self-deception is therefore strategic, a ploy our brains use to look good while behaving badly.

Jobs: “He can deceive himself,” said Bill Atkinson. “It allowed him to con people into believing his vision, because he has personally embraced and internalized it.”