Feedback, suggestions or contributions are very welcome! :)
1. Right now, working on standing up an MCP server in Java. Not using the Spring Boot support at the moment, but rather setting up embedded Tomcat and doing it the more "low level" way just for didactic purposes. I'm sure I'll use Spring Boot once I get deeper into all of this.
2. Plowing through the "AI Agents in Action" book. I'm just wrapping up the section on AutoGen and about to move into crew.ai stuff.
3. Reading a book on Software Product Line Engineering.
4. I have an older project that's Grails based that I let linger without any attention for a really long time. I'm working on updating it to run on the latest Grails and Java versions and also writing some automated smoke tests.
In the past few months, I've finally started working on a basic replacement NVR that works for me: https://github.com/AlbinoDrought/creamy-nvr
Like many video projects, it's a glorified ffmpeg wrapper :)
It replaced my Synology Surveillance Station in 2023 and has been running great ever since. I also have a Google Coral for the image processing, but this is optional.
1) a note-taking workflow in Obsidian (you take bite-sized notes about a topic, then connect "prerequisite" notes in Obsidian's canvas editor)
2) a tool that uploads each note and graph data to a database
3) a webapp that presents those notes algorithmically using spaced repetition. This lets others "traverse" your note graph in a guided and self-paced manner.
You can add "challenge presets" to each note so that your mastery of each piece of knowledge can be tested with simple flashcards, multiple choice, free response, or some visual/actionable task to force active recall. An algorithm uses your success rate and spaced repetition data to introduce & drill more advanced notes into your long term memory.
Here's some more reading I was inspired by:
https://www.mathacademy.com/pedagogy
https://www.justinmath.com/individualized-spaced-repetition-...
Even if there are a lot of imperfections and flaws in this project (like the sheer difficulty of curating a good knowledge graph to begin with), I'm hoping to make my note-taking in Obsidian more structured and thorough, replace my Anki routine, and turn any of my notes into an automated + algorithmic course. If someone has a similar project (combining note-taking with hierarchical, topological knowledge graphs, spaced repetition, and testing all in one platform), I would love to hear more about your approaches. Quick shoutout to one person I've seen who is doing something similar: https://x.com/JeffreyBiles/status/1926639544666816774
When I was looking for a job last summer, I got frustrated with the current resume builders on the market and decided to build one exactly how I wanted to use it.
- No signup, no login, and no payment.
- Suggest a professional summary (with highlighting) to match a job description [0].
- Preview as you go.
- ATS friendly templates.
- Find relevant jobs for my resume.
[0] Recruiters skim through resumes, and highlighting the keywords they look for has always helped me to get their attention, so I decided to implement this feature using AI.
Fits into my “loudness series” suite of tools.
Have 3 more in development and then it’ll be on to the next series.
- https://padsnap.app/ : PadSnap is a simple web app that adds customizable padding to your images so they fit Instagram’s/custom dimensions — no cropping, no quality loss. All in the browser, no server uploads. Also no ads or login.
- https://shiryakhat.net/ : redid my podcast website last week. Shir Ya Khat, which translates to "Heads or Tails" in Farsi, began its non-profit journey in 2016 with a mission to make blockchain and cryptocurrency technical knowledge accessible to Farsi speakers worldwide.
- life timeline visualizer, still WIP, feedback welcome: https://shayanb.github.io/timeline/
Reflect - an app to track anything and analyze your data, including a feature to run self-guided experiments [0]
Later - an app to schedule non-urgent tasks and ideas, with an SRS-like scheduler to punt items [1]
[0] https://apps.apple.com/us/app/reflect-track-anything/id64638...
[1] https://apps.apple.com/us/app/later-set-intentions/id6742691...
So I put together a simple Digital Asset Manager (DAM) where:
* Images are uploaded and vectorized using CLIP
* Vectors are stored in Lance format directly on Cloudflare R2
* Search is done via Lance, comparing natural language queries to image vectors
* The whole thing runs on Fly.io across three small FastAPI apps (upload, search, frontend)
No Postgres or Mongo. No AI, just object storage and files.
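As a rough illustration of the pipeline, here is a minimal sketch (not the actual metabare code; the model checkpoint, storage path, and table name are placeholders) of embedding an image with CLIP and searching it by natural language through LanceDB:

```python
# Minimal sketch: CLIP embeddings stored and searched via LanceDB (names are illustrative).
import lancedb
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_image(path: str) -> list[float]:
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        vec = model.get_image_features(**inputs)[0]
    return (vec / vec.norm()).tolist()

def embed_text(query: str) -> list[float]:
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        vec = model.get_text_features(**inputs)[0]
    return (vec / vec.norm()).tolist()

# LanceDB can point at local disk or S3-compatible storage such as R2.
db = lancedb.connect("data/images.lance")  # e.g. "s3://my-bucket/images.lance"
table = db.create_table(
    "images",
    data=[{"path": "cat.jpg", "vector": embed_image("cat.jpg")}],
    mode="overwrite",
)

# Natural-language search: compare the query embedding against stored image vectors.
hits = table.search(embed_text("a cat sleeping on a couch")).limit(5).to_list()
print([h["path"] for h in hits])
```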
You can try it here:
Or see the code here:
* https://github.com/gordonmurray/metabare.com
Would love feedback or ideas on where to take it next — I’m planning to add image tracking and store that usage data in Parquet or Iceberg on R2 as well.
So I'm trying out a project that started with marketing: no implementation, just a TikTok to see if people like it. And holy crap, we got 75k views!
The new idea [2] is easier to explain (1 pushup = 1 minute of scrolling) and already has a community. Plus, not working alone helps me focus on what I'm good at: programming. I don't regret learning about other areas but doing marketing for a living is not my thing.
I'm not getting rid of SpeedBump, though. It's a fun side project and it does help people :)
It doesn't feel like it's there yet, but it's starting to seem like some workflows could be close. And non-technical folks on the business side are starting to pay attention and want projects moving in those areas.
I redesigned the home page just today. Any feedback is appreciated!
Along the lines of predictionbook, metaculus - something that helps you be "well calibrated", but more playful/fun than metaculus.
It doesn't have a lot of upside - predictionbook actually went offline due to lack of interest. But it was a good excuse to try out some vibe coding, and learn react native (I've mostly been a backend programmer).
In an attempt to make it more engaging and fun, I decided to have it focus on sports picks. Also partly because calibration graphs need to have a lot of predictions to yield any reliable information about your calibration.
I got it up in time for March Madness and about 25 of my friends joined and it was a good time. I nagged and reminded them a lot, and about 15-20 of them predicted all 63 games, picking the winner of each match and their percentage confidence. I had a leaderboard and live-blogged and gave silly awards.
I later added support for multiple "tournaments" and currently have tournaments going for NBA Playoffs and NHL Playoffs, but interest is waning. Of my friends, only 2-3 others are still regularly predicting.
Maybe it'll be more fun for the NFL season but I might also let it go a bit dormant.
Biggest challenge is that there isn't really a bulletproof way to rank people if they only predict some games in a tournament. I've tried all sorts of things (minimum # of games, Bayesian kernel smoothing), but it's ultimately arbitrary how you choose to penalize someone for not participating.
If I were to continue I'd be looking at things like automatically integrating with sports apis and odds/bookmaking apis, allowing users to create their own tournaments, etc. But ultimately, the UX of the site isn't much more than making a prediction, and then checking back later when the game is over to see your score. Not much more reason to hang around on the site than that.
Also having fun one-shotting or few-shotting little games and interactives:
Edit: I see you are using MaxMind database - do you add some sort of additional analytics or overlay on top of that?
I think many version managers make things unnecessarily difficult, especially if one hops from one repo to another. vmatch automatically uses and installs the right versions.
I've also been playing with creating plastic keys, using a flatbed scanner with the printer.
What I have is a basic flash card app with double-sided cards (for writing (i.e. drawing) the kanji, and for reading). What sets it apart is that each card contains all the relevant dictionary data, and users are encouraged to bookmark a couple of words to help them remember the writing or the reading of the kanji.
What I am working on now is the database backup/sync system. I store all the user’s progress in an IndexedDB store in their local browser. To sync, I am writing a simple patch system, so they can pick a remote somewhere (e.g. a gist on GitHub) and push their latest patches; when syncing progress I check the hash of each patch and apply the relevant patches.
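The sync logic is roughly the following (a language-agnostic sketch in Python; the real thing lives in the browser against IndexedDB, and the field names here are made up):

```python
# Sketch of hash-based patch sync: push unseen local patches, pull unapplied remote ones.
import hashlib
import json

def patch_hash(patch: dict) -> str:
    return hashlib.sha256(json.dumps(patch, sort_keys=True).encode()).hexdigest()

def push(local_patches: list[dict], remote: list[dict]) -> None:
    """Append any local patches the remote (e.g. a gist) hasn't seen yet."""
    seen = {patch_hash(p) for p in remote}
    remote.extend(p for p in local_patches if patch_hash(p) not in seen)

def pull(local_state: dict, applied_hashes: set[str], remote: list[dict]) -> None:
    """Apply only the remote patches we haven't applied locally, in order."""
    for patch in remote:
        h = patch_hash(patch)
        if h not in applied_hashes:
            local_state.update(patch["changes"])  # "changes" is an illustrative key
            applied_hashes.add(h)
```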
After that I am planning on turning it into a progressive web app so users can download the app onto their devices.
I've been building something similar for Chinese, just for myself: https://hazel.daijin.dev/ It's got PWA, let me know if you want my presets for working with PWA with Vite.
Will definitely be taking a few pages out of your book (app) when I get a chance!
1) Set up Apache https://github.com/marchildmann/IDS-Scripts
2) Set up MLX and MLX-LM (finished by tomorrow)
3) Working on a micro PHP framework to instantly deploy an API, connect a database, and have basic middleware
Have you thought about using the landing page itself as a demo? I.e. to allow users to post voice messages on your main page. Would at least be intriguing.
A wordle-like game based on a road trip game my friends and I used to play. It serves you up a mashup of two different movie plots, and you have to guess the combined movie title. There's always some sort of shared word or wordplay between the two movie titles.
An example from the tutorial: "The Day After Tomorrow Never Dies".
- Manages the entire range of personal (and maybe business) information/content: Documents, Media, Messages (email, instant, etc.), Contacts, Bookmarks, Calendar, etc.
- Is tag based, so that where to put and find content is easy to answer. Think of a set of flat folders, on one or more devices, within which the files are stored with tags attached. Since people often find navigating/browsing files more natural than searching, virtual folders will be dynamically generated to guide navigation. Also, entire folders can be treated as atomic, and tagged/managed as one object (useful for repositories & projects). And, heuristics (and maybe AI) will be used to automatically tag files when they are imported into the tool, greatly reducing the tedium of adding tags.
- Is file based, so that all information is physically stored as individual files. This allows information to be more easily managed on a physical level: moved around, backed up, exported/imported, searched, navigated, etc. So in addition to docs, each email/instant message, contact, scheduled task/event, bookmark, etc. would ultimately be stored as a file, unlocking all the things you can do with files.
- Has a local web-based UI launched from a local agent, so actual file content does not usually need to move across the network and stays local, and the tool is also easily multi-platform, with consistent UI irrespective of platform.
- Provides a cloud web UI as well, that communicates with content devices through the local agent, so that content stored across multiple devices can be managed in one central location, even without direct access to those devices, team/org features can be provided. However, file content still stays local, except when shared.
- Provides tools for exporting data as files from the data islands of various apps and services, and backing up as files to cloud storage services.
My vision is a situation where I am in charge of my own data irrespective of whatever device, app, or service I use, can ensure that it is always available and will not be lost, and that I can easily navigate and search through it all to find whatever I want, no matter how scattered and massive it is.
[1] Here are some of my issues with personal information management affordances of current tech, which is driving me to work on a solution:
- Our data is too bound to device and vendor islands. Can't easily move my information across Apple/Google/Whatsapp, etc accounts. Can't easily merge and de-duplicate either. I almost always somehow lose data whenever I have to move to a new phone, etc.
- Hard to own your data on many services: Discord, Slack, etc. Can't easily export, search.
- Hard to have a 360 overview and handle on all your data assets and query them in a consistent manner.
- Files as a unit of information storage and management is very ergonomic; we shouldn't allow that concept to be buried by vendors for their own gain.
- Nuclear Reactor Starter Kit --- an open source set of procedures, processes, templates, and maybe even some IT advice that should help newcomers start companies with nuclear quality assurance programs easily and quickly while also making a new format in which nuclear companies can share lessons learned in efficiency.
- Reactor Database --- similar to the IAEA's PRIS but focused on reactor development rather than power reactors. Will include nuclear startup company tracking with details gleaned from statements and maybe extrapolated where necessary from simple simulations. Will include things like fuel cost and licensing progress. This way people can more easily separate vaporware from real nuclear, and keep track of promises vs delivery.
Have fun!
Also - love whatisnuclear.com! About 10 years ago, I tried my hand at creating a generalized JS-based viz system (see examples in https://github.com/ahd985/ssv), but could never figure out a market/path forward for it.
Open source Mac-native menu bar app for speech to text using GPT-4o-transcribe (current STT SOTA)
You can build webapps very quickly, especially AI-enabled ones, and deploy them on a subdomain. Other users can sign up and use your webapp, and any tokens they use will be billed to them and you will get a large cut (80%) of the margin earned on the tokens billed - as I bill 2x OpenAI API token costs to create this margin.
So ideally you can validate your idea by rapidly building a prototype and even earning revenue to boot.
Tritium is an IDE for corporate lawyers. Draft Word docs, review PDFs, redline all in a single application. It's written in Rust using a modified version of egui. Immediate mode has some interesting tradeoffs that I'd love to discuss on here. Also the web/desktop dichotomy presents a lot of interesting opportunities and challenges where data governance is concerned. I'd love your thoughts or to share mine!
Getting it onto the desktop is the big challenge for the moment!
I have tens of thousands of ads in the collection and it would take me many lifetimes to complete, but I've been using AI to extract and catalog the meta data. I can get through about 100 ads/day this way.
One of my favorite ads, a computer from 1968 that "answers riddles": https://adretro.com/ads/1968-digi-comp-digi-comp-1-table-top...
On one hand the idea seems so simple and intuitive (define patterns, like 3 red blocks to the right; combine patterns, like 5 up * 3 red right; use patterns inside patterns, like each block is a square), but implementation-wise I keep running into so many intricacies, and I want it to be perfect, so it's been kind of tough and slow.
Really hoping to get some early feedback on this tool, I've been using it for two production sites for about a week now and I've already discovered (at work) that we've had the 2nd largest user signup day, and that we deployed a change that inaccurately tracked a specific metric. Check it out at https://querycanary.com
A binary static analysis tool that identifies vulnerabilities.
Right now, still just focused on buffer overflows. It can find some known CVEs and I’ve made several reliability improvements over the past month or so.
I think I’m going to expand to additional vulnerability types soon.
Now I'm starting on adjusting the model to include the liquid water ocean underneath the shell and observing the effect of changing viscosity gradients on the equilibration of the ocean and ice shell, as well as adding in compositional impurities (chloride brines) and tidal heating effects.
[0] https://blog.walledgarden.ai/2025-05-20/wabbit-s2-welcome-to...
[1] https://www.sesame.com/research/crossing_the_uncanny_valley_...
While it may sound counterintuitive, the agents of today aren’t truly autonomous in that you need to really guide them and plan their actions well.
I believe this is true today, and will be even more true when agents are guiding agents.
We need new infrastructure for dynamic context management.
The answer is not as simple as “hook up your agent to an MCP that pulls docs from the web” … also MCP needs its own revolution. I tend to use no MCP and prefer raw agent performance.
I’m evolving the simple concepts I built in my VS Code extension to address this. Nothing public now, but I and a few others use this everyday to feed parts of large codebases into Gemini (to build plans for Claude code, other coding agents): https://github.com/backnotprop/prompt-tower
Planning to have a first testing session some time next month. Really excited but still lots to go!
Do you have a website I can follow?
Unless I'm missing something, it's amazing how few free, _high quality_ materials are online.
Ultimately I'm interested in two things: genuinely fun games that make you do some maths, and quality visualisations that help make concepts easier to learn
I've also used the data corrections submitted by users to contribute over 3,000 edits back into OSM!
It's a Typescript library that allows you to wrangle structured outputs from LLMs and pipe them to programmatically useful control flow or structured data.
It's been 6 months since our first appearance on Show HN [1], and I'm working with the first free users on bugs, improved workflows and UX, geocoding, solver features, a future mobile app, etc.
We officially crossed the limits of 1500 stops per optimization with some waste collection guys, all still running fully client-side in the browser.
Most safari booking sites are either outdated, opaque on pricing, or offer one-size-fits-all tours. We let travelers customize everything — dates, interests (e.g. big cats, birding, photography), travel style, and budget — and generate a full itinerary with lodge picks, activity suggestions, and accurate cost estimates (including seasonal pricing and transportation).
We also partnered with local operators so users can actually book what they see — not just get ideas. The goal is to make safaris more accessible and planning less overwhelming.
Still early, but if you're curious or planning a trip to Africa, I'd love feedback: https://greatriftsafari.com
Check out how PostHog did it [1], it's an interesting approach. Having something that can support both devs and content folks (non-technical) is great. It's easy to get bogged down in building the website and reinventing a bunch of wheels, instead of focusing on the product & content, especially in smaller teams.
[1] How PostHog built a community forum, roadmap and changelog on Strapi https://strapi.io/user-stories/posthog
This game was originally inspired by the game "Untrusted" (https://github.com/AlexNisnevich/untrusted)
I have a software engineering background, and I’ve been working on this for nearly 3 years now! I used to play with the Electra One controller before, but having the encoders over the display is really something I’ve always wanted.
I presented Karl last month at Superbooth (a fair in Berlin) and got really good feedback. After 6 months of beta and 2 years of touring with it myself, the first batch will be dispatched in August, and this is quite exciting!
The approach is to perform system scanning using a combination of LLMs and traditional algorithms to dynamically populate a Datalog knowledge base. The facts of the program are constrained to a predefined “model schema” of sorts and a predefined set of rules that encode specialized domain knowledge of how new facts can be derived from known facts.
We generate proof trees / attack graphs from the knowledge base and queries posed to it. The attack graph uses big-step semantics to plan and guide the execution flow, and the system dispatches to agents with tool use to fill in the details and implement the small-step semantics, so to speak. This may include API calls to a Metasploit Framework server or RAG over vulnerability and exploit databases.
We use Pydantic AI to constrain the LLM output to predefined schemas at each step, with a dash of fuzzy string matching and processing to enforce canonicalization of, e.g., software names and other entities.
Tl;dr: neurosymbolic AI research tool for cybersecurity analysis and pentesting.
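To give a flavor of the schema-constrained step, here is a minimal Pydantic sketch (the field names and example values are illustrative, not our actual model schema) of the kind of canonicalized fact that gets validated before it is asserted into the Datalog knowledge base:

```python
# Illustrative sketch of a schema-constrained "fact" (not the real model schema).
from pydantic import BaseModel, Field

class InstalledSoftwareFact(BaseModel):
    host: str = Field(description="Canonical hostname or IP of the scanned asset")
    vendor: str = Field(description="Canonicalized vendor name, e.g. 'apache'")
    product: str = Field(description="Canonicalized product name, e.g. 'httpd'")
    version: str = Field(description="Normalized version string, e.g. '2.4.49'")
    source: str = Field(description="Which scanner/agent produced this observation")

# A structured-output layer validates the LLM's JSON against this model; the
# validated fact can then be asserted as a Datalog tuple such as:
#   installed_software("10.0.0.5", "apache", "httpd", "2.4.49").
fact = InstalledSoftwareFact(
    host="10.0.0.5", vendor="apache", product="httpd",
    version="2.4.49", source="nmap",
)
print(fact.model_dump())
```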
Backend is already working: Boldaric https://github.com/line72/boldaric
And a simple iOS native front end (which I haven’t submitted to the App Store yet). Tor Jolan https://github.com/line72/torjolan
It has been interesting tweaking the algorithms and models for various similarity searches.
I really like that it focuses on music characteristics and not metadata, so popularity of a song/artist isn’t even taken into account. This has really helped me explore my rather large music collection especially when I get stuck in a rut of listening to the same things.
It's a file explorer that embeds your local file structure so you can use natural language to search your file system.
Started off as a local inference/vector-db only project last year and now also using cloud inference/vector-dbs for faster processing.
You can also use "agent-mode" to organize your files/folders, create items, move, copy and save content to disk directly from chat.
https://github.com/jaronilan/stories/blob/main/Base%20Rate.p...
Will now move at the usual snail pace to write the next one.
I tried self-hosting Sentry recently and found out there are a lot of moving parts, which makes sense for their scale and use case.
I was wondering if I could build something small and not multi-tenant. So I started experimenting with writing a server (in Go) that collects OpenTelemetry data and inserts it into ClickHouse, an API for retrieving data/statistics (P95 in a time range, etc...), and a frontend (React.js) that displays them. All of this in a single executable file (yes, including the frontend, but not including ClickHouse).
This is all very new to me, so I'm learning Go, ClickHouse and OpenTelemetry at the same time.
It allows us to control the algorithm. It’s all LLM translating to YouTube search queries under the hood.
Visually it looks the same. The suggested videos come from predefined buckets on topics they love.
E.g. 33% fun math, 33% DIY engineering, 33% creative activities.
Video recommendations that have a banned word in the title/desc don't get displayed e.g. MrBeast, anything with Minecraft in it, never gets surfaced.
For anyone interested in using it, send me an email.
I'll put you on my list. And you can contribute ideas to our community Google Doc.
jim.jones1@gmail.com
Anyway, a long way of saying awesome - would love to be on your list. I'll send you an email separately.
For the last few weeks, we have been working on catching up on features for vibe coders (prompt -> project), but now we are back to our strengths (visual editor and new beautiful UI libraries for Tailwind CSS, Bootstrap, and more).
We realized there are just too many apps for vibe coders, and it would be better to work on something unique that we are really good at!
The idea is to improve the tooling to work with grammars, for example generating railroad diagrams, source, stats, state machines, traces, ...
On both of them, select one grammar from "Examples", then click "Parse" to see a parse tree or AST for the content in "Input source", then edit the grammar/input to test new ideas.
There is also https://mingodad.github.io/plgh/json2ebnf.html to generate EBNF for railroad diagram generation from tree-sitter grammars.
Any feedback or contribution is welcome!
I started with a basic syntax for expressions and specified a lot up front such as it being a bytecode interpreter and using a recursive descent parser.
I found building it up feature by feature to be much more effective than one shotting an entire feature rich language. Still there was a lot of back and forth.
Only 1 commit :/ Would love to see the prompts and how you iterated on this
I've found it really satisfying to solve the data challenges that come along the way, from "where on earth could this data come from" to collecting, storing, parsing, validating and serving constantly. It's also - by nature - something that's never going to be "done". There's always something to improve. I love it!
We now offer more types of data (ASN/whois, proxy/threat detection, so on) than most other providers, more accurate and more frequently updated, at a tenth of the cost, which is something I'm really happy about.
For anyone interested, you can make 1,000 requests a day free, or reach out if you have an open source/public interest project for an unlimited key or access to the data.
I'd also love to hear any suggestions for additional data types to add.
It’s an alternative to Electron/Tauri that uses Bun.
It has a bsdiff based update mechanism that lets you ship updates as small as 4KB, a custom zstd self extractor that makes your app bundle as small as 12MB, and more.
I’m currently working on adding Windows and Linux support.
In practice, writing journal entries about why I can't seem to get myself to make all these French cleats that I supposedly need.
Also some software stuff.
Here are the resulting games so far: https://playcraft.fun
I'm curious, why electric motors vs a solid rocket motor? Volatility? Control over thrust? Making it safe to throw without worrying about backblast?
CLI Meshtastic flasher that works well. No-internet mesh networking sounds awesome; it's just that the bandwidth is extremely limited.
(I made a small newsletter sign-up form, feel free to join the wait list for betas and a free e-Book!)
Also another fun idea I want to try is to let Claude design a new programming language, i.e. where the AI makes all the decisions and sets the goals, and I just help when it's stuck.
- mach9poker.com: incorporated startup developing a poker tournament training app for novices and unprofitable players. Looking for UX/product designer co-founder.
- policyimpact.org: A journalism site for highly vetted articles responding to actions of the current U.S. administration and other important political vectors.
- sharpee: a new interactive fiction platform built in Typescript
- bsky.poker: root domain for poker community to have nice handles on BlueSky
Happy if anyone wants to pitch into any of these projects.
All I really want to do is be able to clip/save articles (and maybe generate transcripts from videos) from my phone or computer, read them in KOReader on a Boox tablet, and then export them and my eBook notes into Logseq, but every time I think I have it figured out, some project pulls a rug out from under me and I end up back at the drawing board.
* prfrmHQ SaaS: The modern way to manage performance reviews, set clear objectives, and ensure alignment across teams or individually — all in one place
see https://news.ycombinator.com/item?id=43538744 [Show HN: My SaaS for performance reviews setting goals and driving success] https://youtu.be/ygvKdgiKRj4?si=Q9ael-oCLEGKMIgN - Shows I can use AI and that I've integrated with AWS Bedrock
- Shows I can integrate with Stripe for payments
* Consulting (Architecture, Strategy, Technology leadership and advisory) - I'm working on getting my consultancy started. If anyone wants the kind of skills I offer, let's talk: https://architectfwd.com
* Next SaaS - Starting a SaaS for managing core strategy and technology concepts.
I'm building Infinite Pod, a web app that generates language learning podcasts tuned to your individual learning goals and level.
It's based on the principle of language acquisition through comprehensible input, as described here: https://www.youtube.com/watch?v=NiTsduRreug
It's still a bit rough, but feels magical in my own testing so I wanted to make it available to others.
I am hoping this will be the way in which I write most of my future scripts and projects.
[1]: https://luau.org/
I was working on world models / generative environments but without the training data available as an independent researcher, ended up focusing on building with existing geospatial data.
The same architecture as the '24 Genie paper's dynamics model is instead trained on historical data for risk analysis, creating a heatmap on the 2D map. I'll try to adapt this for a more generalizable urban mobility model as well.
Decided to do an extended sabbatical after being part of one of the many tech layoffs of the last few years, and I'm thus working on things I like, instead of things that pay..
Collecting and cataloging craft beer venues from around the world, at https://wheretodrink.beer Still a WIP, and it's not trying to be the most extensive list, but I want it to be a substantial list. Once I reach a certain maturity in the data I'll probably look to spawn minor projects off from the data set.. have a couple of ideas already that I'll just keep to myself for now :D
I also had a set of leftover domains relating to beer that I'm offering up for use with BlueSky handles, and beer-related link pages at https://drnk.beer - a bit on the back burner.
We’re solving the problem of “How can agentic AI interface with legacy and existing business systems.” - if you’ve got a boring job and are tired of filling out forms in business software or swapping between 10 different systems, convince management to let us come and have AI do it for you.
Build tools are generally an un-sexy field, and JVM build tools perhaps doubly so. But Mill demonstrates that with some thought put into the design and architecture, we can speed up JVM development workflows by 3-6x over traditional JVM tools like Maven or Gradle, and make it subjectively much easier to navigate in IDEs and extend with custom logic.
If you're passionate about developer experience and work on the JVM, I encourage you to give Mill a try!
A totally bootstrapped, professional services undertaking with no investors needed. The value is in the knowledge acquired over a decade plus in sales support roles and learning about an underserved, viable market.
Initially I thought there was a use case in finance, but the barriers to entry are incredibly low and the value add is not that large.
Currently, there seems to be a lot of traction in code generation (Cursor, Lovable, et al.), but I have not seen that work on a usable code base/workflow.
This is why tools like cursor work so great, they’re able to work in a super tight feedback loop with the compiler, linter and tests. They operate in a super well-known, documented environment.
If we can replicate the same thing on business systems… that’s when the magic happens - just very hard to do without deep knowledge of those platforms and agentic AI because everyone does stuff differently in each org. The overlap of people with skills in both AI and specific business ops areas is absolutely tiny.
An example of where we’re using this is in a fully AI native CRM (part of SynthGrid - see https://mindfront.ai). We don’t even have any way to interface with it outside of AI, but we’d also never want to do so again anyway because the efficiency gains are so huge for us.
The Pareto frontier will continue to inexorably advance forward, dragging even the complex or non-standardized domains in with it. For those tightly integrated business systems, we’ll probably see huge gain in utility, if not function, from the improved underlying models combined with the excellent tools. Be sure to try out Claude 4 Opus hooked into some systems if you haven’t already!
Serverino is a small, fast, and dependency-free HTTP server implemented in D. A minimal app with serverino can handle ~150k reqs/s on my laptop and it uses just a few MB of RAM.
The process of creating APIs for testing and automation should be as easy as possible. The tools that exist nowadays aren't good enough; they require you to use their programming language of choice or complex procedures for a task that should be simple. I built mock to try to solve that and still continue to maintain it.
A bit of a janky setup, but I've mostly gotten it to do what I want it to do after some head scratching.
The poetry one is React Native. The art and philosophy ones are Swift/Kotlin. I wanted to see if you could use LLMs to effectively create a cross-platform app. The idea behind React Native was that you write it once in an approachable language, then the framework compiles to native app code. In 2025, the approachable language you code in is English, and the LLM now generates native app code.
It was generally a success, and I feel less of a need for the development overhead of React Native these days.
https://apps.apple.com/gb/app/for-arts-sake/id6744744230
https://apps.apple.com/gb/app/daily-philosophy/id6472272901
https://apps.apple.com/gb/app/the-poetry-corner/id1602552624
[1] https://codeberg.org/gudzpoz/Juicemacs/src/branch/main/elisp
[2] https://www.graalvm.org/latest/graalvm-as-a-platform/languag...
[3] https://www.gnu.org/software/emacs/manual/html_node/elisp/Bu...
https://shop.boox.com/products/go103
I dabbled in hardware and I quickly found you need millions to do anything.
However, this definitely is a market waiting for a product. I'd lean towards looking at whether you can add a custom screen to the Framework laptop.
That'll be much cheaper and easier to build. I reckon you'd only need a custom Linux driver for the screen.
Interesting, I like the idea of a custom screen on the Framework. I'm sure that may come with its own challenges as well :)
Here are a couple of related videos:
https://www.inclusivecolors.com/
No AI or autogeneration stuff, more like an advanced editor that lets you tweak large sets of colors to your liking and test they pass contrast checks in advance before you start using them in your UI/designs.
Think of it as a true drop-in replacement for Postgres that runs on multiple nodes. It internally does replication, sharding and leader election. Just add more nodes and you increase both read and write scale.
I personally am working on a few things like online major upgrades, async replication for DR, enhanced backup/restore/PITR/clone capabilities, and more recently supporting the DocumentDB extension which provides a true MongoDB API.
Being a startup I also get to talk with large customers, help with marketing content, and participate in database conferences.
The retail site is 100% custom code built in Crystal (server) and Svelte (client). The only part that isn't running my own code is our checkout flow -- I let Shopify handle everything after "Add to Cart".
Our system backend is a separate Crystal app which handles inventory management, pricing research, and price prediction. I've developed an ML model to do price prediction and it kinda works?
What I'm actually working on: This is my full-time gig and probably 60% of my time is spent running the business (going to coin shows, buying coins, photographing new purchases, etc.) and 40% is spent writing code to make the 60% run more efficiently :). It seems I have an infinite list of things to do -- improvements to our retail site; Improvements in how to efficiently go from coin to retail listing (turns out you can send just photos of coins to Claude and with the right prompt it will actually give you a reasonably good description that doesn't sound toooooo AI slop-y); Next "big" project is adapting our ML model for paper currency. The taxonomy is similar but not the same and there's a whole world of notes out there that need to be priced.
Always happy to talk about this stuff so always feel free to email with any numismatic (or tech-numismatic) questions. noah@rarity7.com.
I got tired of missing deliveries, so now software answers the buzzer.
Using a mix of telephony, transcriptions, and websockets. Webserver is in C++.
A block-based TUI note/task application using the Charm tools. I know there’s a billion note apps out there, but none fit my mental model, so just hacking my own.
Goal is to have a system of dumping info in and letting organization naturally rise from tagging.
Each tag has its own page that aggregates all blocks tagged with it, and can have a custom page layout depending on the defined “type” of the tag I.e. a person, project, etc.
Tasks are also first class citizens and can be aggregated with dependencies on other tasks.
I've been doing a lot of experiments evaluating LLM translation performance, and I used what I learnt (that LLMs make mistakes, but different LLMs make different mistakes, and they're better at critiquing translations than producing them) to make a hybrid translator (https://nuenki.app/translator) that beats everything else.
And I was invited to do a talk about that to a company, which was really cool! I'm 19, doing this in my gap year before uni.
https://brynet.ca/wallofpizza.html
But seriously, I'm looking for "no-strings" sponsors, if any companies (or individuals) would like to help support me so that I can focus on open source full time. Feel free to email me: https://brynet.ca/
This has been my weekend project for a long time, but it's only really working now.
I think it works great! The problem is, I think it works great. The issue is that it is doubly-lossy in that llms aren't perfect and translating from one language to another isn't perfect either. So the struggle here is in trusting the llm (because it's the only tool good enough for the job other than humans) while trying to look for solid ground so that users feel like they are moving forward and not astray.
I've documented a lot of my research into LLM translation at https://nuenki.app/blog, and I made an open source hybrid translator that beats any individual LLM at https://nuenki.app/translator
It uses the fact that
- LLMs are better at critiquing translations than producing them (even when thinking, which doesn't actually help!)
- When they make mistakes, the mistakes tend to be different to each other.
So it translates with the top 4-5 models based on my research, then has another model critique, compare, and combine.
It's more expensive than any one model, but it isn't super expensive. The main issue is that it's quite slow. Anyway, hopefully it's useful, and hopefully the data is useful too. Feel free to email/reply if you have any questions/ideas for tests etc.
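If it helps to picture it, the core loop is roughly the following (an illustrative sketch; the model names and the call_llm helper are placeholders, not the actual Nuenki code):

```python
# Rough sketch of the ensemble-then-critique idea (placeholder models and helper).
CANDIDATE_MODELS = ["model-a", "model-b", "model-c", "model-d"]
JUDGE_MODEL = "model-judge"

def call_llm(model: str, prompt: str) -> str:
    """Placeholder for whatever LLM API client you use."""
    raise NotImplementedError

def hybrid_translate(text: str, target_lang: str) -> str:
    # 1. Each candidate model produces its own translation; different models
    #    tend to make *different* mistakes.
    candidates = [
        call_llm(m, f"Translate into {target_lang}:\n\n{text}")
        for m in CANDIDATE_MODELS
    ]
    # 2. A judge model, which is better at critiquing than producing,
    #    compares the candidates and combines them into a final version.
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    critique_prompt = (
        f"Text to translate into {target_lang}:\n{text}\n\n"
        f"Candidate translations:\n{numbered}\n\n"
        "Critique the candidates, then output a single best translation."
    )
    return call_llm(JUDGE_MODEL, critique_prompt)
```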
It is in the LLM comparison blog posts, at least the newer ones, though it tends to be on the first line.
So those numbers are from an older version of the benchmark.
Coherence is done by:
- Translating English, to the target language, to English
- repeating three times
- Having 3 LLMs score how close the original English is to the new English
I like it because it's robust against LLM bias, but it obviously isn't exact, and I found that after a certain point it's actually negatively correlated with quality, because it incentivises literal, word by word translations.
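In code, that loop looks roughly like this (a sketch with placeholder translate/judge callables, not the real benchmark harness; whether the three repetitions chain or run independently is glossed over here):

```python
# Sketch of the round-trip coherence metric (placeholder callables, illustrative only).
from statistics import mean
from typing import Callable

Translate = Callable[[str, str], str]   # (text, target_language) -> translation
Judge = Callable[[str, str], float]     # (original, round_tripped) -> closeness score

def coherence(text: str, target_lang: str,
              translate: Translate, judges: list[Judge], rounds: int = 3) -> float:
    scores: list[float] = []
    for _ in range(rounds):
        # English -> target language -> back to English
        round_tripped = translate(translate(text, target_lang), "English")
        # Several judge LLMs score how close the round trip stayed to the original.
        scores.extend(judge(text, round_tripped) for judge in judges)
    return mean(scores)
```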
Accuracy and Idiomaticity are based on asking the judge LLMs to rate by how accurate / idiomatic the translations are. I mostly focused on idiomaticity, as it was the differentiator at the upper end.
The new benchmark has gone through a few iterations, and I'm still not super happy with it. Now it's just based on LLM scoring (this time 0-100), but with better stats, prompting, etc. I've still done some small scale tests on coherence, and I did some more today that I haven't published yet, and again they have DeepL and Lingvanex doing well because they tend towards quite rigid translations over idiomatic ones. Claude 4 is also interestingly doing quite well on those metrics.
I need to sleep, but I can discuss it more tomorrow, if you'd like.
It's basically a full-stack web platform written entirely from scratch. One of these days I'll write about it and get yelled at for reinventing the wheel.
But I'm using it internally and for my biotech clients and I'm still excited about it.
A game engine that lets you code multiplayer games without coding the multiplayer! My idea was to put multiplayer into the fabric of the programming language itself. This allows the engine to automatically turn your game into a multiplayer game, without you needing to learn anything about networking or synchronization. I imagine there are lots of people who have the talent and creativity to create a multiplayer game but don't have the interest or patience in learning how to code multiplayer, and so that's who this is for!
I've been working on this for 3 years and there were lots of tricky parts rolling back and deterministically executing a whole programming language, but it's working now! My next phase is to increase the breadth of features so better games can be made with it!
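For anyone curious about the rollback part: it is essentially the classic rollback-netcode idea applied to the interpreter's whole state, which is only my shorthand for it, not the engine's actual code. A toy sketch under that assumption:

```python
# Toy sketch of rollback + deterministic re-simulation (illustrative only).
import copy

class RollbackSim:
    def __init__(self, initial_state: dict):
        self.state = initial_state
        self.snapshots: dict[int, dict] = {0: copy.deepcopy(initial_state)}
        self.inputs: dict[int, list] = {}   # tick -> inputs from all players
        self.tick = 0

    def step(self, state: dict, inputs: list) -> dict:
        """Deterministic: same state + same inputs must always give the same result."""
        state = dict(state)
        state["x"] = state.get("x", 0) + sum(inputs)
        return state

    def advance(self, local_inputs: list) -> None:
        self.inputs.setdefault(self.tick, []).extend(local_inputs)
        self.state = self.step(self.state, self.inputs[self.tick])
        self.tick += 1
        self.snapshots[self.tick] = copy.deepcopy(self.state)

    def receive_remote_input(self, tick: int, inputs: list) -> None:
        # A remote input arrived for a tick we already simulated:
        # roll back to the snapshot and deterministically replay everything after it.
        self.inputs.setdefault(tick, []).extend(inputs)
        self.state = copy.deepcopy(self.snapshots[tick])
        for t in range(tick, self.tick):
            self.state = self.step(self.state, self.inputs.get(t, []))
            self.snapshots[t + 1] = copy.deepcopy(self.state)
```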
It simply emails you the story with Chinese characters, pinyin, etc., based on your level and story topics of interest.
Its real purpose is twofold: I enjoy data modeling, and doing just enough Rails work to regain fluency after a gap.
I didn't end up sending many, but I've noticed that it's really difficult to get AI to write in a decent style. I've tried giving it a list of AI-isms to avoid, and it just doesn't work.
I had the most success with DeepSeek V3, giving a list of AI-isms, then ending with "You have been randomly assigned the following writing style/personality: [codeblock]" then a stereotype. E.g. "Write in the style of a to-the-point, concise HN commenter" works alright, while "Write naturally and without AI-isms" is hopeless.
(Don't worry, I'm not using it for HN botting or whatever, it just tends to write in a nice style when you give it that)
* We are just starting with projects in Porto Belo, Brazil. We are adding more countries soon, but it is worth exploring the catalog.
It is still in early alpha, but you can already give it a try.
I've seen some pretty fun novel use cases, such as (multiple!) people using it to pick out glasses, wedding invites & so on.
I recently completed a leaderboard function that cross compares photos from different tests using Claude, which was really impressive and scared me for my day job..
An AI data scientist for serious data work. Think of it like an AI native Jupyter notebook.
A web app for people to get tarot readings, and create their own tarot cards using AI. I'm enjoying working on this because I'm using it as an opportunity to learn parts of the stack that I didn't usually do at my day job - frontend, design and marketing (my career has focused more on the backend).
In the long run, writing a gui for https://github.com/iesahin/xvc and Git.
A community powered, wikipedia-like, database for tracking police and their activities.
I was a YC founder in 2006 and still do software engineering and data science full-time, but on the side I also do Christian apologetics, helping fellow engineers/scientists/mathematicians seek answers to life's deepest questions.
Some cool articles for the HN crowd:
- My interview of Evan O'Dorney, a three-time Putnam Fellow and two-time IMO gold medalist, who converted to Catholic Christianity: https://www.saintbeluga.org/veritas-part-i-conversion-of-a-p...
- In-depth scientific overview of Eucharistic miracles: https://www.saintbeluga.org/eucharistic-miracles-god-under-t...
- Conversion testimony by the Chief Scientist at NASA JPL: https://www.saintbeluga.org/veritas-part-iii-bellows-of-aqui...
I've always enjoyed the farm-to-table concept, but I find it really hard to identify trustworthy companies. Wine has been done to death, but I feel extra virgin olive oil is currently underserved.
In particular:
To help solve forecasting & planning problems too hard to hold in your head, I’m converting natural-language formulations of constrained optimization problems into (back)solvable mathematical programs, whose candidate solutions are “scenarios” in a multi-dimensional “scenario landscape” that can be pivoted, filtered, or otherwise interrogated by an LLM equipped with analytical tools:
- 5 minute demo: https://youtu.be/-QdqiLp_9nY
- Details: https://spindle.ai
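To make "backsolvable mathematical program" concrete, here is a toy of the kind of thing a formulation can compile down to (a made-up two-product planning problem solved with scipy; nothing to do with Spindle's actual internals):

```python
# Toy example: a natural-language planning question compiled into a linear program.
# "We make widgets ($50 profit) and gadgets ($20 profit). Widgets take 2 machine-hours,
#  gadgets take 1. We have 100 machine-hours and demand for at most 40 widgets.
#  How many of each should we plan to build?"
from scipy.optimize import linprog

# linprog minimizes, so negate the profits to maximize them.
c = [-50, -20]                 # objective coefficients: [widgets, gadgets]
A_ub = [[2, 1],                # machine-hours constraint: 2w + 1g <= 100
        [1, 0]]                # demand constraint:         w      <= 40
b_ub = [100, 40]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
widgets, gadgets = result.x    # optimum here: 40 widgets, 20 gadgets
print(f"Plan: {widgets:.0f} widgets, {gadgets:.0f} gadgets, profit ${-result.fun:.0f}")
```

Each feasible solution of a program like this is one "scenario"; sweeping the constraints or objective gives the scenario landscape that an LLM with analysis tools can then interrogate.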
Eager to connect with anyone interested in similarly neurosymbolic “tools for thought”: carson@carsonkahn.com | +1 (303) 808-5874
Software-wise, doing Proxmox + NixOS LXC
Stuff that should be open source, open data
Made state-of-the-art datasets, health models, research systems & agents so far @ www.ii.inc, but the plan is AI-first open source full-stack systems for every regulated sector.
Have a distributed ledger announcing soon to tie it all together and create a flywheel so more folks can get access to AI.
A social event planning app to capture the fun my friends and I had with Facebook events, but without the Facebook. We have native apps for iOS, Android and the web. dateit has generous free features compared to competing apps (SMS invites, photo upload, customization).
My cofounder and I have fully bootstrapped this and now it mostly self sustains which is an exciting achievement!
It's been a fun project to hack on for the last couple years and spawned several interesting side quests. For example, the backend is in Swift (as I started as an iOS dev) so that has been an exciting space to work in.
For older kids I've been making it easier to write games in Godot using TypeScript:
https://breaka.club/blog/godots-most-powerful-scripting-lang...
I'm building tooling using this technology which allows kids to create their own games, this is itself presented as a game kids can play through. Basically, imagine if Roblox actually delivered on its promises to kids.
Most of what we're building will be open sourced, so that older kids / young adults will be able export their projects and share their creations stand-alone.
Of course, telling kids they can create their own game is only relevant if kids want to do that. We're not locked into one way of thinking. We've also modified Overcooked 2, a traditionally co-op game, and introduced a visual scripting platform which allows kids to code their way through levels:
https://www.youtube.com/watch?v=ackD3G_D2Hc
Overcooked 2 won't be the only game for which we do this. Introducing coding to existing games is a fun way to teach kids to code, without yet burdening kids with too much creative freedom. Kids already want to play these games, so this approach allows us to bring educational tooling to kids rather than vice versa.
I used to be Head of Engineering at Ender, where we ran custom Minecraft servers for kids: https://joinender.com/ and prior to that I was Head of Engineering at Prequel / Beta Camp, where we ran courses that helped teenagers learn about entrepreneurship: https://www.beta.camp/. During peak COVID I also ran a social-emotional development book subscription service with my wife, a primary school teacher.
It’s just a basic IntelliJ plugin which provides an infinite canvas to add code bookmarks to. I work on a large code base and often have to take on tasks involving lots of unfamiliar areas of code and components which influence each other only through long chains of indirection. Having a visual space to lay things out, draw connections, and quickly jump back into the code has been really helpful
The canvas and UI is built using Java AWT since that’s what IntelliJ plugins are built on, but it occurred to me that I could just throw in a web view and use any of the existing JS libraries for working on an infinite canvas. React Flow has seemed like the best option with tldraw being what I’d fallback to.
But then.. if the canvas is built with web technology then there’s no reason to keep it just within an IntelliJ plugin vs just a standalone web app with the ability to contain generic content that might open files in IntelliJ or any other editor. I’m pretty sure the “knowledge database on a canvas” thing has been done a number of times already so I want to also see if there are existing open source projects that it’d be easy enough to just add a special node type to
There's no reason I should have my browser tabs crash when I view a pull request involving more than 100 files. The page should already have been generated on the server before I requested it. The information is available. All that remains are excuses and wasted CPU cycles.
I analyzed 7 years of Armorgames.com data (999 games) to understand web gaming market trends.
Key findings that might interest fellow developers:
User standards are rising: Average ratings dropped from 7.02 (2018) to 6.45 (2025), but the percentage of high-quality games (8.5+ rating) actually increased from 12.3% to 14.7%. This suggests quality polarization rather than overall decline.
Genre trends: Rising: idle games, strategy, RPGs (deeper gameplay mechanics). Declining: traditional arcade/action games. Stable: puzzle and adventure (web gaming staples).
Innovation wins: The highest-rated "hidden gems" all had one thing in common - innovative mechanics rather than genre variations. Games like "Detective Bass: Fish Out of Water" (9.3 rating) and "SYNTAXIA" (9.1 rating) show originality still pays off.
Market maturation: The correlation between rating and popularity is surprisingly weak (0.126), suggesting quality ≠ virality. However, play count strongly correlates with favorites (0.712).
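For the curious, the numbers above are plain Pearson correlations and per-year aggregates over the scraped games table; a minimal sketch with made-up column names and file path:

```python
# Minimal sketch of the correlation analysis (file and column names are illustrative).
import pandas as pd

games = pd.read_csv("armorgames_games.csv")  # one row per game

print(games["rating"].corr(games["play_count"]))     # quality vs. virality (weak)
print(games["play_count"].corr(games["favorites"]))  # popularity vs. favorites (strong)

# Average rating by release year, to see the 2018 -> 2025 drift
print(games.groupby("release_year")["rating"].mean())

# Share of "high quality" games (rating >= 8.5) per year
high_quality = (
    games.assign(is_high=games["rating"] >= 8.5)
         .groupby("release_year")["is_high"].mean()
)
print(high_quality)
```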
https://thegreatestbooks.org/recommendations?demo=tgb2025
Warning: account required, and the full-featured version where you can specify book length, include/exclude genres/subjects, etc. requires a membership. If you would like to test it though, just e-mail me at contact@thegreatestbooks.org and I'll mark your account as paid.
I had fun with the interface - it's themeable, and inspired by classic cameras: lets you quickly switch between full auto/half auto/full manual modes with dedicated dials.
Going to add more features in the coming months, but the #1 focus is keeping it super simple and blazing fast.
Given that virtually all processing pipelines these days stack multiple shots to create a photo, as far as I'm aware this is the only way of getting a "traditional" single-exposure photo on iPhone, where the shutter speed is actually meaningful.
There are other camera apps that support Bayer RAW capture, but those support a bunch of other formats, and you probably don't want Bayer RAW for most of your shots anyways, so for my own workflow it's better to have a dedicated app that I can launch really quickly rather than tap around in menus.
We are up to almost 200 puzzles, with around 700 players per day. I've become much better at finding videos that work well as puzzles and am working on adding small quality of life updates.
An app that can turn anything into adorable stickers. In my region, people use WhatsApp a lot, and there's this ability to create custom stickers. So we use a lot of stickers in conversations.
Separately, working on building an app to assist with cipher analysis of things like Kryptos and Bitcoin/crypto puzzles. Loosely modeled after CyberChef, but a native app that is capable of far more detailed frequency analysis and brute forcing with the GPU.
Also, experimenting with LLM workflows for both work and the rest of life. Prompt engineering seems like an incredibly valuable skillset for the next decade.
It's a tarpit idea that a lot of users and investors like to shit on, so I decided to just build something that I like myself.
App with dynamic/flexible spaced repetition flashcards for language learning.
Recently I've added dialog & definition cards, so I can learn German from short dialogs with images and audio.
This involves exploring the ethics of using magic to accomplish tasks. The problem then boils down to a matter of epistemology— a testing problem. But testing is something you only do in the absence of trust. So critical thinking begins with the rejection of trust.
It’s been interesting to read about “Anomalistic Psychology”, which is the study of magical thinking. Malinowski commented that not a single canoe was built by Melanesian islanders without the use of magic, yet none of them would say that they could be built without craftsmanship.
Magic is the belief in the infallibility of hope, to paraphrase Malinowski. Which may explain why too many smart people are uncritical about LLMs.
Currently working on making it even more reliable, navigating pages, and understanding website images, not just the text of the webpage. Also getting ready for a Product Hunt launch.
A tool that scans California legislation and flags bills that might affect your startup. You drop in a link to your website or a short description, and it returns plain-English summaries and impact analysis of relevant bills.
Everything from farm related stuff (water, food, shelter, etc.) to self-sufficiency (solar, etc.) to real time monitoring (which cameras, affecting power supply).
Who knows if it'll ever happen, but just planning everything in detail is a lot of fun. Especially with weird regulatory constraints where I'm living, there's a lot to watch out for.
Example: Solar panels at >3m height need building permits. Snow in winter means panels should be set up at a specific angle. So my initial plan of putting the panels on my 2.5m-high carport doesn't work. Either a lower carport, a lower angle, a different place, or getting a building permit.
I have a couple of other apps that I have plans for as well. If I get sick of playing traffic cop with the phone app, I may take a break and work on them.
Keep working on a test automation library that should allow writing browser/mobile tests easier with LLMs help, so I could focus more on testing, and less on automating.
https://github.com/damien/content-security-policy/tree/main/...
Once I’m happy with my take on a reference implementation I’m hoping to create some tooling with it to do some interesting analysis of CSP abstract syntax trees to identify things like policy anti patterns, reporting on capabilities a policy grants to a domain/resource, and a better mechanism for allowing tools like OPA, SemGrep, etc. to define and enforce rules on a policy.
In the app you pick a folder with videos in it and it stores the path, metadata, extracts frames as images, uses a local whisper model to transcribe the audio into subtitles, then sends a selection of the snapshots and the subtitles to an LLM to be summarised. The LLM sends back an XML document with a bunch of details about the video, including a title, detailed summary and information on objects, text, people, animals, locations, distinct moments etc. Some of these are also timestamped and most have relationships (i.e. this object belongs to this location, this text was on this object etc). I store all that in a local SQLite database and then do another LLM call with this summary asking for categories and tags, then store them in the DB against each video. The App UI is essentially tags you can click to narrow down returned videos.
I plan on adding a natural language search (Maybe RAG -- need to look into the latest best way), have half added Projects so I can group videos after finding the ones I want, and have a bunch of other ideas for this too. I've been programming this with some early help from Aider and Claude Sonnet. It's getting a bit complex now, so I do the majority of code changes, though the AI has done a fair bit. It's been heaps of fun, and I'm using it now in "production" (haha - on my PC)
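A stripped-down sketch of the ingest step described above (local Whisper transcription plus an LLM summary stored in SQLite; the paths, prompt, and the summarise_with_llm helper are placeholders, and the frame-extraction/XML parts are left out):

```python
# Stripped-down sketch of the ingest pipeline (placeholder paths and LLM helper).
import sqlite3
import whisper  # openai-whisper; uses ffmpeg to pull audio from the video file

def summarise_with_llm(subtitles: str) -> str:
    """Placeholder: send subtitles (plus a few frame snapshots) to an LLM and
    get back a structured summary (title, objects, people, locations, ...)."""
    raise NotImplementedError

def ingest(video_path: str, db_path: str = "videos.db") -> None:
    # 1. Transcribe the audio locally with Whisper.
    model = whisper.load_model("base")
    subtitles = model.transcribe(video_path)["text"]

    # 2. Ask the LLM for a structured summary of the video.
    summary = summarise_with_llm(subtitles)

    # 3. Store everything in a local SQLite database.
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS videos (path TEXT PRIMARY KEY, subtitles TEXT, summary TEXT)"
    )
    con.execute(
        "INSERT OR REPLACE INTO videos VALUES (?, ?, ?)",
        (video_path, subtitles, summary),
    )
    con.commit()
    con.close()
```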
It was in the works for about a year, and I'm now trying to find ways to make it more marketable and useful.
The three main things I'm working on are:
1. GitLab support (GitHub only at the moment) 2. A demo on the landing page that doesn't require any sign-up. 3. Better "vibe coding" experience through the Chat interface for those who want it.
I built it with TypeScript on the front- and back-end, React, Node.js, and PostgreSQL. And I over-engineered it, as one does, with a Redis cache and websockets to push the latest data to web clients so the latest info is always shown without needing to refresh. I'm using the OpenAI API right now, but I want to switch to local models when I can invest in the hardware for it.
Edit: https://mysticode.ai - Would absolutely love feedback.
3D printing has completely changed my spare time usage.
You can use it to check and summarize news and social media, fill out forms, send messages, book holidays, do your online shopping, conduct research, and pretty much anything else that can be done within a browser.
Here’s a simple app my toddler made to generate toy trains[0].
“Real users” are using it to build personal software tools like finance dashboards, content generators, and educational apps.
Right now the functionality is great for many simple tools, but it’s notably lacking a first-class data layer (coming soon!).
All of the AI-generated code runs in secure MicroVMs, and the front-ends are just static assets, meaning the apps scale to zero when not in use.
We’re currently in the process of making the builder less of a “workflow” and more purely agentic, which should improve the overall success rate.
Can you explain what you mean by this? How did a 3-year-old (or younger) meaningfully contribute to the design of this app? Do they know how to read?
An AI trip planner with a nice twist. It shows you everything you need to know about a place even before getting there: images, a great summary, a cost-of-living breakdown, weather conditions, etc. It also comes with the usual features you'd expect in a trip-planning app (AI itinerary suggestions, a travel expense tracker, group chat for group trips, Google Places integration for looking up places to eat, things to do, healthcare facilities, and transport hubs, and a private travel checklist). You should check it out today!
A developer-focused IoT Cloud Platform. The idea stems from pain points experienced while automating an indoor farm a few years ago where I had to spend way too much time building the data collection and analysis infrastructure instead of focusing on the actual automation.
Devices connect via secure MQTT, HTTP, or WebSockets and send structured, typed data. Each device gets its own sequential mailbox for messages. You can trigger webhooks or broadcast messages to other devices based on incoming data, powered by programmable actions.
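This isn't an actual SDK, just a hypothetical device-side sketch using paho-mqtt to show the general shape; the broker hostname, topic layout, credentials, and payload schema are all placeholders:

    # Hypothetical device-side sketch (not a real SDK): publish typed JSON
    # readings over secure MQTT. Hostname, topic, credentials, and schema are
    # placeholders for illustration only.
    import json
    import ssl
    import time

    import paho.mqtt.client as mqtt

    DEVICE_ID = "greenhouse-sensor-01"

    client = mqtt.Client(client_id=DEVICE_ID)      # paho-mqtt 1.x style constructor
    client.tls_set(cert_reqs=ssl.CERT_REQUIRED)    # TLS-secured MQTT
    client.username_pw_set("device-token", "not-a-real-secret")
    client.connect("mqtt.example-iot-platform.test", 8883)
    client.loop_start()

    while True:
        # Structured, typed reading; the platform side would validate this
        # against the device's declared schema before routing it to actions.
        reading = {"ts": time.time(), "temperature_c": 21.4, "humidity_pct": 55.0}
        client.publish(f"devices/{DEVICE_ID}/telemetry", json.dumps(reading), qos=1)
        time.sleep(60)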
Just deployed to production. Currently working on Device SDKs (coming very soon) and time-series analytics. Check out the platform, we're in technical preview now. Happy to answer questions and appreciate any feedback.
The app will be local-first, without locking important features behind a subscription.
I very recently finished my bachelor thesis, which was about this app (with a focus on usability and market fit).
Also made this site a few days ago, get notified when it launches: https://dailyselftrack.com
More about me here: https://bryanhogan.com
A document specification for defining command line interfaces.
It's really just a fun side project to get more familiar with Go. The goal is to be able to generate boilerplate code in a few languages/frameworks and to generate documentation in a couple formats.
I have the impression clients like it when their code is “visual”, so I'm trying to learn more of it to attract new clients.
I'm going to plug a couple phones into it, but the main goal is to get all my old computers to talk to each other using their modems.
My middle-school-aged kids were able to escape with free proxies, VPNs, Tor, etc. in the past, which forced me to figure out a way to lock things down completely when it's absolutely needed.
A 3D game to help students in grades 5-8 learn Arithmetic, Fractions, Geometry, and Algebra.
50% or more of middle school students experience math anxiety, and it's no wonder that so many people grow up believing, "I'm not a math person." Math can be incredibly fun and beautiful if approached and experienced the right way. Mathbreakers is a vibrant, interactive world where all game mechanics are built on intrinsic mathematical properties, so simply by playing the game, a foundation of understanding of those concepts is built.
We're doing early prototype testing now with a planned launch in September 2025. The game engine is PlayCanvas (engine-only) and the platform is WebGL (Mac/PC/ChromeOS).
The idea is simple: instead of blasting you with every keyword mention like F5bot, Replyhub filters for posts where people show real buying intent. These are posts where they ask for recommendations, compare products, or look for solutions.
It also suggests context-aware replies and helps collect leads from people who show real interest.
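As a rough illustration of the general idea (not necessarily how Replyhub implements it), buying-intent filtering can be done with an LLM acting as a zero-shot classifier; the model and prompt below are placeholders:

    # Hypothetical intent filter -- an illustration, not Replyhub's code.
    # Uses an LLM as a zero-shot classifier; model and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def has_buying_intent(post_text: str) -> bool:
        """Is the author asking for recommendations, comparing products,
        or looking for a solution?"""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content":
                       "Answer only YES or NO. Does this post show buying intent "
                       "(asking for recommendations, comparing products, or looking "
                       "for a solution)?\n\n" + post_text}])
        return resp.choices[0].message.content.strip().upper().startswith("YES")

    posts = ["Any good uptime monitoring tool for a small team?",
             "Just read an article about uptime monitoring, interesting stuff."]
    leads = [p for p in posts if has_buying_intent(p)]   # keeps only the first post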
If you want to reach niche communities where people are actively discussing products, it might be useful.
Would love to hear feedback or questions from folks here.
It's not exactly version 2.0; it's built entirely from scratch, and instead of only Hacker News it can also be used for similar forum sites like Lobste.rs, Tildes, Lemmy, etc. In fact, it's built in a way that support for more websites can easily be added on the fly.
I've restarted this three times in the last two years, but the current code is finally coming together to be released to the public.
Currently, I already have the reader part working, so you can read posts and articles, read comments, and expand/collapse comment threads. I don't have the writer part working yet (voting, favoriting, commenting). I'm debating whether I should just release the reader part first and then continue working on the writer part and release it as part of an update. Thoughts?
Rather than manually writing prompts for LLMs (which is like hand-coding bytecode for CPUs), you declare a structure and instructions and let the system do the prompt writing for you.
It also exposes an optimizer which can do sophisticated prompt learning for tasks.
github.com/viksit/selvedge
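To make the idea concrete, here is a toy sketch of the "declare a structure, let the system write the prompt" pattern - this is not selvedge's actual API, just the shape of the approach:

    # Toy sketch of declarative prompting (NOT selvedge's actual API): declare
    # the task's instructions, inputs, and outputs, and let a helper assemble
    # the prompt text instead of hand-writing it.
    from dataclasses import dataclass, field

    @dataclass
    class TaskSpec:
        instructions: str
        inputs: dict[str, str]    # field name -> description
        outputs: dict[str, str]   # field name -> description
        examples: list[dict] = field(default_factory=list)

    def compile_prompt(spec: TaskSpec, values: dict[str, str]) -> str:
        """Turn the declared structure into a concrete prompt string."""
        lines = [spec.instructions, "", "Inputs:"]
        lines += [f"- {name} ({desc}): {values[name]}" for name, desc in spec.inputs.items()]
        lines += ["", "Return JSON with fields:"]
        lines += [f"- {name}: {desc}" for name, desc in spec.outputs.items()]
        return "\n".join(lines)

    sentiment = TaskSpec(
        instructions="Classify the sentiment of a customer review.",
        inputs={"review": "raw review text"},
        outputs={"label": "positive | neutral | negative", "confidence": "0.0-1.0"})

    print(compile_prompt(sentiment, {"review": "The battery died after two days."}))

An optimizer then has something structured to work on: it can rewrite the instructions or swap in better examples against a scoring function, rather than a human hand-tuning a prompt string.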
https://bearingsonly.net - a submarine combat game in the browser.
The idea is that instead of running Nominatim, which is costly, you can just query Parquet files over the network.
Instead of a cluster of PostgreSQL servers, all I need is a bunch of static hosting holding the dataset, which is around 1 TB.
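The post doesn't name a query engine, but assuming something like DuckDB with its httpfs extension, a lookup against statically hosted Parquet could look roughly like this (the URL, file name, and columns are placeholders). HTTP range requests mean only the needed row groups get fetched, not the whole ~1 TB:

    # Sketch of querying statically hosted Parquet instead of a Nominatim /
    # PostgreSQL setup. DuckDB + httpfs is an assumption; the URL and column
    # names are placeholders, not the project's real layout.
    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL httpfs")
    con.execute("LOAD httpfs")

    # Bounding-box lookup; only the relevant row groups are downloaded.
    rows = con.execute("""
        SELECT name, admin, country, lat, lon
        FROM read_parquet('https://static.example.com/geonames/places.parquet')
        WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?
        LIMIT 10
    """, [52.30, 52.40, 4.80, 4.95]).fetchall()

    for row in rows:
        print(row)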
Send me an email if this interests you; it's in my profile.
While I love the official TypeScript handbook, it's not easy to play around with the code examples or approach it as a beginner. I started working on a complete TypeScript tutorial that also showcases some advanced use cases. All the code examples run in the browser, and there are some neat visualizations that clearly show what the type system has picked up.
I've been trying to fix some of the performance issues, finish writing all the content, and add documentation before making the GitHub repository public - right now the page can hang when loading a long tutorial.
I am also working on a web frontend for rrdtool (for graphing collectd statistics): https://github.com/LevitatingBusinessMan/collectrack
And a wayland bar that is configured via a Ruby DSL: https://github.com/LevitatingBusinessMan/rubybar
Flutter + Flame + Spine + YarnSpinner. After a year of development, we’re coming up on some very fun milestones!!!
I've added a React SSR system. It has Node subprocess code for rendering HTML from Java via stdin/stdout. There's a Node/Vite proxy server that adds the fancy HMR you expect from SPA apps.
It supports multiple roots on a page, every SSR component has data-props and data-componentname, and the entry script just queries those attributes and hydrates everything.
The node renderer script is packaged as an EXE which is deployed in WEB-INF on the server.
It's fun to add the amazing React tooling to an old codebase. It also shows how you really, really, really do not need NextJS.
Unlike most programs, which just work in Fil-C with few or no changes, libffi basically needs to be rewritten. Instead of using assembly to reflectively craft calls, it needs to use the Fil-C zcall API. And instead of JITing closures, it needs to use the Fil-C zclosure_new API.
Should be fun
Would love some feedback overall and suggestions: https://phended.com
I'm developing a small community focused on rating TV show endings. I've grown tired of investing time in series that get canceled and end on cliffhangers. Unless the show is really good I'd rather skip it, and even then I prefer to go in knowing how it ends.
Reactivity updates the state of the notebook automatically, so you don't have to keep track of which cells to execute again. Side effects are managed to make the notebook easier to reason about while maintaining reactivity and the ability to interact with the outside world.
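A toy sketch of the reactivity part (just an illustration of the pattern, not the actual implementation): track which names each cell defines and reads, and re-run the cells that read a name whenever the cell defining it changes.

    # Toy reactive-notebook sketch -- the cell ids and the explicit defines/reads
    # declarations are simplifications; a real system would infer them and
    # propagate re-runs transitively in topological order.
    class Notebook:
        def __init__(self):
            self.cells = {}   # cell id -> (source, defines, reads)
            self.env = {}     # shared namespace for all cells

        def set_cell(self, cell_id, source, defines, reads):
            self.cells[cell_id] = (source, set(defines), set(reads))
            self._run(cell_id)
            # Re-run every cell that reads something this cell defines.
            for other, (_, _, other_reads) in self.cells.items():
                if other != cell_id and other_reads & set(defines):
                    self._run(other)

        def _run(self, cell_id):
            source, _, _ = self.cells[cell_id]
            exec(source, self.env)   # side effects confined to the shared namespace

    nb = Notebook()
    nb.set_cell("a", "x = 2", defines=["x"], reads=[])
    nb.set_cell("b", "y = x * 10\nprint('y =', y)", defines=["y"], reads=["x"])
    nb.set_cell("a", "x = 5", defines=["x"], reads=[])   # cell "b" re-runs, prints y = 50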