So I converted the PDF form into modern, browser-friendly web forms - and kept every field 1:1 with the original. You fill the form, submit it, and get the official USCIS PDF filled.
Currently experimenting with a proactive agent, Don, that pops up like Clippy and also works over email.
https://donethat.ai All the data security measures here: https://donethat.ai/data And other tools out there: https://donethat.ai/compare
I just updated the RAW pipeline and I'm really happy with how the resulting photos look, plus there's this cool "RAW+ProRAW" capture mode I introduced recently.
https://apps.apple.com/us/app/unpro-camera/id6535677796
I initially released it early last year and have been using it as my main camera app since, but I haven't mentioned it in one of these threads before. Unfortunately this post has come just a bit early for my most recent update to be approved; there are some nice improvements coming.
Plus, having full control over the way photos look, I've customized the output to match my taste; I don't think there are any other camera apps that produce photos quite like mine.
My default camera app is Leica LUX, but I’ve been curious about unprocessed photography and can’t stomach the cost for Halide.
Just installed :)
I'm enjoying building a website with solitaire and puzzle games.
I am currently rewriting the engine for the fourth time and plan to add 400 games to the platform in the coming months, as well as social features such as daily challenges, awards and leaderboards.
My main goal, however, is to make this project the largest collection of free modern solitaire games available for mobile devices and desktop computers.
So far, the project has been incredibly exciting, and I've learned so much!
https://www.inclusivecolors.com/
There are millions of tools that try to autogenerate colors for you using algorithms and AI, but they usually ignore WCAG accessible contrast requirements, don't let you customise the tints/shades, don't let you create a full palette of colors, and the colors often don't look right on actual designs.
This tool is meant to make customising tints/shades intuitive and quick enough in a few clicks via a hue/saturation/lightness curve editing interface that you won't want to rely on autogeneration. There's also a live mockup showing how your palette looks on a UI design that checks pairings pass contrast requirements, to guide you as you tweak your colors and to teach you the WCAG rules.
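For reference, the WCAG 2.x contrast check that the mockup applies boils down to the spec's luminance math - a minimal Python sketch of the formula (the spec's definition, shown for illustration rather than the tool's internal code):

# WCAG 2.x contrast-ratio math (a sketch of what the check computes).
def relative_luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    lin = lambda c: c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def contrast_ratio(fg: str, bg: str) -> float:
    hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# WCAG AA: >= 4.5:1 for normal text, >= 3:1 for large text.
print(round(contrast_ratio("#767676", "#ffffff"), 2))  # ~4.54, just passes AA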
You can then export your palette to regular CSS variables, Tailwind, Figma or Adobe for use in your designs.
Really open to any feedback on features that would be useful! I think the only way I can make it simpler to use is to make it more opinionated about how your palette should be designed so interested in any thoughts about that too.
That's awesome. I haven't kept up with what helps you get into AI recommendations. Guessing it's related to search result rankings? Not sure if the site would be in the training data. Curious if you asked about accessibility, as that's my focus.
https://euzoia.substack.com The concept: https://euzoia.org
Tried to do this as low-tech as possible, so the website is just an off-the-shelf Notion wrapper.
You get to choose the genres you're interested in, and it creates playlists from the music in your library. They get updated every day - think a better, curated-by-you version of the Daily Mixes. You can add some advanced filters as well, if you really want to customise what music you'll get.
It works best if you follow a good number of artists. Optionally, you can get recommendations from artists featured in playlists you follow or have created - if you don't follow many (or any) artists, you should enable that for the service to be useful.
Does this include the "Liked Songs" playlist? I'm terrible at actually following artists.
But if you have any kind of playlists - it would work.
We are experimenting with Keybase key holders as CAs:
https://news.ycombinator.com/item?id=46576590
And also .gov email holders:
https://blog.certisfy.com/2025/12/using-gov-email-addresses-...
It's all self-service and requires no sign-up or download of anything; the app (https://certisfy.com/app) is an in-browser app and all the cryptography happens in the browser.
MetalLB - https://metallb.io/ load balancer
Traefik - https://doc.traefik.io/traefik/getting-started/quick-start-with-kubernetes/ ingress
Local Path - https://docs.apps.rancher.io/reference-guides/local-path-provisioner storage
Open to suggestions.

First game is https://geocentric.top where users can sculpt a planet collaboratively and place trees or houses (for now very limited). I plan on making it an idle sim where players will be able to interact by dropping some food/events for the creatures on it to evolve through time.
Second, a remote logger/metrics/user management tool where one can track all their logs, live metrics, uptimes, identify users, etc. I hope to have a v1 during this first quarter, and I'm currently my own first user, as I have it hosted at https://app.getboringmetrics.com to centralize all my side projects into a single platform.
This is really fun. Always a fan of any game that shows other people in real time
I wrote the firmware in Arduino, which was a great learning experience because I typically work with CircuitPython or Go, where I'm less constrained.
Used AI to create my own mind-mapping tool for private use.
I also created a private Cursor-like / Lovable-like tool that I can use for my own vibe-code prototyping on the go with my phone.
The app is entirely in Java, with JavaFX for the UI and Lucene for the search engine. To read and render PDFs I use PDFium.
Holidays nuked all the hot-cached context in my head. I spent a few days just spinning wheels until it repopulated. But the basic idea works now!
Much testing and benchmarking work remains to make sure it's not going to lose data, and that it won't denial-of-service itself (because object-map -> facts fan-out is big).
Also a second giant blog post is due (following the one discussed above). Lots of notes have accumulated.
It will be fun even if the concept ultimately crashes and burns to the ground :)
In which case, there's always datomic and xtdb :D
(def repl-facts
  [{:alt.site.evalapply/meta
    {:description "Root namespace for a named site."}}
   {:alt.site.evalapply/features
    {:paid #{:alt.site-feature/feat-1
             :alt.site-feature/feat-2
             :alt.site-feature/feat-3}
     :trial #{:alt.site-feature/feat-4}
     :complimentary #{}
     :available #{}}}
   {:alt.site.evalapply/users
    {:authorised #{:alt.user.evalapply/user-1
                   :alt.user.evalapply/user-2
                   :alt.user.evalapply/user-3}
     :unauthorized #{:alt.root.*}}
    :alt.user.evalapply/user-1 {:name {:first "Wiley"
                                       :last "Coyote"}
                                :roles #{:alt.role.evalapply/owner}}
    :alt.role.evalapply/owner {:rw #{:alt.user.evalapply/*}
                               :ro #{}}}])

(assert-facts! (#'user/system-state)
               ::app
               :alt.root/user-1
               (uuid/v7)
               repl-facts)

(redact-facts! (#'user/system-state)
               ::app
               :alt.root/user-1
               (uuid/v7)
               (subvec repl-facts 2))

(read-now! (#'user/system-state)
           ::app)
(edit: add note about upcoming blog)

> but in Sqlite doesn't WAL mode give you a kind of super simple, super basic temporal data system
Yes-ish.
For one, it is unitemporal. For another, it is for SQLite's own transactions, not the individual datums. Litestream is the way to replicate the WAL into an object store / elsewhere. litestream-vfs is also looking good!
https://litestream.io/ , https://litestream.io/reference/vfs/ (announced here: https://fly.io/blog/litestream-vfs/ )
I'm trying to emulate the data assert/redact properties of append-only bitemporal-by-design data stores like XTDB. The giant blog post builds up the intuition from scratch. Or at least tries to.
So my system is going to be: bitemporal data, enforced by SQLite schema and application checks, along with WAL replication for point-in-time transaction backup/restore. Both are entirely distinct, and unaware of each other.
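Roughly this shape, to give an idea (an illustrative sketch in Python + sqlite3, not the actual schema): facts are append-only rows carrying both valid time and transaction time, and a redaction is just another row.

# Illustrative sketch only (not the actual schema): append-only facts carrying
# both valid time (domain truth) and transaction time (when the DB learned it),
# so redaction is another INSERT rather than an UPDATE/DELETE.
import sqlite3, time, uuid

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE facts (
        fact_id    TEXT NOT NULL,
        entity     TEXT NOT NULL,
        attribute  TEXT NOT NULL,
        value      TEXT,
        valid_from REAL NOT NULL,
        tx_time    REAL NOT NULL,
        op         TEXT NOT NULL CHECK (op IN ('assert', 'redact'))
    )""")

def assert_fact(entity, attribute, value):
    db.execute("INSERT INTO facts VALUES (?,?,?,?,?,?,?)",
               (str(uuid.uuid4()), entity, attribute, value,
                time.time(), time.time(), "assert"))

def redact_fact(fact_id):
    row = db.execute("SELECT entity, attribute FROM facts WHERE fact_id = ?",
                     (fact_id,)).fetchone()
    db.execute("INSERT INTO facts VALUES (?,?,?,?,?,?,?)",
               (fact_id, row[0], row[1], None, time.time(), time.time(), "redact"))

# "As of" queries then filter on tx_time / valid_from instead of mutating rows.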
Top of my idea list right now: add "ask your LLM" buttons to my email submission form that open ChatGPT/Perp/Claude and auto-fill a query asking it why you should be friends with me.
Sample links (I hope it says nice things about me!):
Claude: https://claude.ai/new?q=Do+deep+research+on+a+person%2C+dvsj....
ChatGPT: https://chat.openai.com/?q=Do+deep+research+on+a+person%2C+d....
Also working on a PRM. My website is https://dvsj.in - open to any feedback!
> "years of handwriting practice and all I do is type now, smh"
same feeling
It works with your existing phone system, so you can just add AI as a line without having to replace everything…
As a hardcore AI chat user, I'm often frustrated with the single-agent workflow, where a single context window is used for even very long conversations. If I want to change the topic, open a thread, or go on a tangent, I often end up compromising the main thread and I'm forced to copy context over if I want to dive into something.
To solve this, I'm working on a collaborative AI agent orchestrator that models the solution as a group chat with humans and AI agents, including an agent orchestrator.
You can spawn participating agents via the orchestrator, which will decisively route messages to existing agents or spawn new agents if needed. You can also open agent details and send messages directly to existing agents, similar to threads in Slack.
So far, I have MCP integrations working with Linear and GitHub, but plan to add many more.
I've been working on this for just over 2 weeks, making heavy use of 4+ concurrent Claude Code agents. This would have been impossible otherwise.
If you're interested, feel free to DM on X.
Source code, in Rust: https://github.com/David-OConnor/molchanica I've split out its building blocks into their own libraries on crates.io, for anyone building other bio or chem software. I don't think anyone uses them at this time.
Obviously, it all comes with its own set of issues, but I am working through those as they come. It is still slow going solo, though.
On the other side of the spectrum are indies, who can afford to experiment a little more, so you get more interesting uses (AI-generated voice lines come to mind).
I know what I want to believe, but it is hard for me to call it (I think it is a temporary trend), because I might be a little too close to it. After all, gamers have been conditioned to endure a lot over the past decade and mostly shrugged off most of the assaults on their favorite pastime.
If I were to compare it to something... it is almost like using LLMs for email summaries (which is still what most bosses seem to be most pleased with). There are better use cases; I just don't think they've been explored yet.
I was personally thinking of making NPCs less NPC-y (not completely unlike Dwarf Fortress, but expanded).
Also, a dramatic anime intro (complete with cheesy AI generated theme song and video) starring our foster kittens. It's been interesting to learn about some of the techniques needed for consistency, how to storyboard, etc.
Most of my work so far has been on the actual music-making interface, but I'm beginning work on the backend now. I've only worked with Django before (for a school project at Georgia Tech), so I'll be deep in the `sqlx` documentation for a while.
There's no manual, so use at your own risk (it's similar to tracker programs like FastTracker and OpenMPT): https://mondobe.com/tracker
I recently added better backend support for deployments, converted everything to async Rust, and set up Nix/Docker releases. I'm planning to build out some better example apps and workflows next, but everything will stay pre-alpha/unstable for now so that I can avoid getting locked into any foundational issues. There are still a number of low-hanging breaking issues blocking end-to-end usage which I'll need to address.
- https://github.com/rcarmo/gotel (an OpenTelemetry tracing collector/UI, under heavy refactoring)
- https://github.com/rcarmo/toadbox (a simple Docker-based agent sandbox to run Toad/OpenCode/Mistral inside, which I've been cleaning up for general use)
Not unlike digital photography and Instagram. Has it killed film-photo divisions of photography companies? Yes. Has it put professional photographers out of business? Hardly, and in fact, the opposite. What the ubiquitous phone camera has done is expose many more people to the steep challenge of making truly good photographs. It has raised population-scale photo-erudition and taste to ever-more sophisticated levels. And it has pushed the envelope on what photography can do.
So---assuming the AI overlords prevail (which I'm deeply skeptical of, but suppose a trillion dollars are right and I'm wrong)---what happens when LLMs allow anyone to vibe-code their own SaaS or Database or IDE or bespoke health-monitoring app or whatever...?
(edit: fix formatting)
The important difference with digital photography is the phone photographer won't use pro lighting, different lenses, reflectors, bounced flash or other gear that contributes to the "pro photography" look.
With software, vibe-coders might use AI agents that have all the equivalent "pro photo gear" for professional output.
There's a moat around pro-photography protecting it from its snack-size phone-camera cousin. All those lights, lenses and tripods are the physical moat. If we ponder the question whether software development has an equivalent moat, the gp's gloom may be warranted.
- How much does a top-end "pro" AI account cost? Who can pay that on a sustained basis through a career? (Someone has to pay --- the cost gets loaded onto fully-loaded employee cost. And when it's the business doing it, you can bet that eventually CFOs are going to ask the hard questions.)
- How much does pro photography gear cost? Does every photographer own all the gear? (No. Renting is standard, and in fact, preferred. Nobody wants to buy a thousand-dollar macro lens they will use only a few times to learn macro photography and do the occasional pro shoot. A specialist macro photographer will, because they can amortize the cost.)
Started playing with gas town which is really cool. I had a naive version built that was just not good enough. This feels like a step in the right direction.
Haven’t had much time to work on any of my physical hands on hobbies lately but maybe when the weather gets better I’ll head back out to the shop again.
The most difficult technical challenge has been designing a pipeline to fully automate choosing test & control locations using synthetic difference-in-differences.
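Greatly simplified, the core of it looks something like this (plain DiD plus a naive pre-period match, standing in for the actual synthetic-DiD machinery, which weights controls rather than just picking them):

# Simplified sketch, not the production pipeline: rank candidate control
# locations by how well their pre-period series tracks the test location,
# then estimate the effect as the difference of pre/post differences.
import numpy as np

def pick_controls(test_pre, candidates_pre, k=5):
    """candidates_pre: dict of location -> pre-period series (same length as test_pre)."""
    rmse = {loc: np.sqrt(np.mean((np.asarray(s) - np.asarray(test_pre)) ** 2))
            for loc, s in candidates_pre.items()}
    return sorted(rmse, key=rmse.get)[:k]

def did_estimate(test_pre, test_post, ctrl_pre, ctrl_post):
    return ((np.mean(test_post) - np.mean(test_pre))
            - (np.mean(ctrl_post) - np.mean(ctrl_pre)))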
You get tailored running schedules and also some body weight strength workouts and healthy meals all in one!
Building a really unique book recommendation platform to help readers find books they might not otherwise know about. We just started working on a full book app that will be a private book diary, along with recommendations and insights based on your Book DNA. Think Goodreads, but rebuilt for readers who want a private space to track what they read, keep notes, and get truly personalized book recommendations (like Pandora or Spotify for books).
https://downforeveryoneorjustme.com/
This is a fun one I do with a good friend. Basically, to see if a website is down, or if it is just you, along with reported reasons from the community. We are working to add user accounts so you can create your own custom lists of websites/services to track
Oh! This is very similar to the project I am working on, Colibri[1], a self-hostable ebook library. I'm an avid Shepherd user though, and I would love to make some kind of integration happen… would you be open to a chat??
I'd love to chat about API etc
Here are the links.
Chrome store: https://chromewebstore.google.com/detail/video-notes/phgnkid...
Firefox store: https://addons.mozilla.org/en-US/firefox/addon/video-notes-f...
I actually would love some feedback and suggestions from someone here :)
If you have not found it, you need to click on the extension icon, and then click on the gear icon. There you will find the option to export.
I've got a nice ingest, extract, enrich process going for the graph - I'm currently working on a fork of claude-mem[1] that uses the graph as a contextual backend for agentic coding workflows.
[0] https://arxiv.org/abs/2508.04118
[1] https://github.com/thedotmack/claude-mem/
I have GBs of time-series data in a TimescaleDB database. It’s more complicated than this, but the gist is: I use natural language to ask questions about relationships in the data, Claude Opus 4.5 generates queries, and it finds patterns.
For example, I classify tens of thousands of news articles using different classification models. Then I ask Claude to write a query that tests for statistically significant changes in the time-series data at specific intervals after a given classification of article—and it finds patterns.
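Stripped down to a Python sketch (not the actual SQL the model writes), the kind of test such a query encodes is: compare the metric in fixed windows before vs. after articles of a given classification and check the difference for significance.

# Simplified illustration of the test, not the generated SQL.
import pandas as pd
from scipy import stats

def window_effect(metric: pd.Series, article_times, horizon_hours=12):
    """metric: a datetime-indexed series; article_times: timestamps of one article class."""
    before, after = [], []
    for t in article_times:
        dt = pd.Timedelta(hours=horizon_hours)
        before.append(metric.loc[t - dt:t].mean())
        after.append(metric.loc[t:t + dt].mean())
    t_stat, p_value = stats.ttest_rel(before, after)  # paired across articles
    return sum(after) / len(after) - sum(before) / len(before), p_value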
It passes train/test split validation: it trains on 2 years of data (2023 and 2024) and is able to effectively predict movements in the time-series data using the classified news articles from the last year of data (2025).
- WIP: A FOSS, self-hosted Luma alternative (for use across our community initiatives)
I have a few other small side projects that generally improve my day-to-day life, including a better calendar widget for shift workers and a video speed controller that floats on websites where I frequently watch videos for easy access.
On Silvester (New Year's Eve) I stopped smoking. For the beginning and the hard times, I looked for a distraction and started building my own quit-smoking app - purio.
I released it some days before it reached a workable state.
Here's the result, if you want to check it out:
Android: https://play.google.com/store/apps/details?id=io.codingplant...
Feedback is welcome
Attracting new monthly sponsors and people willing to buy me the occasional pizza with my crappy HTML skills.
I'm sorry I don't have a better answer, unfortunately.
I have built a multi-language BDD test framework. A human can write the BDD specs and an LLM will generate the code to match.
I am unsure if there is a need for a tool like this in the market, but I am becoming more and more curious about databases, so this felt like a lower barrier for my product-minded engineer skills to get into.
https://github.com/JWally/jsLPSolver
I'm tinkering around building "JARVIS" (I didn't want to come up with a clever self-deprecating name - this works) - a personal project to manage my life. It integrates with Google Mail, Google Calendar, Trello, GroupMe, and EveryDollar. Basically it nags me to do grown-up things and is a better UX than Google Calendar/Trello - I just talk to it and ask it things.
Also experimenting with a new Claude Code flow: give the bot its own AWS account, put a bunch of tickets on my personal JIRA, be persnickety about what constitutes "pass", and tell the bot "follow these instructions, pull down tickets until there are no more. Your branch cannot merge until you have integration tests passing in your own dev env first" (I use AWS CDK). Then let it loose to build. The instant feedback loop that Claude has with Build Code -> Deploy to AWS -> Run Integration Tests -> Address Failures is really nifty, fwiw...
Google Calendar is intimidating and confusing, same for all of them. I don't know what I want, but something with less red tape.
So I built an SEO/GEO automation tool for small to mid-size businesses that don't have a full-time team for that. [0]
The goal is to provide teams visibility across all the channels — Search and AI — and give them the tools needed to outrank their competition. So far so good: the fully bootstrapped venture has grown over the last year and I've built quite a few big features — a sophisticated audit system, AI Responses Monitoring, Crawler Analytics, Competitors Monitoring, etc.
And I'm trying to offer this "data advantage" to website owners, so they can grow, and also this is something that will be hard to replicate (at least quickly) with AI.
The interesting technical bit: it analyses your historic general ledger to reverse-engineer how you specifically categorise transactions. So instead of generic rules, it learns your firm's actual patterns - "oh, they always code Costa Coffee to Staff Welfare, not Refreshments" - that kind of thing.
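Stripped right down, the pattern-learning part is essentially this (an illustrative sketch, not the production code): count how each normalised payee has been coded historically, then suggest the dominant category with a confidence score.

# Illustrative sketch only: learn the firm's own payee -> category habits
# from the historic general ledger, with a confidence score.
from collections import defaultdict, Counter

def normalise(payee):
    letters = "".join(ch if ch.isalpha() else " " for ch in payee.lower())
    return " ".join(letters.split())

def learn_patterns(ledger_rows):
    """ledger_rows: iterable of (payee_description, category) from the ledger."""
    by_payee = defaultdict(Counter)
    for payee, category in ledger_rows:
        by_payee[normalise(payee)][category] += 1
    return by_payee

def suggest(by_payee, payee):
    counts = by_payee.get(normalise(payee))
    if not counts:
        return None, 0.0                       # unknown payee -> flag for human review
    category, n = counts.most_common(1)[0]
    return category, n / sum(counts.values())  # confidence = share of history

history = [("COSTA COFFEE 1234", "Staff Welfare"), ("Costa Coffee", "Staff Welfare")]
print(suggest(learn_patterns(history), "COSTA COFFEE *4821"))  # ('Staff Welfare', 1.0)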
Posts directly to Xero, QuickBooks, Sage, and Pandle. The VAT handling turned out to be surprisingly gnarly (UK tax rules are... something).
Been working on it about 6 months now. Still figuring out the right balance between automation confidence and "just flag this for human review".
PrintRelay consists of a cloud server and lightweight clients that connect printers to the cloud via WebSocket.
We use the excellent SaaS PrintNode at work, but about twice a year we have connectivity/routing issues between AWS ap-southeast-2 and their servers overseas. PrintRelay is my attempt to not need PrintNode. Because of this, PrintRelay is PrintNode API-compatible.
I've always had a bunch of small side projects that I want to do that aren't worth the overhead required to actually put them together & keep them maintained. So, I built a small Lua-based FaaS platform to make each individual project less work whenever inspiration strikes. So far I've built:
* A current-time API for some hacked-together IoT devices: https://time.bodge.link/
* A script for my wife that checks her commute time and emails her before it's about to get bad.
* An email notification to myself if my Matrix server goes down.
* A 'randomly choose a thing' page. https://rand.bodge.link/choose?head&tails
* A work phone number voicemail, the script converts the webhook into an email to me.
* An email notification any time a new version is released for a few semi-public self-hosted services.
* Scrapers for a few companies' job listings that notify me whenever a new job is posted matching some filters.
* A WebPush server that I eventually want to use for custom notifications to myself.
* An SVG hit counter: https://hits.bodge.link/
Since I'm already maintaining it for myself, I figured I might as well open it up for others. It's free to play with, at least for now.
Are there any auth protocols / flows you think would be important to support?
> Are there any auth protocols / flows you think would be important to support?
- I think API key passed via basic HTTP auth would get you pretty far. This is ideal for serving machine-machine requests and just requires that both parties can securely store the secret.
- OIDC is great for interactions that happen in the browser or if the function is serving multiple users, but it is more complicated to set up and/or use correctly.
OpenID Connect is probably the best for contexts where you want something served to multiple users and those users are humans.
> _Technically_ there's currently support for the cryptographic primitives required to do JWT (I added that because I wanted to support WebPush w/ payloads for myself)
This is probably a good intermediate solution FWIW - expose signature verification and HMAC APIs and allow a user to bring in their own implementation.
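Roughly this shape (shown in Python for brevity even though the platform itself is Lua; the header and encoding conventions would be up to the function author):

# Sketch of the "bring your own HMAC" idea: the platform exposes the primitives,
# the function author decides the conventions.
import hmac, hashlib

def sign(secret: bytes, body: bytes) -> str:
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(secret: bytes, body: bytes, signature_header: str) -> bool:
    # Recompute and compare in constant time before trusting the payload.
    return hmac.compare_digest(sign(secret, body), signature_header)

secret = b"shared-out-of-band"
body = b'{"event": "ping"}'
print(verify(secret, body, sign(secret, body)))  # True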
Looks cool, congrats on putting it out there as priced service!
And, same!
Except, it's just a repo organisation system (structure, conventions, and tools) that lets me share common "parts" across multiple "projects". No monolithic frameworks here.
Libraries are functions. Apps are objects.
However, normally, we use these as distinct artefacts, eventually leading to the "diamond dependency" problem (and lots of other annoying development-time stuff caused by libs / code that is "over there" (elsewhere)).
My "meta side project" solves, essentially the Expression Problem as it manifests in source code management (particularly, cross library / service / project feature development).
[0] https://github.com/adityaathalye/clojure-multiproject-exampl...
Do you think a service like yours, with support for a wide variety of languages, would be a good idea? Not in order to meet user demand, but purely because I think it would "just" require running the program on the server using a different interpreter/compiler, assuming code sandboxing has been achieved to make the initial language work.
For example, I love the long list of languages supported by Code Golf: https://code.golf/wiki.
> Did you choose Lua because you love using it, or for some other pragmatic reasons?
A bit of both, though I'm literally drinking out of a coffee mug with the Lua logo on it that was given to me after playing a big part in making Lua a thing at a previous job. That might speak to my love of Lua.
> Do you think a service like yours with support for many variety of languages a good idea?
From a technical perspective, it would be relatively easy to add support for other languages; the biggest problem would be UI and documentation complexity. Each added language would either require a completely separate set of documentation or would require the docs to describe everything one layer of abstraction removed from the code people would actually be writing. Both of which would be less than ideal for my goal of extreme simplicity.
I think it can be a good idea, but to support something like that _well_ would require a pretty large team of people.
I do plan to support some level of 'other languages' for libraries, at a minimum some subset of native Lua libraries (i.e. libs written in C). That means it would be possible to find a way to use pretty much any other language interpreter. However, I'm not sure that will ever be a top-level feature; there'll probably always be some level of Lua glue code holding everything together.
I think that Acme is very underrated in its domain (a tool for experts). The coverage is minuscule. That's why I am thinking about blog posts, maybe video tutorials. I do not know exactly what it should be yet.
I do not have any time estimates. I have a very demanding main job (I work as a software developer) and a young family that I need to take care of. This project demands focused effort and selectivity that I can barely satisfy. But I won't give up; the thing is totally worth it. If it's going to take years, so be it.
If you have any comments, send me a letter at kalterdev@gmail.com. Have a good day.
https://github.com/ityonemo/clr
2. Molecular biology editing software. Will plug into agentic AI workflows.
Live-performance software with a focus on creating 'musically connected visuals'. Currently, the tightest connection is probably the lyric visualizations. Some recent examples:
https://www.youtube.com/watch?v=mRHLzuUBz5o - She Wants Revenge - I don't want to fall in love
or if you prefer Mashups:
https://www.youtube.com/watch?v=e_Xq8Dh4NEw - The Lovemakers – Shake That 50 Cent (50 Cent vs. The Lovemakers)
Finally, we decided to open source the exe.dev agent, Shelley. https://github.com/boldsoftware/shelley
https://git.sr.ht/~tombert/feocask
Called "feocask" cuz feo means "ugly" in Spanish and FeO to mean Iron Oxide for Rust. I thought it was funny.
I will admit that I had help from Codex, but I did write most of it myself, and I think the design is coming out kind of neat. I have a very strict "no lock" policy [1], including lockfiles, and this should still be safe to use across any number of threads, at the cost of N^2 reconciliation to the number of threads and a lot more drive space.
I like my design; I have an excuse to use Vector Clocks and Hybrid Logical Clocks and I think it might actually be useful for something some day. I'd like to eventually write something that goes a bit beyond getting parity with bitcask and optionally have the ability to automatically distribute across multiple nodes, but I'm still trying to think of a good design for that, because my current design depends heavily the atomicity of POSIX filesystem commands, and introducing the network introduces latency that would likely greatly degrade performance.
[1] At least no explicit locks. I am using Tokio channels and they are probably using locks in some spots behind the scenes.
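For anyone who hasn't run into them, the Hybrid Logical Clock idea boils down to roughly this (a Python sketch for brevity; feocask is Rust and its implementation differs): a timestamp is (wall time, counter), so it stays close to physical time while still totally ordering causally related events.

# Rough HLC sketch (illustrative only).
import time

class HLC:
    def __init__(self):
        self.l = 0  # highest wall-clock millis seen so far
        self.c = 0  # logical counter breaking ties within the same l

    def now(self):
        """Timestamp for a local event or an outgoing message."""
        pt = int(time.time() * 1000)
        if pt > self.l:
            self.l, self.c = pt, 0
        else:
            self.c += 1
        return (self.l, self.c)

    def update(self, remote):
        """Merge a timestamp received from another node (preserves causality)."""
        rl, rc = remote
        pt = int(time.time() * 1000)
        l_new = max(self.l, rl, pt)
        if l_new == self.l == rl:
            self.c = max(self.c, rc) + 1
        elif l_new == self.l:
            self.c += 1
        elif l_new == rl:
            self.c = rc + 1
        else:
            self.c = 0
        self.l = l_new
        return (self.l, self.c)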
I've also built a RISC-V emulator to integrate with this platform, so eventually it'll be able to run native binaries written in any language, completely sandboxed, completely built around message-passing. Basically a native, low-level BEAM-like platform to build an entire operating system and user-space.
While my day job is writing boring applications, this is the stuff that keeps me awake at night, and I would love so much to talk and write more about this, about the trial-and-error I'm facing, but it's still so much in flux that every week I'm exploring a new approach. Most of my work has been around the stackless scheduler, and I have a plan to achieve preemption for long-running or misbehaved tasks without having to compromise on memory usage (i.e. without giving each process its own stack and allocating memory for context switching).
Eventually I'd like to layer on top either Cap'n Proto or another high-performance serialisation system to create a distributed, introspectable environment of object-capabilities that are sending typed messages between each other, achieving the ultimate goal of creating an unholy hybrid between Smalltalk and the Erlang VM.
God, how I wish I was paid to work on this type of problem :-)
If this sounds close to your area of interests, please send me an email and I’d love to chat.
Did you apply for funding? Any subsidies?
I'd like to apply for funding by the end of this year, when I'll have saved enough money from contracting to dedicate myself fully to this project.
You might also want to try https://www.sovereign.tech/programs/fund which isn't limited to Germany AFAICT but does focus on infrastructure.
Singularity/Midori from MS Research have a lot of good ideas, but I feel we don't have to compromise by forcing a managed environment or language in userspace. I want to run native binaries on this platform, which of course would look a bit different than one is used to (no _entry, no dedicated stack, just a message handler that's called directly by the scheduler; no concept of syscalls, just sending messages to a capability).
My family recently (in the last couple of years) started to breed Ragdoll cats in the U.K.
In an attempt to support what's involved in this, I built Ardent for them. It covers a bunch of the day-to-day concerns (weighing and health tracking), lineage tracking and inbreeding prevention, and owner pack generation for handovers to new owners.
So far I can see magnetic fields on a magstripe https://www.youtube.com/watch?v=c8nM4Z-hkTw with two polarisers (one of which I rotate in the video, which is contained in a 3D printed holder with gears I made).
I'm awaiting some different polariser film, to see if I can get it to work with a floppy disk.
https://marcelv-net.translate.goog/index.php?w=apparaat&id=3...
A hobby project I started putting together late last year; a little spot on the internet for prayer and reflection.
https://dugnad.stavanger-digital.no/
A pro bono tech consultancy for local non profits. The idea is to help them use tech to better deliver on their mission.
And I'm an agnostic!
Well done. I will use it.
One always-on Linux box to run apps, databases, CI, and AI agents without hyperscale complexity, surprise bills, or Kubernetes. AI-driven app explosion plus mature open-source deploy tools make simple servers fast, cheap, and fun again.
Consistently getting around 30k visits a month, with peaks of up to 10k in a day.
This has been live for a few months now, but I'm thinking of adding new features, as users are asking for multiplayer support. Would love some feedback.
--- Todo or else ---
The main thing is a new Go CLI called "todo or else". It scans your project for TODO comments and checks that they have deadlines
This allows you to hold yourself accountable for completing them
You can verify other things like whether TODOs have owners
I'm using Tree-sitter, which means it can handle most programming languages (although I'm shipping with a subset, as the grammars can get quite big)
I'm hoping to release this cross platform in a couple of weeks
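To make the deadline idea concrete, a toy version of the check could look like this (the real tool parses comments per language with Tree-sitter and also handles owners; the TODO(YYYY-MM-DD) format here is just an example):

# Toy version of the deadline check, not the actual CLI.
import re, sys, datetime, pathlib

TODO = re.compile(r"TODO(?:\((\d{4}-\d{2}-\d{2})\))?", re.IGNORECASE)

def check(path):
    failures = []
    for lineno, line in enumerate(pathlib.Path(path).read_text().splitlines(), 1):
        for match in TODO.finditer(line):
            deadline = match.group(1)
            if deadline is None:
                failures.append(f"{path}:{lineno}: TODO without a deadline")
            elif datetime.date.fromisoformat(deadline) < datetime.date.today():
                failures.append(f"{path}:{lineno}: TODO past its deadline ({deadline})")
    return failures

if __name__ == "__main__":
    problems = [p for f in sys.argv[1:] for p in check(f)]
    print("\n".join(problems))
    sys.exit(1 if problems else 0)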
--- Video: How PSX games were developed ---
A ten minute YouTube video about how PlayStation 1 games were developed
The tools, languages, and practices that were common in this era
Also the computers, software, programs used for creating assets (eg Irix Framethrower for SGI)
I'm hoping to produce a 10 minute video on my YT channel (jbreckmckye). Currently in the writing stage
--- Black Noise ---
A small, command line program for playing white noise
I think it would be a nice utility to have for focus sessions... maybe to double as a sort of pomodoro timer
It’s a super simple job management app for busy tradespeople to keep track of jobs.
The main use case is time pressed tradespeople who do most of their admin in the evenings.
The app makes capturing leads incredibly easy and quick. Lots of scope for extending functionality in the future in various directions.
(Not launched yet, squashing bugs and refining a few bits and pieces)
Comparing world countries on as many uncorrelated statistical factors as I could get my hands on.
Fun fact: in the overall top 10, there is only one country that is not in Europe.
Currently I'm working on the following features:
- Multi-user support (teams) at the project level
- Then I'll look into whether to add support for OIDC/SSO now or not
- Alerts on Slack
- Webhook support
Also working on a red chamomile (using beet red biosynthesis). Just for fun. Red chamomile tea!
The idea is to have niche invite-only genetically engineered flavors that I can bring to parties around SF :) what’s more special than a genetically engineered organism that you can ONLY get if I’m there? Good calling card
For example for the grape, I needed to knock out some tryptophan synthesis genes so I could redirect the bioflux. Problem is that in bakers yeast they have a whole buncha copies of their chromosomes, so I had to knock out one of the genes and replace it with a different gene from grapes. Did that with a quick lil CRISPR switch.
Had to electroporate tho because the transformation rates on wild/bakers/non-lab yeast are so garbage
DIY, that will depend on your level of ability. You can do this stuff in your kitchen, but learning it from a textbook will be daunting for many (most?) people.
The tough part is mostly the finesse in the simple things, like trying this in bakers yeast rather than lab yeast, or the genetic design.
Cost is quite high for mistakes, but LLMs are honestly quite good to help you out with the basics. You MUST at least try to read the papers though - it’s not like coding where you can mostly let it do its thing.
"A quick lil CRISPR switch" sounds like "oh just my homemade fusion reactor hooked up to my kitchen warp drive" to me, yet you make it sound so simple!
Reagents probably about $300, but you can use em in a bunch of reactions, in aggregate down to like $50.
The fundamentals of biology are really cheap, but the skills to actually do it are really expensive. It’s way more manual than you imagine - like how my thumb moves. The equipment is way more fundamentally basic than you imagine: the only thing you can’t 3d print and build from off-the-shelf stuff is the instant pot I use for media prep
If you are having trouble transforming, try spheroplasty.
Same answer for electroporation vs spheroplasty. I’ve found with wild yeasts or less tamed yeasts (pichia), sometimes just nuking the damn thing with kV will just work, whereas those chemical methods can be way more finicky. Time is money
How subtle are the flavors? Unsubtle enough that an oblivious taster might ask, "Does this bread taste like grapes to anyone else here?" Or does one need guidance to search for the flavor?
It’s kinda unfair how much better women were at smelling it (empirically)
When the Yogurt Took Over https://lovedeathrobots.fandom.com/wiki/When_the_Yogurt_Took...
Wait, that means a glorious period of peace and prosperity for all is nigh.
It kind of escalated a bit once I realized that different mixtures of bacteria cultures produce differently tasting dough/bread and that you can strengthen the growth rate by optimizing the external variables.
I’ve been thinking about trying this since my mom is gluten free
One starter uses the same Schär flour (which is based on a corn, rice and lentil mix). This one grows veeery slowly and needs lots of maintenance. But I want to keep it because that is the flour that guests with celiac disease can also eat. I'm trying to keep this one as clean as possible: same glass, same spoon, separate baking equipment, etc.
The second starter is based on spelt flour. That one grows pretty easily, I used some turkish culture for it, and it's the survivor of the previous experiment :D
For both of them the growth rate is off. The first one grows by around 1/5th until it needs more flour and water, the second one by around 1/3rd until it needs maintenance. The standardized maintenance for wheat flour is 2x growth, so you always have to fill the glass halfway and then mark it with a rubber band.
You put in all your bookmarks (also pdfs or epubs) and it puts them in a queue and tracks your progress. Read for as long as you want to and if you get bored with an article you just move on to the next one. Supports highlights and annotations as well as creating spaced repetition cards out of those annotations.
Really reduces the friction for me to start reading and it has made a noticeable difference to my media consumption throughout last year.
Started out as an exploration into the incremental reading concept, but it's become my primary interface for reading and I use it every day.
I haven't really talked about this to anyone yet, but it's getting to a point where it's polished enough for others to use.
It's currently completely free and you can try it without entering your email.
It’s text based, so it is basically an advanced editor/viewer for one long text note.
Birdhouse: https://img.notmyhostna.me/cRQ1gJfZCHjQKwFrgKQj
UI:
- https://img.notmyhostna.me/Hnw4qcvbg1ZQCrFxzGMn
- https://img.notmyhostna.me/62TFwSXSRRbCfxDz297h
- https://img.notmyhostna.me/40qhgHmSqQsrGr8BC7Db
- https://img.notmyhostna.me/9bgz4GYsjQH33n3MtWKp (Face labeling, so I can show thumbnails of the actual birds that visited and train a ML model on it in the future)
I approve :)
My wife said 'you look bored, you should build a bird table'.
Now she's not speaking to me as she found out she's 5th on the list.
Right now I only use it so that my thumbnails of pictures from the camera are centered on the head in the UI as I couldn't find a pre-existing model that does it for animals. I'm thinking that maybe having this data set of a few hundred bird faces will allow me to train a small one in the future to do it more automatically. If not...I at least learned something new about building models!
AI models and open-source robotics for food production.
For everything from backyard gardening and subsistence farming to urban gardening and other forms of small-scale agriculture.
We believe no one owns nature and that all growers have a 100% right to repair any equipment we offer.
Our first IoT device (greenbox) is in an open beta for 2026. Please reach out to support@gthumb.ai if interested.
Happy for alpha users; it's really early days right now. Email in profile if you want to give it a try at no cost.
I'm launching a new word game next week which I'm super excited about. If you do play it and have any feedback, do shoot me a message!
I'm not sure if the first two count as puzzles, more games. Lol
https://github.com/WillAdams/gcodepreview
Hopefully I can restore the OpenSCAD interface layer and get it working with OpenSCAD Graph Editor:
https://github.com/derkork/openscad-graph-editor
again. Having trouble finding FCG examples which do more than move in straight lines though...
https://github.com/storytold/artcraft
AI tools are becoming incredibly useful for our industry, but "prompting" without visual control sucks. In the fullness of time, we're going to have WYSIWYG touch controls for every aspect of an image or scene. The ability to mold people and locations like clay, rotate and morph them in 3D, and create literally anything we can imagine.
Here are a bunch of short films we've made with the tool:
- https://www.youtube.com/watch?v=tAAiiKteM-U (Robot Chicken inspired Superman parody)
- https://www.youtube.com/watch?v=oqoCWdOwr2U (JoJo inspired Grinch parody)
- https://www.youtube.com/watch?v=Tii9uF0nAx4 (live action rotoscoped short)
- https://www.youtube.com/watch?v=tj-dJvGVb-w (lots of roto/comp VFX work)
- https://www.youtube.com/watch?v=v_2We_QQfPg (EbSynth sketch about The Predator)
- https://www.youtube.com/watch?v=_FkKf7sECk4 (a lot of rotoscoping, the tools are better now)
If you give it a try, I'd love to get your feedback. I'd also like to see what you're making!
Public dataset for exploring 50,000+ 401(k) plans holding $7.5T in assets.
You can look inside a company like Google and see what employees invest in (mostly 2035-2055 target date funds) or how much they contribute ($30K - likely using Mega Backdoor Roth)
It started as curiosity. I wanted to see if I could express business logic as simple choices and let the numbers fall out on their own.
The app is built interactively in Streamlit. I do not sit down with a spec or a backlog. I add one small idea at a time, refresh the page, react to what looks wrong, then adjust. It feels closer to sketching than programming. The interface tells me what the logic should be next.
Underneath there is a growing pile of rules about the business I am in.
I do not write code in the traditional sense.
I have never coded before and am solving my own problem as a team of one, so this sure feels like magic!!
I blogged about the whole build and coding project [here](https://partridge.works/screenie-christmas-project-2025-26/)
Code for the Arduino and also the web app is Open Sourced [here](https://screenie.org/get-device/selfbuild)
And you can play with it [here](https://www.screenie.org)
I found myself switching a lot between apps to get the same info, lots of copy/pasting.
For example: URLs in bookmarks (which I forget about), project descriptions, images, folders.
So I built a Mac app that is similar to Raycast, but just for notes. If I want to save a webpage, I press control+option+C and a window pops up where I can describe it.
If I press control+option+V, I get a Spotlight-like window that does full-text search of all my notes and descriptions and lets me filter, so I can either:
- Open the note
- Insert the data into the current app (Chrome, Slack, ChatGPT).
I've been using it for a few weeks now, and I'm not sure if others will find it useful.
Building something is a nice break from the corporate world.
https://blazingbanana.com/work/whistle
Currently tidying up some internal code (also removing the larger model on mobile platforms) and implementing proper diarization (who said what) so that it can be used for more than just personal dictation.
My iOS developer account is _finally_ approved so it will be available through the proper app store soon.
Finding other co-founders based on proof-of-work
Overall, Minifeed keeps chugging along, fetching new posts every day from almost 2k feeds. I'm hoping to find some nice and ethical monetization strategy for it this year.
The stack is Babylon + React + Capacitor, which was easy to step into as a full-stack dev with zero game building experience. Currently seeing what I can do to fix some performance issues, though it still works decently for a graphics heavy incremental/idle game.
Beta is still open for Test Flight. Can sign up via https://blossom-beta.blindsignals.io
- Skyscraper, an iOS native app for Bluesky with focus on Liquid Glass UI. Launching hopefully in a ~week, and TestFlight available at https://testflight.apple.com/join/RRvk14ks
Then also working on a website/web tool that does the following:
- A keyword/term notification service that observes the Bluesky Jetstream for usages of the term and sends email alerts.
- Provides an HTML/JSON backup archive of any Bluesky account. Quick way to archive popular accounts, politicians, public figures, etc.
- Trending Hashtag lists, to see what is trending the last hour, day, week, and month.
The web services all are available at: https://api.getskyscraper.com/tools/
https://soatok.blog/2025/10/15/the-dreamseekers-vision-of-to...
Running on a single-core armv7. It includes a VIO and a nice loop closure. I am now optimizing it further to see if I can fit some basic mapping too.
It's an IoT Cloud Platform built for developers. We're still in technical preview and are currently working on adding more telemetry to our small device agent written in Rust.
Check out our docs at: https://fostrom.io/docs/
One aspect that HN may find interesting is my use of Bayesian optimization to control and perfect key experimental settings. About a dozen of the wave plates and other optical components are motorized and under computer control.
Given a goal metric like "maximally entangle the photon pairs" the optimizer will run the experiment 50-100 times, tweaking the angles of various optics and collecting data. Ultimately it will learn to maximize the given cost function.
This sort of thing is commonly done with tools like Optuna during NN/LLM training to optimize hyper-parameters, but seems less common in physics especially quantum photonics. I'm using a great tool called M-loop to drive the optimization, which was originally developed for creating Bose-Einstein condensates.
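The shape of the loop, for anyone who hasn't seen this kind of thing (a generic sketch using scikit-optimize rather than M-loop, with a dummy objective standing in for actually driving the motorized optics and measuring entanglement):

# Generic Bayesian-optimization loop sketch; run_experiment is a stand-in.
from skopt import gp_minimize

def run_experiment(angles):
    """Stand-in: set the wave plates to `angles`, run the experiment, and return a
    cost (e.g. negative of the entanglement metric). Dummy quadratic so it runs."""
    return sum((a - 45.0) ** 2 for a in angles)

# One (min, max) range in degrees per motorized optic.
bounds = [(0.0, 180.0)] * 4

result = gp_minimize(run_experiment, bounds, n_calls=75, random_state=0)
print("best angles:", result.x, "best cost:", result.fun)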
One piece I helped with is SenL, a “sensitivity level” framework for AI labs. It’s like a practical clearance system for AI assets. Not everything in a lab is equally dangerous, so you label assets by sensitivity (weights, training data, eval sets, agent tooling, deployment configs, etc.), then tie that label to concrete controls like who can access it, where it can run, what logging is required, and what monitoring / two-person rules apply.
If anyone’s curious, SL5 is here: http://sl5.org/ and the SenL framework is part of the published artifacts.
My read here is that you're implying that if an attacker has access to, for example, weight data, they can invariably find a way to exploit it.
If that's a correct assumption, I think you're playing an unwinnable game, since attackers always have indirect access through inference of the model. It feels like locking down weights/training data/etc. is the AI version of security through obscurity.
Just my 2c, for what it's worth
I think this is exactly why some of the work is moving away from “assume unrestricted API inference forever.”
For example, we're prototyping ideas like air-gapped or very low-bandwidth inference gateways, where interaction happens over narrow channels (serial, optical, audio, etc.), with explicit threat models and monitoring. The point isn't that this is practical for today's models, but to reason about what inference might look like for AGI/ASI-level systems where the risk profile is fundamentally different.
Others are thinking along similar lines too. For example, this SPAR project on constrained and minimal inference pathways: https://sparai.org/projects/sp26/rec7NyTst8Upfp83l
The idea is to rely on kernel Wireguard, and process packets in kernel space (via eBPF) for maximum performance and minimal CPU overhead. I plan to use egress and ingress TC to “apply” the policy at both sides. XDP is faster, but only works on ingress, which is not sufficient for a mesh VPN imo.
Netbird already exists in this space, so this is mostly a learning exercise, and maybe a reference implementation for those learning eBPF in Rust.
Goals and constraints:
1. Single digit CPU overhead for multi Gbps bandwidth (probably a bit too ambitious, but we’ll see)
2. Linux only
3. No hole punching or complex NAT handling
4. Basic policy language for L3 and L4 traffic (L7 requires punting packets to a userspace proxy)
For a couple of years now, I’ve wanted to build a service that lets you subscribe to new version notifications for any kind of software—desktop apps, drivers, packages, and more. As a Unity developer and consultant myself, I’ve always wanted to know the moment a new version becomes available. I value being at the bleeding edge, especially since a large part of the value I bring as a consultant comes from both the breadth and depth of my knowledge of the Unity engine.
That’s why I’m currently working on a niche version of the broader VersionAlert idea specifically for Unity, which is why the domain currently redirects to https://versionalert.com/unity .
The only other service I’ve found that really addresses this need is https://newreleases.io . I actually spoke with the very nice husband-and-wife team behind it and even offered to buy the whole project, but their asking price was about 10× higher than what I was offering—which I understand, no hard feelings. I still wish them the best of luck.
If anyone is aware of other projects or services in this space, I'd be happy to learn about them and chat.
Github: https://github.com/tirrenotechnologies/tirreno
Live Demo: https://play.tirreno.com (admin/tirreno)
We are a 501(c)(3) and are actively fundraising to build a tower here in Shadow Hills, and we're launching our live stream and regular schedule February 1st. So far we have about 60 shows in the schedule.
If you're in Los Angeles and have an interest in radio, please hit me up.
I’m setting up the basic site, which is not a huge deal, but I’ve been inspired by more recent language designers having a streaming presence, so I am working through test runs of streaming my development.
I hope to start with demos of the basic language features and then move on to streaming both a reimplementation of my compiler and on a Rocq implementation of the syntax and semantics of the language for proof work.
The language has a rather small niche at first glance, so I’m hoping to use the streaming as a way to explore areas of appeal and maybe draw some interest. A low level concurrent and parallel ‘functional’ language with very non-traditional syntax and a modal, dependent type theory is not going to appeal to everyone, but hopefully I can find some interest eventually, even if just to hang out on chat and talk about the subject.
I'm not particularly familiar with array based languages, but are you inspired by them at all? Seems like a similar concept.
What're your goals for the language? It would be cool to see a parallel execution model unify SIMD, multithreading and gpu. I bet people with a lot of money would be interested if you could apply it to ML
I don't really have any lofty goals; the language exists to let me have my personal ideal programming language. One of the things I would like to see adopted, by any language, is that the language is able to encode and present provable guarantees about code. So memory safety, freedom from UB, and lack of overflow (integer or stack) are provable in some subset of the language, and the proof is done in a logically consistent, verifiable manner.
Of course most of those proofs are not anywhere near feasible for general case code, but the language can restrict allowable constructs in a computation, function, process, or module to a set that makes the desired properties provable by construction.
The language should be capable of unifying and abstracting over SIMD (or other hardware implementations), GPUs, and OS or Userspace multithreading at levels of abstraction from assembly to Haskell or other high level languages.
If you are a brand who needs to deploy advertisements at scale, don't hesitate to reach out.
Declarative and functional in nature. Just a manifest wiring functions into a DAG, plus Postgres SQL functions that manage the graph of state machines. Simple in principle and very opinionated.
Replaces 240 lines of manual pg_cron -> pgmq -> Supabase Edge Function boilerplate with 20 lines of explicit DAG definitions. Currently Supabase-only (leverages their primitives) but planning to make it agnostic for vanilla Postgres setups.
Live demo / explanation here: https://demo.pgflow.dev
- A sports club management platform, and a way for end-users to sign up for sports events, lessons etc.: https://mojtim.ba/en/
- Given the rise of AI, I'm a hiking guide and would like to have that as an alternative: an outdoor activity agency - https://boa.ba (still very WIP)
Started when I was struggling to read books in English. Pushed an open source version back then (https://github.com/baturyilmaz/wordpecker-app), later added more features, and now working on a mobile app.
Recently started testing alpha version, fixing bugs and introducing new features right now (https://alpha.wordpeckerapp.com/).
My end goal is to build an AI language learning companion that knows what you read, listen to, and watch, knows you as a friend (real life, who you are), and then helps you improve using that context. If you're B1 in a language, it creates a personalized path to get you to B2, then C1, and so forth, using your context.
It's not mobile friendly yet, but maybe that'll be a next weekend project. At least you can view a video of a level on mobile.
My kids are loving it.
https://game.stackybird.com/ and the source https://github.com/jtwaleson/stacky-bird
Open Tech Calendar, listing virtual tech events that include community participation: https://opentechcalendar.co.uk/
A listings site for virtually attending FOSDEM. The live streaming is great but the official site only lists sessions in the local Brussels time zone. You can choose your time zone here: https://virtuallyattend.teacaketech.scot/fosdem/2026/
Comes with Robot, Turtle, HTML/CSS (pixel-diff tasks) and a gradual introduction to programming concepts. Currently on JS; literally right now GPT-5.2 is helping me add Python.
I've integrated a simplified clone of Replicube. I hope to integrate ideas from Human Resource Machine and Turing Complete later this year.
I bear the heritage of Eastern Bloc-type math/programming olympiads, combined with front-end/product skills. So I kinda owe it to the community of fellow secondary school CS teachers to ship this thing.
Surprisingly, nothing comes close in terms of depth and usability in the classroom for ordinary 12-year-old kids.
I test this thing 5 times a week in my classroom and I constantly polish it at night. 100% vibe-coding.
It’s absurd and will probably appeal only to the descendants of Ken Jennings.
Another itch of my own to scratch but thought I'd see if I can make some side income as well.
It's a web based bookmark manager with extensions for Firefox and Chrome.
You can easily import and export bookmarks so you have all your data whenever you need it.
One main thing I really like that I think makes it stand out is the ability to export the contents of a bookmarked page to an epub file to read later on your Kindle or other e-reader device.
Looking for any constructive feedback on this! Thanks.
https://patcon.github.io/polislike-human-cartography-prototy...
Just paint with colors, click a painted group, and see what differentiates your painted groups. (Chrome on iOS has issues fwiw)
This is building on the philosophy of democracy-bolstering tools like Pol.is, which I've worked with (as a researcher/practitioner) for almost a decade
Edit: formatting
Now working on comprehensive benchmarks for another tool I built, https://github.com/sibyllinesoft/scribe. Results thus far showing it reducing agent token usage by ~80% in real world tasks, but I need to repeat to get variance. Hopefully I can get a writeup out soon.
A browser extension (Chrome & Firefox) for simplifying my online grocery purchase workflow from Cookidoo to Knuspr.
I was tired of my weekly workflow of copying, pasting & sorting the grocery page for each item.
Also launched my first Hugo blog. Really nice experience so far. Wrote in more detail about the extension in my first blog entry: https://lars147.github.io/blog/
While the official data is technically public, it's practically inaccessible (buried in XML feeds and legacy sites).
Phase 1 was building a modern ingestion engine and freeing the information to make it more accessible. The goal is to make legislative data as accessible as sports stats.
I'm almost ready to launch the MVP; I'm just doing some bugfixes and testing the database now! (If you want an early look at the MVP, my email is in my profile.)
The next phase is what I'm most excited about: visualizing this data and using LLMs to provide insights.
I have
1. A scraper/parser that ingests the data daily and transforms it into .parquet files (rough sketch below the list).
2. An LLM summarizer to summarize larger discussions.
3. A static site that gets automatically generated (based on the .parquet files) to provide insight.
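A bare-bones sketch of step 1's daily parquet transform (a pandas stand-in; the real scraper and parser obviously do much more):

# Bare-bones sketch of the daily .parquet transform; fetch_sessions() is a
# stand-in for the actual feed parsing.
import datetime
import pandas as pd

def fetch_sessions(day):
    """Stand-in: parse the official XML feeds for `day` into a list of dicts."""
    return [{"session_id": 1, "speaker": "example", "text": "example",
             "date": day.isoformat()}]

def run_daily(out_dir="data"):
    day = datetime.date.today()
    df = pd.DataFrame(fetch_sessions(day))
    # One file per day keeps the static-site generator's incremental reads cheap.
    df.to_parquet(f"{out_dir}/sessions-{day.isoformat()}.parquet", index=False)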
What makes your tracker ‘real-time’? Does it ingest the livestream of a parliamentary session while it is happening?
With Codebox, you can define workspace templates or load workspace configurations directly from Git repositories. At the moment, it supports Dev Containers and Docker Compose, with plans to expand to additional configuration methods in the future (for example, Terraform).
Below are some resources where you can learn more about the project, including an article that explains how Codebox works and the source repositories (mirrored on both GitHub and GitLab):
* Medium article: https://medium.com/@dadebianchi2003/introducing-codebox-an-o... * GitHub repository: https://github.com/davidebianchi03/codebox * GitLab repository: https://gitlab.com/codebox4073715/codebox
Double TAP is a lightweight testing framework where users write black-box tests as rules that check output from the "boxes" under test. A box can be anything from an HTTP client or web server to messages in syslog. This universal approach lets you test almost anything by just dropping in text rules that describe the system's behavior in black-box terms.
I've been working on this for several weeks/months and I'm happy with the result!
Vibe coded it as my programming skills have eroded with time (or they never existed).
I would really appreciate some feedback.
A beginner-friendly programming language for 2D games where multiplayer is automatic. Intended to be an engaging way for teenagers to learn to code by making games they can play with their friends. Like a blend of Scratch and Roblox. I've been working on this for 3 years!
I found that sometimes I would rather interact with a chat interface to debug an issue or brainstorm architecture solutions in my repos. Agents are great for giving the model access to everything and letting it figure it out.
Prompting manually forces me to keep my mental model of the codebase up to date, and it lets me provide just the context I want to the LLM.
Recently, I've been very motivated to build one niche, crafted service after another, for myself, family and friends. But I struggled to find a compelling hosting solution for projects that have, and will only ever have, a single user for years. I bought the cheapest Mac mini M4 on sale, put it in the basement and started working on a CLI + daemon to help me automate everything around it. The biggest risk is security, so I'll probably rely a lot on Cloudflare at first.
Just released v0.12.0, which has a lot of package cleanup and some important bugfixes. Next is making the relay infrastructure much lighter, requiring less synchronization.
Personally, I'm using the hosted version[0] (which is just a repackage of the open source version with dynamic tokens) to expose my NAS and Syncthing web UIs so I can manage them while I'm away. Sometimes even from my phone (with Termux).
Main workload is done by the backend (serverless functions).
I am currently working on a HubSpot extension (that uses the same backend), with the goal of targeting a few other platforms where users already work and integrating the functionality into their daily workflow, as opposed to having it as a standalone website or mobile app. I have fun doing it.
Repo: https://github.com/xvandervort/graphoid
Claude Code is doing the majority of the coding, with close supervision from me. I write notes while I'm working on it. Notes are here: https://www.patreon.com/cw/aiconfessions
Building this for personal trainers with home gyms and small 1-2 location owner-operated facilities. The big players (Mindbody, etc.) are overkill and expensive for this market.
Core features: class scheduling, member booking, Stripe payments, and workout programming (the part most gym software ignores - trainers still use spreadsheets or generic apps).
Stack: React 19 + Vite frontend, ASP.NET Core 10 API, PostgreSQL, multi-tenant architecture so each gym gets their own branded experience.
Currently polishing the member dashboard and workout tracking UI. The goal is something a solo trainer can set up in an afternoon without needing to call sales or sit through demos.
OpenRailwayMap is a project focused on displaying everything railway related in the world, powered by OpenStreetMap data.
Since the last time I posted on HN it's gained a decent amount of traffic and users. I'm particularly happy with the jobs section, which is growing into a high signal-to-noise source for European tech jobs: https://techposts.eu/jobs
The reason I started this website is because so much incredible innovation and growth in Europe flies under the radar. If you ask Americans many will say it's just "banks and museums", stagnant, or worse. But the reality is there is a huge spectrum of exciting companies starting and growing here. We have space launch companies, battery companies, AI companies, and a whole bunch of other interesting stuff. It's an exciting time to be a European in tech!
"Ask TechPosts.eu: Is there an EU Cloudflare alternative? I'm interested in CDN, DDoS mitigation and basic web security." Links to https://techposts.eu/post/153
"3 comments" below it links to https://techposts.eu/post/154
It looks like the latter is correct.
I would have sent feedback direct, but I can't see an about page or anything.
So much going on in Zurich. Lots of robotics startups. I passed through in the summer and never knew...
Any plans for a simple way to search for posts?
EDIT: There is indeed a bug: clicking on a title which is an internal link sends you to the wrong page -- you have to click on the comments count.
Would love feedback from other WFH folks — a weird amount of my productivity comes down to how I start the day.
I spent a day over break building a better UX for reviewing coding agent plans.
Plannotator - Annotate and review coding agent plans visually, share with your team, send feedback to the agents with one click.
Demo video: https://www.youtube.com/watch?v=a_AT7cEN_9I
Despite being intensely technical and detail-oriented, certifying construction is still mostly done by hand over email!
Our niche is full of folks whose lives we can improve with a portal for document management & comms, plus a sprinkle of AI for document understanding.
If you know someone - anyone - wrangling too many documents via email, please reach out.
I'm sure there are a ton of other projects out there that do this, but I couldn't find one that fit my needs exactly, so I threw this together in a few hours.
claude-image-renamer uses Claude Code CLI to analyze screenshots and rename them to something actually usable. It combines OCR text extraction with Claude's vision capabilities, so instead of "Screenshot 2025-12-29 at 10.03.10 PM.png" you get something like "vscode_python_debug_settings.png".
A few things it does:
Handles those annoying macOS screenshot filenames with weird Unicode characters
Uses OCR to give Claude more context for better naming
Keeps filenames clean (lowercase, underscores, max 64 chars)
Handles naming conflicts automatically
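For a sense of what the cleanup rules above amount to, here's a hedged sketch (not the tool's actual code; clean_name is a hypothetical helper) of the lowercase/underscore/64-char/conflict handling:

    import re
    from pathlib import Path

    # Sketch of the naming rules: lowercase, underscores, max 64 chars,
    # numbered suffix on conflicts. Not the plugin's real implementation.
    def clean_name(raw: str, directory: Path, ext: str = ".png") -> Path:
        name = re.sub(r"[^a-z0-9]+", "_", raw.lower()).strip("_")[:64]
        candidate = directory / f"{name}{ext}"
        counter = 1
        while candidate.exists():                # resolve naming conflicts
            candidate = directory / f"{name}_{counter}{ext}"
            counter += 1
        return candidate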
If you're on macOS, you can also set this up as a Folder Action so screenshots get renamed automatically when they are saved to a folder, typically ~/Desktop. This is useful if you take a lot of screenshots and hate digging through "Screenshot 2025-12..." files later. I use screenshots on my phone to remember stuff, so this way it's super easy to search those text files when I'm trying to find something I only vaguely remember.
Remembering specific words is hard for me, but I can get neighboring words and get in the ballpark, and simple text search or claude code gets me the rest of the way.
The screenshot gets uploaded from my phone to Dropbox, then on my desktop at home a script just periodically checks if there's any new screenshots. It's been running since Jan of last year, so coming up on a year now.
I wanted something more useful to carry than typical gooey lips balm full of petroleum and silicone.
Currently it's fully-conformant to v2.0 of the spec, while I'm working towards implementing the recently released 3.0 version.
It is similar to Meta's COCONUT. However, instead of running the training forward and backward passes over the reasoning tokens serially (slow), they are done in parallel (fast). At inference the reasoning tokens are still decoded serially. The trick is that even though training was done in parallel, the reasoning tokens are still causal, and you still get the benefits of increased computational circuit depth.
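The parallel-but-causal part comes down to ordinary causal masking: a lower-triangular mask lets every reasoning position be trained in one batched forward/backward pass while each position still only attends to earlier ones. A minimal sketch (generic PyTorch, not the author's code):

    import torch

    # One parallel pass over all positions; causality is preserved by the mask,
    # so decoding at inference can still run token by token.
    T = 8                                    # prompt + latent reasoning positions
    mask = torch.tril(torch.ones(T, T))      # position i attends only to j <= i
    scores = torch.randn(T, T)               # attention logits for one head
    scores = scores.masked_fill(mask == 0, float("-inf"))
    attn = scores.softmax(dim=-1)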
The kicker is that the architecture is modality agnostic (it doesn't rely on language for its chains of thought), and I want to use it to bring CoT reasoning to protein and antibody generation. Basically, I hope for it to be the OpenAI o1 or DeepSeek R1 for domain-specialized scientific AI.
Even that "modern" printing stack in Linux is 20+ years old, there's still such an unbelievable amount of basic bugs and low-hanging-fruit optimizations, that it's kinda sad.
Not to mention that it still maintains ALL its legacy compatibility, as in supporting ≈5 different driver architectures, 4 user-selectable rasterizers (each with its own bugs and quirks).
The whole printing stack is supported by 4 people, 2 of whom have been doing it since the inception of CUPS in 1999. Scanning is maintained by a single person.
Ubuntu 26.04 LTS is expected to be the last version with CUPS v2. CUPS v3 drops the current printer driver architecture and introduces proper modern driverless printing, with a wrapper for older drivers. Many open-source drivers already use this wrapper, but expect major disruption for users, as none of the proprietary drivers will work out of the box anymore.
Do you care about printing? Want to improve printing & scanning stack? Contact OpenPrinting! https://github.com/OpenPrinting/
I have literally thrown one of those "winmodems" [1] out of the window back in the day. I then went out and drove over it with my car. I then put it in a bench vise until its PCB shattered to pieces. Utter destruction, much to the amusement of my brothers.
These were the days.
And big thanks to GP for his work on CUPS / Linux printing.
Archaeological evidence strongly suggests earlier Americans preferred hands, feet, and occasional repurposed sporting equipment.
The entire state of Nevada?
The hardware safety mechanisms are usually robust (USB communication is handled by the "Formatter Board", while all the mechanical stuff is under the "Engine Controller").
Newer Linux-based models have filesystems, software, and vulnerabilities; printer hacking is a common occurrence at Pwn2Own every year. These could be permanently bricked by software in the usual sense, and would require a firmware reflash using the bootloader or external means.
>Or is it possible to work on hardware support without having a physical device?
Absolutely, but for me this is very inconvenient, like debugging over the phone.
Sometimes the bug is as low-level as in the USB stack: https://lore.kernel.org/linux-usb/3fe845b9-1328-4b40-8b02-61...
>I assume it is impossible to test each and every one printer and scanner, so there is probably some clever tricks there, right?
Not much, unfortunately. There's ongoing work on modern (driverless) printer behavior emulation, but it is under heavy development and not ready yet: https://github.com/OpenPrinting/go-mfp
Nothing that I'm aware of for the older printers and scanners that require their own drivers.
A printer driver is something like a protocol converter. Roughly speaking, it binds the printing APIs of some printer framework or service on the host to the right printer language (which may have vendor-specific nuances even if it is nominally a standard).
I wish Openprinter luck; it was announced at the end of September but there's nothing out there yet, not even the crowdfunding campaign.
I know it’s not a popular opinion here but I think that Windows has two killer features that are always overlooked- the standard print dialog (and all the underlying plumbing), and the standard file dialog (at least until Windows 8).
The ability to print and to interact with files, that just works, without having to retrain people every time a new OS comes out, and without having to reprogram your apps or write your own drivers and/or UI, is incredibly important.
Yes, I know Linux and Mac have the same, but IMO Windows was light years ahead for decades, and is still more consistent and easy to use.
[1]: https://pdfa.org/microsoft-adds-print-to-pdf-native-to-windo...
Maybe CUPS needs a Heartbleed-scale problem to motivate more support.
You can see what it looks like right now here: https://x.com/simonsarris/status/2010359423806615907
It is based on a garden designer I made for myself to keep track of my rose garden (I have over 100 roses) and orchard (I have about 15 trees): https://garden.simonsarris.com/
However, my version was very specific to my needs, so this general version requires a lot more work to get it usable for a lot of people.
Short and sufficient version here: https://doi.org/10.5281/zenodo.17288906
Extended version here: https://go.expinent.com/VlChn65
For the past month I’ve been working on a creative / VFX / 3D tool that connects Apple devices into an all-in one node editor: https://subjectivedesigner.com/
With it, you can build interactive experiences, connect device sensors, compose shaders with AI models, orchestrate real-time data flows, and create projects that span across the entire Apple ecosystem. I’m posting about it regularly on social media and you can see some of it here: https://x.com/sxpstudio (Though it’s still early and most content is on socials thus far).
It’s done fully in SwiftUI + metal and also a good occasion to ramp up on agentic-powered software engineering. So far it’s been a lot of fun and working really great for me. And to be clear I’m absolutely not talking about vibe coding :-)
One thing I’ve always disliked about RSS (and this could actually fix it) is duplicates. When a new LLM model drops for example there are like ~5 blogs about it in my RSS feed saying basically the same thing, and I really only need to read one. Maybe you could collapse similar articles by topic?
Also, would be nice to let users provide a list of feed URLs as a variable instead of hardcoding them.
I don't actually plan to run this as a service so there's some things hard-coded and the setup is a bit difficult as you need an API key and a proxy. Currently it's just experimentation, although if it works well, I'll probably use it personally.
Similar to Cameo but hyper-specialised for League of Legends streamers. If this shows some traction we'll expand to other games, and then to other industries (think a tennis star reviewing one of your tennis points; beats a generic happy birthday message?)
You connect GitHub, CI, Sentry, and Linear, and it takes tickets all the way to production. Claude writes the changes, BacklogAI handles tests, migrations, feature flags, staged rollouts, etc...
It’s clearing months of backlog work in hours, and a couple teams I’m working with have already stopped hiring because it’s cheaper than adding more developers. It's crystal clear that developers and designers won't be needed in a couple of months because claude increases productivity by at least 120x and one or two PMs can do pretty much everything.
My stack is Claude, v0, nextjs, shadcn, clerk, supabase, vercel.
If these productivity gains were real, no one would be giving you access.
Instead of filtering commands with heuristics (which agents work around), it dry-runs entire scripts in a PyPy sandbox, captures every command and file operation, then shows you exactly what will happen before anything executes.
I’ve just added checkpoint/rollback so you can undo changes if something goes wrong. Currently working on example scripts for common sysadmin tasks (nginx config, log cleanup, cert audits, etc.)
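The "capture instead of execute" idea can be illustrated outside the sandbox too; this is just a toy sketch (plain CPython monkey-patching, not the PyPy sandbox the tool actually uses):

    import subprocess

    # Record what a script *would* run by swapping subprocess.run for a recorder.
    planned_commands = []
    real_run = subprocess.run

    def recording_run(cmd, *args, **kwargs):
        planned_commands.append(cmd)         # captured, not executed
        return subprocess.CompletedProcess(cmd, returncode=0, stdout=b"", stderr=b"")

    subprocess.run = recording_run           # patch before running the target script
    # ... run the script here, then review planned_commands ...
    subprocess.run = real_run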
Started this incarnation on Dec 30, 2025 -- but it's the crystallization of decades of earlier prototypes, all the way back to my Commodore-64 Logo Adventure. Built on top of Anthropic's Skills framework, extended with seven innovations (and counting):
1. Instantiation -- Skills as prototypes creating instances with their own state
2. K-lines -- Names as semantic activation vectors (Minsky's Society of Mind)
3. Empathic Templates -- Smart generation based on semantic understanding, not string substitution
4. Three-Tier Persistence -- Platform (ephemeral) → Narrative (append) → State (edit)
5. Speed of Light -- Many turns in one call, minimal tokenization overhead
6. CARD.yml -- Machine-readable skill interfaces with advertisements
7. Ethical Framing -- Room-based inheritance of performance context
Lineage: Colossal Cave → TinyMUD → LambdaMOO (filesystem as world). Papert's Logo and constructionism (learnable microworlds). Will Wright's SimCity and The Sims (I worked on the originals) -- the "Simulator Effect" where players imagine more than you simulate, and the SimAntics visual behavior programming language.
YAML Jazz: Comments aren't ignored -- they're semantic. The LLM reads and interprets them. A comment like "# gentle but firm" on a character trait actually affects behavior. This inverts the traditional "comments are for humans" assumption. Comments become part of the program and data.
The core idea: instead of prompt engineering, you give the LLM a github repo filesystem to inhabit: a persistent microworld. Seymour Papert's Constructionist philosophy comes alive, with Minsky's K-Lines pulling the strings. Skills are programs (not documentation). Characters have persistent state in directories, and can reflect on and edit themselves. Everything is inspectable and editable by human AND model. Model and platform independent. Runs on Cursor and other tools and orchestrators.
The proof is in adventure-4 -- a complete text adventure with 150+ files, 6000+ lines of session transcripts.
Repo: https://github.com/SimHacker/moollm
MOOLLM Manifesto: https://github.com/SimHacker/moollm/blob/main/designs/MOOLLM...
The MOOLLM Eval Incarnate Framework: https://github.com/SimHacker/moollm/blob/main/designs/MOOLLM...
Adventure 4 Example: https://github.com/SimHacker/moollm/tree/main/examples/adven...
My sessions as proof it works: https://github.com/SimHacker/moollm/tree/main/examples/adven...
79 Anthropic Skills (standards compatible, plus extensions, intertwingled with k-lines) and growing: https://github.com/SimHacker/moollm/tree/main/skills
A guided tour through the MOOLLM skills and microworld -- Session Log: K-Line Connections Safari: https://github.com/SimHacker/moollm/blob/main/examples/adven...
Adventure Compiler Design Discussion -- Adventure Uplift Session Log: https://github.com/SimHacker/moollm/blob/main/examples/adven...
MOOLLM Kernel: https://github.com/SimHacker/moollm/tree/main/kernel
Happy to answer questions about any of the weird design decisions!
https://nthesis.ai/public/702445eb-f0e4-4730-b34f-f34eb06dd6...
Or you can do a basic text search: https://nthesis.ai/public/hn-working-on
But the graph view seems broken?
And for the data, it would be nice to have the original URL for each comment as a reference.
I didn't realize that you actually could provide a working link back to Hacker News but it seems HN does support that. Thanks, I will give that a try!
This is how the graph looks, not clustered by tag or anything ... (I was expecting a view like Logseq or Obsidian)
That being said, there isn't much data for this month yet. If you look at last month's data (https://nthesis.ai/public/e4883705-ec05-4e7a-83ac-6b878cc1e8...) , clusters are more apparent (particularly if you view the tags instead of summary).
Never published to Steam before, it’s been a fun learning process.
I'm building a newsletter called Tech Talks Weekly[1] where my readers get one email per week with all the latest Software Engineering conference talks and podcasts[1] published that week.
In January, I released a paid tier[2] where my subscribers additionally get:
1. Access to my internal database of all the talks and podcasts since 2020 (+48,000 in total) where they can search, filter, sort, and group by title, conference/podcast, view count, date, and duration.
2. See the list of the most-watched talks over the last 7, 30, 90 days, 6 months, and 12 months based on number of views.
3. Get category-based view of new talks & podcasts by tech stack, language, and domain (Software Architecture, Backend, Frontend, Full Stack, Data, ML, DevOps, Security, Leadership and every major language & ecosystem)
[1] https://www.techtalksweekly.io/p/what-is-tech-talks-weekly [2] https://plus.techtalksweekly.io/
We've built a new auth platform with some new identity primitives and capability-style tokens using biscuits.
Right now, I'm trying to figure out ways to apply it and am looking into offering integrations with extremely fine-grained access control that wouldn't have it otherwise. So adding a fine-grained access layer in front of stuff like backend-for-frontend (BFF) systems, brownfield stuff with poor auth, or even OAuth stuff that just have really coarse scopes.
Are there any integrations out there that people want but the access control is bad for them? I'll build one for you!
I’ll highlight a couple:
- an “aichat” command group that enables continuing work from a session that is at full context usage, by creating a new session and injecting session lineage pointers so the agent/sub-agent can recover arbitrary full details from ancestor sessions. So no more compacting needed.
- aichat search command: TUI for rust/tantivy-powered full text search across Claude and Codex sessions. CLI/json mode for agents to search for past work.
- Tmux-cli tool + skill to enable CLI agents to interact with scripts (including other agents) running in other Tmux panes. Like Playwright for the terminal. Multiple CLI agents can collaborate/consult etc. Agent can run and interact with interactive CLI scripts.
I’ll take a look at the others you have.
The aichat approach does require intentionally asking the agent to find specific earlier work: it doesn’t automatically have “awareness” of prior work. I think of this as the “unknown unknowns” problem. This is where creating explicit memory artifacts can be useful since we can pre-inject recent work-summaries into context. So I’m thinking about a lightweight hook based system to automatically create memory artifacts or work-logs of some sort.
Been working on a Google-sheet backed workout tracker, which basically makes it easy for me to see what I've done or not done recently and pick the next thing to do. I'm thinking of open sourcing this soon, but need to do some "de-monolithing" first.
I'm building a small web app with an interactive tutorial and a browser-based singleplayer game that helps people learn and practice Doppelkopf. I've just released an English version:
It's meant to be embeddable and hackable, serving as a building block for custom IDEs, as opposed to being an IDE like VSCode. I felt the web IDE space was uninspired, with apps built around VSCode/Monaco effectively being a hosted VSCode instance with a pre-installed extension and config.json. (Aside: perhaps there's a business opportunity for VSCode-as-a-service where client apps simply bring their own config.) I'm dogfooding this library in building an algo trading IDE.
It ships at 2 KB and smoothly handles files of 50+ million lines, or 1 billion lines with the high-capacity extension. It can also function as a TUI or terminal on the web, because the core implementation is about efficiently rendering plaintext in a fixed-width grid layout.
I see this doesn't use <textarea> or contenteditable, I'm curious how close can it get to native controls or can this replace them? Things like mouse selection and usual hot keys like Cmd+A and maybe other things.
Technically, there is <textarea> but it's strictly for a more seamless interface with copy/paste. The web standard's clipboard API was not satisfactory.
> mouse selection
There was previously mouse selection, but that was removed after I rewrote the selection logic. It will be re-introduced as an extension that can be opted into. It's relatively trivial for a mouse, but will be harder to make robust with touch events.
One tricky bit will be the UX of scrolling and selecting at the same time, especially on mobile phones.
> usual hot keys like Cmd+A and maybe other things.
The general philosophy is to not assume any native controls, have zero default key handlers and have clients bring their own extension that adds this functionality. Cmd+a in particular would perhaps be something I can add directly in the default keyboard controller.
The underlying API is defining operations from two coordinates. From the current exposed API you can also get the last line, and from there get the last coordinate. This could be a new API I expose. With (0,0) and the last coordinate, the keyboard handler would just call that.
The one limitation is that we have to convert a bunch of lines into one giant string for the user's clipboard. This overloads the browser's clipboard buffer once we get to a million lines of code.
The program is going to do what I currently do by hand: opening files and copying function/method signatures, usually from files all over the place.
The key here is to fetch into the context window only what is needed for one question/one answer and no more, hence the Context Minimization. The context to fetch is specified by the programmer: for example, sig(function) fetches only the signature, while @function captures the whole body. sig(Struct) fetches the fields and the signatures of all of its methods, and sig(Trait) works similarly.
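To make the sig() idea concrete, here's a hedged sketch of what fetching just a signature could look like (a regex over Rust source, purely illustrative; the real tool calls ast-grep and plans to move to tree-sitter queries):

    import re
    from pathlib import Path

    # sig(name): pull only the function signature into the prompt,
    # leaving the body (the @name case) out of the context window.
    def sig(path: str, name: str) -> str | None:
        source = Path(path).read_text()
        pattern = rf"^\s*(?:pub\s+)?fn\s+{re.escape(name)}\s*\([^)]*\)[^{{]*"
        match = re.search(pattern, source, re.MULTILINE)
        return match.group(0).strip() if match else None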
In my view, giving the A.I. more information than needed only confuses it and accuracy degrades. It is also slower and more expensive, but that's a side effect.
The project is in early stages, for the moment it calls ast-grep under the hood, but eventually, if it works as it is supposed to, I plan to move to tree-sitter queries.
If there is a similar project somewhere I would appreciate a pointer to it, but I am not interested in implementations of agents. My program does not give the A.I. a whole view of the codebase, only the necessary points specified by the programmer.
Doors lets you explore URL addressable 3D rooms that link together seamlessly via portals. The idea is that people would upload rooms to the internet (to github, S3, whatever) and connect them together to form one giant inter-connected space that would be a real trip to explore.
Right now rooms consist of:
- A manifest JSON file that points to requisite resources and configures portals
- An optional skybox
- An optional background music track
- A .vox file containing voxel terrain data
Here is a video I filmed on my phone of flying through a room that links back to itself: https://www.youtube.com/shorts/BCqOYTISS_k
Portals can be arbitrarily sized and everything is prefetched/loaded seamlessly in the background.
I'm nearly done - I just need to add in a very lightweight interface and give the code a bit of a spit shine (I will open source it - so I want it to look pretty)
EDIT: As an aside, I finally decided to give this whole Claude Code thing a go - I purchased a max subscription and I'm trying to write as little code as possible. I certainly wouldn't call what I'm doing "vibe-coding". I discuss a feature in plan mode (incl. how I want to implement it in high level terms) iterate on the plan 2-3 times until I'm satisfied and then let it rip. I'm both very impressed and quite frightened by the productivity boost...
Nottawa's a free macOS app for making live audioreactive visuals. I'm trying to position it as a 100% free, batteries-included alternative to Resolume and TouchDesigner.
Not a tonne of users yet, but I'm hoping to get some traction in 2026. Would love love love to hear some feedback!
Working on CiteLLM, an API that extracts structured data from PDFs and returns citations for each field (page + coordinates + source snippet + confidence).
Instead of blindly trusting the LLM, you can verify every value by linking it back to its exact location in the original PDF.
There have been lots of cool technical challenges through the whole process of building this, and a very nice variety of different kinds of work.
I'm working towards using the outputs from this language to build out levels and assets for a browser-based game I've been dabbling with over the past few years.
Now I'll have to bite the bullet and start working on marketing!!!!
https://disclosure.launchbowl.com/
A little cool tech detail: I didn't want a backend or to store the information from people's reports anywhere. To get around that, I made the form deterministic and populated it on load from a stream of bits (the form is made of booleans and bitfields) decoded from a base64 query parameter. A cool side effect of this approach is that I can update the query parameter in the URL in real time as you fill out the form, so if you reload, it remembers your form without any local storage or cookie use!
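The bit-stream trick is easy to sketch (Python here for brevity; the site itself does this in the browser, and the exact field layout is an assumption):

    import base64

    # Pack the form's booleans into bits, base64-encode for the URL,
    # and decode on load to repopulate the form.
    def encode(flags: list[bool]) -> str:
        value = 0
        for i, flag in enumerate(flags):
            value |= int(flag) << i
        raw = value.to_bytes((len(flags) + 7) // 8 or 1, "little")
        return base64.urlsafe_b64encode(raw).decode()

    def decode(token: str, n: int) -> list[bool]:
        value = int.from_bytes(base64.urlsafe_b64decode(token), "little")
        return [bool(value >> i & 1) for i in range(n)]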
It has AI summarize buttons (gemini-flash-lite is so fast!) along with other features I wanted. I'm almost done adding a "war mode". The user (me!) specifies a list of OSINT style x users which show up sequentially in a grid along with a ticker on the bottom of polymarket markets I've chosen. War mode is also obviously only available in dark mode...
I've never created a game before, let alone a NSFW one, and I'm not sure how it's going to go, but it is very different compared to other things I've done before. The game itself is done in Rust, compiles to WASM to run on the web, and I've found 3 artists and one voice actress who are helping me with the art/audio stuff. So far it's been a lot of fun, although managing a fleet (4) of contractors is less fun, though it's still new so there's a little bit of fun in that too :)
Still experimenting with different ways to make learning easier using LLMs.
I put together Codose as a tool where you paste a link to an Exercism or LeetCode problem, and it spins up a code editor with an AI tutor that walks you through the solution step by step, with mini lessons along the way when you need them.
You can try it without signing up, but I'm on the Google AI Studio free tier right now, so I'm not sure how many uses it can handle.
My friendgroup has gotten increasingly concerned with the gradual enshittification of various services we depend upon, and are looking at various alternatives. In some cases there are good selfhostable options (nextcloud, mattermost/zulip), but I decided to write my own tiny PWA to cover facebook-like needs.
The goal isn't really to scale to >1000 users, just to be simple to spin up for a small group and be easy to manage. I'm hoping to run multiple instances, eg one for family, one for college friends, one for local friends, etc.
My process has been pretty ADHD though. I recently read the phrase "It doesn't have to be done, it just has to be perfect" and felt personally attacked.
It’s fully open source: https://mogami.tech
I've been buying vintage lenses to try out.
I built it because I wanted access to my music (and videos) on my phone without running a full media server like Plex or Jellyfin, and without syncing files locally. The app streams files directly over a VPN (WireGuard / Tailscale), mainly via SFTP/FTP, and plays them as-is without re-encoding or server-side indexing.
You can browse folders directly, or let it build a lightweight local index for faster artist/album browsing on larger libraries. It started as a personal tool, but I’ve been polishing it after some feedback and opened a small TestFlight beta:
https://testflight.apple.com/join/PaQAsGcM
Still early and rough in places, but it’s already replaced my own setup.
- ETL is vanilla Python
- Orchestrated with cron and SIGUSR1
- HTTP is Nginx -> uvicorn -> FastAPI
Data lives in a CAFS indexed with Xapian and my current time to load is ~200ms
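The cron + SIGUSR1 orchestration presumably looks something like this (an assumption on my part about how the signal is used, not the author's code): cron fires the signal on schedule and the long-running process refreshes its data without a restart.

    import signal

    # Hypothetical handler: SIGUSR1 triggers a data refresh in the running service.
    def handle_refresh(signum, frame):
        print("SIGUSR1 received, refreshing the Xapian-indexed store...")
        # reload_index()  # hypothetical refresh step

    signal.signal(signal.SIGUSR1, handle_refresh)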
It's been A LOT of work so far. I came in with some skills and have learnt a lot along the way.
It's an "italian-oriented" curated remote-jobs and remote-work community, currently ~10k subscribers across newsletter and Telegram. What began as "let's share good remote roles" is evolving into paid job postings, sponsorships, and coaching for companies and devs.
Do you know you can hire remotely in Italy?
Building a webapp to keep track of your playlists and notify you when a song disappears
Here's a walkthrough of how ALT works: https://youtu.be/m2MER0KW3yA
versionary.app
Current status: JavaScriptCore builds in the JSCOnly config and panics on start.
JSC has extensive use of uintptr_t as a carrier of pointers and that's what most of the panics are about
I train BJJ and kept hearing the same complaints from academy owners regarding attendance tracking, comms, missing payments, etc.
So I'm building a tool for student tracking with belt progression, automated payments, attendance-based promotion criteria, and a tablet check-in system.
Focusing on Spanish-speaking markets first since it's completely underserved. Currently onboarding early academies, and will market it in the US/UK soon.
I built a browser plugin called "Visionary" that overlays meaningful descriptions and context directly onto stunning pictures of the day.
I noticed that existing picture-of-the-day plugins were built over two decades ago and never evolved to harness the capabilities of modern artificial intelligence. AI can transform the picture viewing experience by distilling complex descriptions into accessible insights and providing references to explore the core concepts in the photo more deeply.
You can get a sense of how this works by visiting https://picture.learntosolveit.com
Get it for Chrome or other Stores
https://chromewebstore.google.com/detail/picture-of-the-day/...
I've been using linux for a few years now as my main/only OS, but have mainly just used Linux Mint as a sorta plug-and-play distro.
Looking to revive my 15 year old ThinkPad (1st laptop ever!) by building up from a base Void linux install. As I'm doing it I'm writing install-scripts and getting my dotfiles in order (after never really doing so for 17+ years as a programmer lol), so I can repeat the process in the future on other machines, or when I want to do a fresh re-install.
So I started to work on a side project called "Arch Ascent" to address these situations. It seems to be becoming a kind of architectural governance tool for visualizing and validating software system dependencies.
- Dependency Graph — Syncs projects from SonarQube and visualizes component dependencies using Cytoscape.js, with graph algorithms for detecting cycles (SCCs), clustering (Louvain), and computing coupling metrics
- Visions — Workspaces for exploring architectural scenarios. Each vision can have multiple versions/variations with different layouts while sharing the same definitions
- Layers — Named groupings of components (e.g., "Domain Layer", "Team Ownership", "GitLab Groups") that can be visualized as colored regions on the canvas
- References — Named sets of components defined via Tag expressions, layer membership etc.
- Statements — Architectural intent constraints that can be evaluated against the actual dependency graph, such as existence, containment, exclusion, cardinality etc.
The plan is to also incorporate Grounds, which are Intermediate stable states on the path from the current situation (ground zero) toward a vision. Each ground represents a releasable milestone that moves the architecture closer to the target vision without necessarily fulfilling all its statements. Grounds enable incremental architectural evolution with well-defined checkpoints.
Stack: Django, HTMX, Cytoscape.js, pyparsing for natural language parsing of References and Statements.
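For the cycle-detection piece of the Dependency Graph feature, the underlying idea is standard SCC detection; a minimal sketch (networkx used here purely for illustration, the actual implementation may differ):

    import networkx as nx

    # Flag dependency cycles: any strongly connected component with >1 member.
    deps = [("orders", "billing"), ("billing", "orders"), ("orders", "catalog")]
    graph = nx.DiGraph(deps)
    cycles = [scc for scc in nx.strongly_connected_components(graph) if len(scc) > 1]
    print(cycles)   # e.g. [{'orders', 'billing'}]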
What are those? You know… open, freedom, and privacy respecting technology. Recent products are the Starlite tablet, Furilabs and Fairphone. We've been waiting for these products, well over fifteen years since the introduction of the iPhone and iPad.
We wrote our full thoughts on the subject at: https://aol.codeberg.page/eci/
Despite posting many times, we haven't been able to start a discussion here. Maybe we don't know the right key words, or posted too late in the day, not sure. But I know someone is interested in the subject because there have been three huge discussions this week about how "Linux is good enough now."
That has been true for a decade in my experience, but no one seems to be talking about the new mobile hardware available. I hope to work on bringing these efforts together.
I'm trying to solve this problem with AI agents that help interviewers to understand who actually can code and understand the code they're presenting.
A tool for bulk-deleting posts, replies, quotes, and reposts on the Threads platform.
Still adding new features
I’m not a developer by trade but I’ve been learning iOS dev for about 6 years now. It’s become my project that I just keep working at since I personally use it a lot.
The app lets you save your favorite locations, add notes to them, add photos, check weather, tag them for better organization, and archive those tags for future trips. You can also mark off locations that you’ve been to already: think breweries or a coffee shop when visiting a new city.
For the next update, I’m working on a task list functionality for each location. The idea came as a shopping list based on which stores I go to but it can work for any other context as well. This way I can get rid of my shopping list from my task apps.
In terms of weather, I’m also adding historical averages to the forecast to have some sort of context to the weather.
Also leaning more into marketing these days (hence this post) and designing a new icon with some custom art work to give the product some sort of personality. I started learning affinity design to just do it myself so I learn some design software along the way.
Anyways, if you download it, I’d love to hear some feedback. :)
I'm looking mainly at the European market, for customers willing to self-host their content or the model on their own servers; we provide the technical support for doing that and for easily plugging the AI-powered search or chatbot into their website.
But it is also possible to use servers provided by us to host the content and the LLM models.
What is really cool about it is that it natively captures connections between atomic ideas and evolves them, which I believe gets me one step closer to a syntopic reading machine.
Features:
- No Sign-Up Required + Instant PDF download
- Live PDF Preview
- Shareable Links
- Multiple Templates (incl. Stripe-style)
- Flexible Tax Support (VAT, GST, Sales Tax, and custom tax formats with automatic calculations)
- Multi-Language (Support for 10+ languages and all major currencies)
- Mobile-Friendly
GitHub: https://github.com/VladSez/easy-invoice-pdf
On holidays finished adding flexible tax support + other improvements: https://github.com/VladSez/easy-invoice-pdf/pull/163
Would love feedback, contributions, or ideas for other templates/features =)
I've live deployed an AI-constrained governance system on Ethereum mainnet - and I was hoping to share this with the community.
If this is NOT the right place.. please let me know if there is a better alternative in sharing my full repository.
https://github.com/thetaicore/tai-constitutional-architectur...
An alternative front-end / game discovery service / price tracker for GOG's catalog. Mostly manually enhancing the data from the API (90% heuristics, 10% human effort, as the total dataset isn't large enough for it to be worth doing otherwise), and offering a wider range of filters for it all.
I'm grouping all related products together for a more complete overview, and have recently added library import, where I mostly 'solved' the issue of GOG not recognizing that you own certain games if you got them as a freebie or as part of a since-delisted deluxe edition. Just now starting in on incomplete "series" listings, seeing what'd be involved with making them contain all relevant games, and then exposing that.
Cross-account, cross-region search. Need to find an IP? Easy peasy. Need to find all the cruft Todd left behind? Search "todd" and see every todd-test-server-1 and todd-alb hanging around.
I've added insights for security, ops, and cost savings – minimal right now but expanding.
Early access/MVP mode. Feedback welcome.
(Original comment was not a joke btw, it's part of another thing I will publish eventually)
I'm working on a subscription-based short-form video site called NICKEL[1]. I felt gross about using YouTube but wanted to share my gaming clips, so I made my own thing. Then I thought about making it sustainable so here we are. I'll have an update to the mailing list out in a few hours, I'm "building in public."
My feature-complete deadline is April 15th and I think I'll make it. If you want to check out the UI, visit the explore[2] page. I have it setup to redirect to a public video while I work on the intended UI (a design challenge I've never tried before but we've all seen). I'm thoroughly enjoying figuring out how streaming video works and how best to optimize things.
---
[1]: https://nickel.video
A transformer-based (but not LLM) chess model that plays like a human. The site right now is very rudimentary - no saving games, reviewing games, etc., just playing.
It uses three models:
* A move model for what move to make
* A clock model for how long to 'think' (inference takes milliseconds, the thinking time is just emulated based on the output of the clock model)
* A winner model that predicts the likelihood of each game outcome (white win / black win / draw). If you've seen eval bars when watching chess games online, this isn't quite the same. It's a percentage-based outcome, rather than the number of centipawns of advantage that the usual eval bars use.
Right now it has a model trained on 1700-1800 rating level games from Lichess. You can turn it up and down past that, but I'm working on training models on a wide variety of other rating ranges.
If you're really into computer chess, this is similar to MAIA, but with some extra models and very slightly higher move prediction accuracy compared to the published results of the MAIA-2 paper
Is cleaning the tank a lot of work? Agh!
https://helmtk.dev is a toolkit for helm chart maintainers, including a structured template language that can compile into helm templates, and a test suite tool for writing tests in JavaScript. Super handy I think.
https://blog.atlas9.design is about building a better software experience by solving more of the common stuff from the start: IAM, builds, API design, etc. I'm currently designing and building a Go-based framework to start.
Now working on a second tool that monitors public reports on illegal dumping, broken streetlights and more. It tracks how long the municipality takes to resolve them
Lately, I've developed an interest in local politics and started reading policy documents, following city council meetings and even lobbied for a local park. Presenting public data clearly can help shape opinions and keeps the city council accountable, especially important with the upcoming city council elections.
I'm 17. For the past 6 months, I've been diving deep into Rust to answer a question for myself: "Is Rust actually viable for complex, enterprise-grade backend architectures, or is it just hype?"
To test this, I built a full distributed e-commerce system (22 microservices, gRPC, Event-Driven) using Axum and Tokio. This is not meant as “how everyone should build”, but as an exploration of trade-offs.
Some hard lessons I learned along the way: Complexity vs. Performance: … The "Memory Shock": … (idle baseline; load benchmarks still in progress) Over-engineering: …
I'm currently squashing the final bugs…
I'd love to hear from senior folks here: …
We delivered our first 500 units last month and got positive reviews, but lots of small issues to straighten out.
https://x.com/jaybaxter/status/2001729207873999094
https://x.com/FeifanZ/status/2001168758740803781
Would love to pick one up if there's ever a sale!
Got tired of helping enterprises run Concourse themselves, so we productized it. We've deployed and maintained Concourse for Starbucks, Accenture, Sky UK, and others over the years—CentralCI packages that operational knowledge into a SaaS.
Why Concourse over GitHub Actions?
* fly execute lets you test pipelines locally before pushing. No more "commit and pray"
* fly intercept drops you into a running container to debug failures
* Resource-based triggers can monitor anything—not just git pushes
* Full pipeline visualization from dev to prod in one view
* Workers you actually control (no arbitrary cache limits or runner queues)
What we handle:
* Dedicated Concourse instances on high-spec hardware (Ice Lake Xeon, DDR5, NVMe)
* Worker scaling without the Kubernetes complexity
* SOC compliance, auditing, AWS PrivateLink for enterprise
* 24/7 support from people who've been running Concourse in production for years
The pitch is simple: Concourse is the right tool for complex CI/CD, but running it is a pain. We make it not a pain.
Long term memory for dozens of AI tools, designed for power users who want more control and flexibility than native memory systems and who do not want to be locked into any one platform. You can also have the system remember your entire chat history going back years and use this information to help you better in new chats, it sometimes makes chats 10x more useful when I say something like: “Using recall tool, do 10+ calls for 1000 tokens context each to learn about my interests, strengths, curiosities, what I’ve tried in the past, what worked, what didn’t, etc and suggest a new hobby I would enjoy”.
Without long term recall, AI is a super intelligence in your hands that uses the knowledge of the world to give you generic, nearly useless advice because of how generic it is. With long term memory, you have a super-intelligence that knows YOU. This is what MemoryPlugin solves for.
Also, my iOS apps; they're free and with no ads:
- Nonoverse (link: https://apps.apple.com/us/app/nonoverse-nonogram-puzzles/id6... ), a game about nonograms (image logic puzzles), now has 200+ levels.
- Polygen (link: https://apps.apple.com/us/app/polygen-create-polygon-art/id8... ), an app for generating low poly wallpapers and digital art, recently updated for latest iOS devices.
The core idea is a visual DAG where each transformation (filter, join, aggregate, pivot) creates a view node. Nothing materializes until you need it, DuckDB executes the full chain on demand so you can build deep pipelines without copying data at each step.
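The lazy view chain is straightforward to picture with DuckDB directly (a generic sketch, not the app's code; file and column names are made up):

    import duckdb

    # Each transformation is a view over the previous one; DuckDB only runs the
    # whole chain when the final result is requested.
    con = duckdb.connect()
    con.execute("CREATE VIEW raw AS SELECT * FROM read_csv_auto('sales.csv')")
    con.execute("CREATE VIEW filtered AS SELECT * FROM raw WHERE amount > 0")
    con.execute("CREATE VIEW by_region AS SELECT region, SUM(amount) AS total FROM filtered GROUP BY region")
    print(con.execute("SELECT * FROM by_region").fetchdf())   # chain executes here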
Input files can be CSV/Parquet/Excel (Excel might not work great). There's a SQL editor with schema-aware autocomplete, pivot tables with drill-down to underlying rows, and sessions can be exported as files or shareable URLs (the entire pipeline gets encoded in the hash).
Sharing can be granular: you can choose not to embed the files, and if the files are too big they won't be embedded; the user opening the link will then have to upload the files to restore the session.
The part I find most useful: you can replay pipelines on new data. Share your January analysis, and a colleague runs the same transformations on February's data with schema validation.
Privacy-first, since files never leave your browser; it's actually a static website. I will open-source it soon, probably under the MIT license.
Also, it's a WIP and so it may be buggy (there aren't even images on the homepage yet): https://repere.ai
A mobile app for triaging GitHub notifications in seconds. Available for iOS and Android starting next week.
Anyway, I built a way to chat with the documents in this post (updates live). https://nthesis.ai/public/702445eb-f0e4-4730-b34f-f34eb06dd6...
Or you can do a basic text search: https://nthesis.ai/public/hn-working-on
The only problem I have is that it's so effing expensive to run those games that I can't have a good number of games to claim to be any sort of legit benchmark. BUT so far the games that I paid out of pocket and ran are looking good and I think there is merit to this.
Also had lots of fun building on top of Cloudflare and solving some distributed systems problems while building this.
if you can help me run more games (for science!!) let me know!
https://blog.paulbiggar.com/full-optimizing-compiler-with-ai...
HackerNews with a better UI (same content)
Example: https://github.com/JaviLopezG Url: https://octocat.yups.me/ Repo: https://github.com/JaviLopezG/octocat
A one time secret tool https://iotdata.systems/apps/secret.cgi
A password generator https://iotdata.systems/apps/pass_gen.cgi
[0]: https://handle.antfu.me [1]: https://discord.com/discovery/applications/12117814899314524...
2. Production ready AI - mix of code and human eval
3. Understanding and building for the new agentic AI commerce
Currently on v1, but working on v2 release in the next month.
Only shared it via Show HN so far, and am still regularly getting some creative submissions. Will be sharing it at an art festival later this year so kids can have a more active role when visiting.
I'm expecting it is pretty niche, but animations tend to be very time consuming for people like me, and getting quick sprites that I can drop into a platform is a big time saver.
The project was 90% vibe-coded and I documented the tech stack here: https://www.8bitsmith.com/tech-stack
I vibe coded the book website over the holiday break - https://sector36.space/
I've been serializing chapters on Substack - https://sector36.substack.com/
https://github.com/acutesoftware/lifepim-ai-core
Only been public a few days, so please let me know if there are glaring issues.
Thought it would be a good way to learn about one form of marketing while building out some useful tools!
Looking for beta users and would love some early feedback!
[1]: https://oru.club
There’s a real-time collaborative workspace-oriented version, too.
Professionally, working on “Magic Draft,” a feature in Ditto to help designers and writers create the “draft and a half” directly in Figma, which uses a hierarchy of all your context (text, Ditto metadata, the design, your style guides, etc) to write really good starting point copy.
Now I am trying to use that model to make:
1. A post-game instant replay that shows the most important/pivotal moments from the most recently finished game. Some arcades have a separate display for observers, so it could work well there, or as good filler between matches on Twitch streams.
2. A personalized per tournament/yearly highlights recap.
If it works well, it might be a kind of tool that generalizes well for summarizing long twitch streams for Youtube.
some features:
- no monthly subscriptions
- location via GPS/GNSS
- a screen that hangs on my fridge (akin to the Marauder's Map, to see where the cats are at all times)
- the location data stays local always.
The tech will be extended to more products - a watch for adults, kids tracker etc. Will release here once I have all the tests completed!
I started about 3 months ago, focusing on making my 2 early adopters happy. One of them is ready to start paying soon!
Agents are codified for specific goals. Any business process that needs agent-based assistance is broken into workflows and steps. Each step is assigned to an agent. Integrations (API or file access) are requested. Then the user can try it out, tweak it, and finally deploy.
The aim is to build a DIY platform with configuration, tracing, and evals in one place. Code generation is used internally; the user doesn't need to write any code.
I'm around 99% coverage of the US. Learned a lot about land records and GIS along the way.
In the past two months I built https://xsdviewer.com to make working with XML Schemas simpler: visual structure, navigation, diffing, and faster understanding of what actually changed between versions.
Right now I am iterating on new features and performance improvements. If you regularly work with XSDs or XML based standards and have ideas, pain points, or feature requests, I would love to hear them.
It’s an open source project that basically turns your kubernetes into a developer friendly PaaS.
Just crossed 2k apps on the cloud version, no idea how many people run it locally, and thanks to a generous sponsorship from the Portainer folks, I’m able to work on it close to full time.
- Prototyping a cute little SSH-based sorta-BBS, inspired by the Spring '83 protocol, but terminal-centric rather than web-based. It's called Winter '78, and if we get another Great Blizzard this year, I'll be able to make some progress on it!
- Another prototype, for an experimental HPC-ish batch system. Using distributed Erlang for the control plane, and doing a lot of the heavy lifting with systemd transient units. Very much inspired by HTCondor as well as Joyent's (RIP to a real one) Manta.
A site for filtering word lists and solving word puzzles
Anagrams, regular expression search, and a crossword helper, as well as several NYT word games.
Planning to support a self-hosted version soon -- if you'd like to give it a try, ping me.
I moved last year and started a bunch of DIY projects, and each one turned into a bunch of drawings on napkins and backs of receipts to figure out how much wood/material I needed to buy to avoid "one more trip to the store". It tells you how much stock material you need, and shows you how to cut it, to minimize waste or minimize cost.
A few details
1D: works for boards, pipes, bars, etc. You enter lengths and quantities; it details the cuts onto the stock material.
2D (coming soon): will work for plywood and sheet goods and lets you see how your parts tile onto sheets.
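For the 1D case, a first-fit-decreasing greedy is the classic starting point; this sketch only illustrates the problem and is not necessarily the optimizer the app uses (the kerf default is an assumption):

    # Pack required cut lengths onto stock boards of a fixed length.
    def plan_cuts(cuts: list[float], stock_length: float, kerf: float = 0.125):
        boards: list[list[float]] = []
        for cut in sorted(cuts, reverse=True):        # longest pieces first
            for board in boards:
                if sum(board) + kerf * len(board) + cut <= stock_length:
                    board.append(cut)
                    break
            else:
                boards.append([cut])
        return boards   # each inner list is one stock board's cut layout

    print(plan_cuts([30, 30, 24, 48, 18], stock_length=96))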
Apparently the software engineer to woodworking pipeline is actually just a circle
Free, holds your keys in localstorage and makes direct calls to the APIs (unless there's a CORS issue), at https://evvl.ai if you want to try.
I'm thinking of reviving my Python SQL parser prototype I have half done. Or maybe resume my Mako template plugin for PyCharm.
TentFires is a variant of the puzzle game Camping / Tents-and-Trees, but with a huge overworld.
Creating the overworld led to some advancements in topology, which led to realizing those advancements can be used to accurately reconstruct a theoretical "metal die shape" that was used for a glyph to create impressions in old letterpress books.
As a result, my current first application of that is to remaster and re-release the first book of the "Hardy Boys" series from 1927, "The Tower Treasure".
To do that, I've constructed a "macrogrammetry rig", which is essentially a 2d x y panning machine using 3d printer parts and stuff from my local hardware store, and a camera with a macro lens, in order to "scan" the pages of the book at the highest possible resolution I can reasonably do so at, which is currently around 6000dpi.
In my old app, I was solely tracking the streets, but it became somewhat impossible to have pretty coverage after I moved cities. And it's more fun to see global coverage anyways. Over the past couple of years, I've been getting emails for similar features that would expose city/country wide stats as well. Going to share with the users of the old app once I port its features on this one.
Still actively polishing it over the weekends, but it's very fun to see all the places I've been to on a hex-based map, combine it with my friends' data, and so on. It's been a very fun project so far.
I'm not sure these projects will ever "go anywhere," but at the very least I'm honing my craft as a programmer. I've learned so much, and I have so much more to learn.
---
I've been building it with the agent sdk and any time I want an additional skill, I create it
Examples: parse this pdf containing my credit card bill and add all transactions
Given it has a db, I've been using it to save notes, ideas etc.
Been fun
I'm rewriting from scratch : https://simplew.net/v26/
I'm building a system that reads Slack, listens to Google Meetings, user complaints, etc and gives me prompts I could feed into coding agents or planners.
Problem-to-prompt seems like a larger obstacle than coding these days, I wonder if it's solvable, and if solving it makes cheaper coding agents viable.
I wrapped this up a while ago, but sharing because a few friends found it useful.
https://quickfit.dre-west.workers.dev/projects. (click "signin as guest" -- the signup doesn't work so don't worry about that. a friend added it so I could save and access on different computers but we ended up not finishing it)
for example, RecolorLife.com and Headshoti.com generate around $800 USD.
Now I will expand for real estate.
During the December break, I have implemented some new features: automatic stable refdes annotations, parameter assignment rules for easy part number assignment and some ERC rules. These are also important parts of the design workflow to help turn a schematic into a usable BOM and layout.
I have created a usb-uart converter board with the CH340 chip. The complete schematic was coded with Circuitscript and then imported as a netlist into kicad pcbnew to do the pcb layout. The design was produced with JLCPCB and after receiving the boards I tested them and they do work! The design files are here https://github.com/liu3hao/usb-uart-bridge. The circuitscript code file is here https://github.com/liu3hao/usb-uart-bridge/blob/main/usb_uar... and the generated pdf from the circuitscript code is here: https://raw.githubusercontent.com/liu3hao/usb-uart-bridge/re...
The motivation for creating Circuitscript is to describe schematics in terms of code rather than graphical UIs after using different CAD packages extensively (Allegro, Altium, KiCAD) for work in the past. I wanted to spend more time thinking about the schematic design itself rather than fiddling around with GUIs.
Please check it out and I look forward to your feedback, especially from electronics designers/hobbyists. Thanks!
Using existing component libraries via some sort of importer code looks like it might be needed to gain momentum.
For the past several years I would look up the day lengths and sunset times for my location and identify milestones like “first 5pm sunset”, “1 hour of daylight gained since the winter solstice”, etc. But that manual process also meant I was limited to sharing updates on just my location, and my friends only benefitted when I made a post. I wanted to make a site anyone could come to at any time to get an optimistic message and a milestone to look forward to.
Some features this has:
- Calculation of several possible optimistic headlines. No LLMs used here.
- Offers comparisons to the earliest sunset of the year and shortest day
- Careful consideration of optimistic messaging at all times of year, including after the summer solstice when daylight is being lost
- Static-only site, no ads or tracking. All calculations happen in the browser.
Turns out there are a lot of words and some are more useful than others!
Find local businesses with no websites (or check bad ones with built in SEO tools).
Build contracts, create billable invoices and track tasks for clients with a lightweight, web dev focused CRM.
Essentially, an all in one platform for web developers to find and manage their clients
EDIT: It's only on iPhone and iPad for now. Android version coming soon.
I often find myself losing important conversations/chats when talking to ChatGPT over time. So I created an extension that enables you to bookmark conversations within the UI. Just hover over any chat and click on the save button. Hope you find it useful.
Most software engineering interviews are a “signal crisis”. They measure recall, not real-world engineering.
I'm working on moving the evaluation from “writing code under pressure” to technical judgment via structured code review.
InfoCaptor — https://my.infocaptor.com [YouTube video insight extraction tool]
MockupTiger Wireframes — https://wireframes.org/ai-wireframes AI wireframing software for product ideas
CrawlSpider — https://www.crawlspider.com/product/internal-link-building-w... WordPress internal linking automation platform
Vizbull — https://www.vizbull.com custom photo collages (https://vizbull.com/collage-maker)
Wrote an introduction to Obsidian referencing other relevant posts, but also keeping it simple.
And the post before that was about creating an app using SvelteKit + Capacitor.
Currently working on some posts about AI coding and my life in Osaka after 3 months here.
Other things I'm working on:
- https://dailyselftrack.com/ - Got into working on it again, mainly solving some UX problems currently.
- https://game.tolearnkorean.com/ - Learn Korean words quickly; words go from easy tasks (e.g. matching pairs) to more difficult ones (writing them). It still needs some slight adjustments, and then I'll release an Android version.
- https://app.tolearnjapanese.com/ - Wanted to learn Hiragana quickly, used my existing project as a base to build this. Needs some adjustments as well, feedback is highly welcome.
- https://tolearnkorean.com/ - Since I'm learning Korean, and also working on an app to better learn Korean, I also want to make a guide on learning Korean, improving my own skills by teaching others.
Thanks for sharing
- One time when matching pairs, two different sounds played. Not the automated voice plus (presumably) yours, but two distinctly different sounds - E and another one, I think it was.
- One run the male voice played when I clicked on the Japanese symbols to the left when matching pairs. All the other times it played together with the automated voice.
- The error log empties itself despite not leaving the current run, so you can't see what you did wrong. Not sure if it's page specific, but if so it ties into this next point.
- When matching one symbol to four different sounds, the page transition is too abrupt. This is also a thing on the other pages. I'd say better UX here is to give feedback that you did it right, and let the user choose to continue. A score too, if you'd like to "gamify" it.
- The options for modifying runs don't work. I was interested in seeing the other tasks, but even when enabling only the task or tasks I wanted, I kept getting just match-the-pairs and match-the-symbol-to-the-sound.
Regarding the duplicate sound, which browser & device did you use?
“Protein Data Bank-in-a-Box”
Also, trying to finish a PhD on machine learning when you want to minimise a p-adic loss.
Also open-quiz-commons, the mcq dataset that powers the above quiz app: https://github.com/prahladyeri/open-quiz-commons
* Schematra https://schematra.com/ - a web "framework" written in CHICKEN scheme
* Lots of (unpublished, but will try to do so soon) eggs that spawned from building schematra & KartInsightsPro
* llm.scm (inspired by ruby's llm gem)
* imgui.scm
* aws.scm (support for core AWS services like SSM, S3, other APIs)
* umami.scm
You get the idea. I started playing with CHICKEN to scratch the itch of building something in Scheme and I couldn't stop. Using ast-grep as a skill in Claude Code makes it a lot easier to edit code as well. (Edit: formatting)
Recently added role sync for SAML, OIDC and LDAP. These auth protocols are complicated, though.
input <- old_image new_image
output -> report
Example summary:
================================================================================
TOTAL
================================================================================
-20 insns, -40 bytes (c.insn→insn +40b) [spill -22, cmov +3, call -2, br -1] mem +13, alu -6, mv -4, bitmanip -1
Added/Removed: +0 / -220 bytes
Functions: 4 better (-20 insns), 0 worse (+0 insns), 1 removed, 1062 unchanged
So I've been building a distributed AI inference platform where you can run models on your own hardware and access it from anywhere privately. Keep your data on infrastructure you control, but also leverage a credit system to tap into more powerful compute when you need it.
We're planning a closed beta in February for folks on the email list – testing the core flow of running your own node, accessing it remotely, and participating in the credit economy. Looking for developers and teams who want to kick the tires on this before we open it up more widely.
If you're interested in the beta: https://sporeintel.com/
https://github.com/sastrophy/siteiq
I am a student fascinated by physics and cybersecurity. To understand more about the fields of cybersecurity and web development, I started this project a few months back. Any feedback from all of you will be really helpful in taking this project forward.
The main idea is to reduce the friction of using multiple tools by putting everything into a single workflow. You can create visuals from text, generate background music by mood or style, and turn ideas into short videos without switching platforms.
It’s still early, but I’m actively improving the product and adding more creative features.
If you’re curious, feel free to check it out https://palix.ai/
I've been super hyped about it. My main goal at first was supporting `navigation.canGoBack` and the `currententrychange` event simply so that I could implement perfect back buttons with (almost) zero edge cases. This is surprisingly very tricky with the older History API.
Today I just opened a PR with support for `entries()` and `currentEntry`: https://github.com/kcrwfrd/navigation-ponyfill/pull/33
Once that lands I'll be able to start thinking about implementing full-blown navigations with it, with cool stuff like navigation interceptors and whatnot. That would actually enable framework router implementations wholly based off the Navigation API, rather than simply patching some enhanced capabilities for use with a router based on history.pushState / history.replaceState.
Here is the current list of features:
- Unified inbox: Connect multiple Gmail accounts and see all your emails in one inbox. No more switching between different tabs or accounts.
- Email bundles: Automatically group your emails together into bundles, so if you have many emails from the same sender, they'll no longer clutter up your inbox.
- Email summarization: Save time by getting summaries of your email threads.
- Keyboard shortcuts: Navigate through your inbox using Gmail-style keyboard shortcuts.
- Automatic dark mode: All your emails are converted to dark mode automatically.
- Block email trackers: We detect and block email trackers to protect your privacy.
- Privacy-first: We sync your emails from Gmail and cache them locally on your device - they are not stored on our servers. We don't train on your data or use AI providers that train on your data.
Lots of other features coming soon, like split inbox, AI search, etc. It's in early access now, would love feedback from anyone interested in checking it out!
I'm also curious to hear - what is the most annoying thing about email that you have to deal with today?
Most recently, we added support for benchmarking and US stocks, ETFs, etc.
Benchmarking is an interesting topic. I am not sure why, but most personal finance tools are quite bad at this. Our benchmarking tool lets you see how your underlying portfolio is doing versus how your timing decisions are faring. Portfolio comparison is done with a NAV chart (the NAV calculation is the same one mutual funds use), and comparing the value chart against the NAV chart helps you understand whether your timing decisions are helping.
NOTE: you can try demo without signup, but it doesn't work in Firefox Incognito mode.
It started as a way to fix audio drift in my multi-room setup without using large jitter buffers, but it turned into a standalone project where I started learning more about NIC driver latency. I found that if you isolate a specific CPU core (isolcpus), disable all interrupt coalescing and busy-poll the RX ring buffer with a custom AF_XDP socket, you can characterize the PHY and PCIe bus latency jitter to within a standard deviation of a few ns on generic Realtek hardware.
I'm using this disciplined clock to do TDOA (time difference of arrival) triangulation of RF signals inside my home. I have three anchor nodes running this stack connected to SDRs. I can currently track the physical location of my dog's collar to within 15cm in 3D space by correlating the signal peaks. The hardest part is writing the solver for the multipath interference. I'm implementing a custom unscented kalman filter to reject the signal reflections bouncing off my refrigerator and radiators. I know what you're thinking. Yes, it sounds totally excessive but getting sub-microsecond synchronization without an FPGA switch feels like magic.
It's been a while and I don't remember all the details, but I was trying to synchronize two B210s running on different systems as an NR BS and UE. GPSDOs were too costly and I did not have an external PPS. Regardless of what I did, after the initial sync the two SDRs would drift by a significant amount within a couple of seconds. I gave up on solving the problem in hardware and instead adjusted for the timing and frequency offset at every SSB, assuming everything stays synchronized for a two-frame duration (2x1024us). I moved off that project and I still don't know if there's a better way, or how mobile phones actually implement RX.
https://bittorrented.com - a torrent-first streaming platform
http://marksyncr.com - a free bookmark synchronizer web extension for Chrome, Safari and Firefox-based browsers
http://defpromo.com - a zero-cloud self-promotion web extension for all browsers; helps automate commenting on social media posts to promote your product (API keys required)
http://coinpayportal.com - a non-custodial crypto payment gateway; easy to integrate (similar to Stripe, with webhooks) and supports managing multiple businesses
Have you considered other pricing models? Maybe buying credits, or them rolling over?
https://feedbun.com - a browser extension that decodes food labels and recipes on any website for healthy eating, with science-backed research summaries and recommendations.
https://rizz.farm - a lead gen tool for Reddit that focuses on helping instead of selling, to build long-lasting organic traffic.
https://persumi.com - a blogging platform that turns articles into audio, and to showcase your different interests or "personas".
The game engine I work on (link in bio) developed a custom language for engine customization, but it never had any runtime debugging capabilities. I've been adding that over the last couple weeks.
The language compiles to a bytecode, which is what the engine runs. But the bytecode has no source information.
The easy part was getting source locations for stack traces (associating each generated bytecode with an originating source line of code).
Harder was getting scopes information for displaying function names in those stack frames, and for associating runtime variables per-scope (stack variables, globals, class fields, etc.)
Now I just need to create an expression parser for evaluating expressions at runtime based on the current scope, add breakpoints and step over/into/etc. capability to the engine, then wrap it all together in a VSCode extension.
https://web.zquestclassic.com/zscript/ (this doesn't show what I'm working on, it's just a basic web text editor to quickly test the language. can't run it from here).
If you love reading, want to have an easy way to access your collection, and share it with a few people, this is for you.
https://github.com/colibri-hq/colibri
I’m looking for contributors who would like to help shape Colibri into a counter-approach to big tech; it’s specifically meant to never get monetised, and explore what such software can become.
I have been working on Wait: https://github.com/tanin47/wait, a self-hostable CORS-enabled headless waitlist system that connects to Google Sheets.
I have many landing pages hosted for free on Netlify and Github Pages as static pages. All of them have waitlist forms that send cross-domain AJAX requests to the Wait server, which then writes the emails to Google Sheets. Since there's no iframe, it's easier for me to style the form and customize the after actions.
The Wait server is hosted on OVHCloud for $4/month. It's probably the most economical option for a waitlist system.
I guess one advantage here is that the user is not locked into a specific hosting provider.
Any alternative that is hosted or uses an iframe will run into this kind of friction.
In comparison, with Wait, you'd just call `fetch(...)` and do whatever you need after `fetch(..)` succeeds or fails. For example, one landing page might say thank you afterward. Another landing might show the installation instructions after the user submits their emails. The whole code is controlled by you.
It's like you call your own backend except it's hosted in a different domain, and your landing page can be hosted as a static site with no backend.
If you are interested in trying it out, I'd love to work with you to make it successful for you. Thank you!
I did one that writes to a Notion DB and is hosted for free on Cloudflare. I don't think I bothered to open source it though.
Moving from autotools to Zig's build system was a huge quality-of-life improvement. Most of the work since has been deleting features and tidying up the code to make it easier to reason about. The more I work on the project, the more I realise there may not be that much value in porting C to Zig. Instead, I see huge value in the usual stuff: clearly defined functions, preferring stack to heap allocation, minimising global state.
But it’s still quite early days. I’m looking forward to porting the trickier network code to Zig and using its built-in testing.
It's been approximately a year since we've been up and running. We've helped 100+ businesses in the US and worldwide understand the value and savings of automated route planning, and prepared tens of thousands of optimized routes for companies operating from 1 to 50 vehicles. All this while keeping our operational expenses low and flat, thanks to our local-first route optimization engine running in the browser and our reliance on OpenStreetMap.
It's the Survival Crafting RPG you would have played on your Apple II back in the 80's.
You can think of it as Valheim's gameplay crammed into the tile-based ui of the old Ultima games.
It has a procedurally-generated open world with towns and NPCs to talk to, all the resource gathering, mining, crafting stuff you'd expect in a modern survival game, and some good old fashioned dungeon crawling to boot.
I've been working on it off and on for the last several months. Let me know what you think!
Just wanted to let you know that for me, on FF and Chrome, your blog is rendering rgba(235, 235, 235, 0.64) text on white BG, and I'd really like to be able to read it.
Edit: Also immediately reminds me a bit of UnReal World[1], in a good way
And thanks for the rgba value. It looks like a vestigial remnant of a dark mode theme that got pulled along over the years. A fix is working its way out to the server now.
You might even say an Obsidian alternative
The TL;DR:
- Currently in free Early Access with 18 competitive mini-games.
- Players use their mobile phones as controllers (you can use game pads as well!)
- Everything is completely web-based, no downloads or installs are necessary to play
- All games support up to 8 players at a time and are action based, with quick ~one minute rounds to keep a good pace. This means there are no language based trivia or asynchronous games!
- In the future the plan is to open up the platform for 3rd party developers (and Gamejams!) as well. We'd take care of the network connectivity, controllers etc.. 3rd party devs can focus on developing cool multiplayer mini-games without spending an eternity with networking code and building the infrastructure.
The core idea is to make it easy to capture page content, highlights and images; everything is formatted as Markdown and easily editable in a performant little editor window.
Then from there, I’m working on multiple export services like Clipboard export and popular PKM apps like Obsidian, Roam Research, Capacities etc.
Currently experimenting with semantic diffs for the merge conflicts editor: https://codeinput.com/products/merge-conflicts/demo
You can try it by installing the GitHub App, which will detect PRs that have a merge conflict and create a workspace for them.
Hey all,
Looking for feedback on my site!
Feel free to roast the concept, UI, features - all fair game.
- Ben
RespCode puts you back in control with 3 orchestration modes:
Compete Mode — Send your prompt to Claude, GPT-4o, DeepSeek, and Gemini simultaneously. See all 4 solutions side-by-side, compare approaches, and pick the winner. You're the judge.
Collaborate Mode — Chain models together in a refinement pipeline. DeepSeek drafts → Claude refines → GPT-4o polishes. Each model improves on the last, with full visibility into every stage.
Consensus Mode — All models generate independently, then Claude synthesizes the best parts into a merged solution. Democratic code generation.
But here's what makes it actually useful: Every piece of generated code runs instantly in real sandboxes — not simulated, not mocked. We support x86_64, ARM64, RISC-V, and ARM32 architectures. You see compilation output, runtime results, and exit codes in seconds.
Why "Human in the Loop" matters: AI models are powerful but imperfect. RespCode doesn't hide this — it exposes it. When you see GPT-4o produce clean code while Gemini's version has a bug, you learn which models excel at what. When Collaborate mode shows how Claude fixed DeepSeek's edge case, you understand the refinement process.
You're not just accepting AI output. You're supervising, comparing, and deciding. This is how I believe AI development tools should work — transparent, multi-perspective, and always keeping humans in the decision seat.
Here's a detailed blog post on why this could solve your coding problem - https://respcode.com/blog/why-single-model-ai-assistants-hol... Would love your feedback!
Stack: Bun, Next.js, SQLite, Kysely. Been building it almost entirely with Claude Code which has been a surprisingly effective workflow for this kind of infra tooling.
Some programmers might find it trivial to use Puppeteer, and some may want to hand-build their OG images (you can mix and match), but otherwise this tool is very handy and set-and-forget.
I just created a Show HN for it https://news.ycombinator.com/item?id=46585378
Lots of work to automatically filter and process scraped footage into something that will train well.
As a layperson, I find this an approachable way to get an overview of a topic. However, I am only interested in a few select topics, and I was not able to find a way to subscribe to specific ones, such as #insulin-resistance (topic request ;-)
Another thing I really value in science YouTubers (e.g., youtube.com/@Physionic) is the deep dives they offer into the research—highlighting conflicting results, paying special attention to meta-analyses, etc. That would be amazing, although I realize it may be too much to ask.
Adding personalised highlights of topics of interest into the monthly emails is my top priority for 2026. Won't be too complicated, just have to set up the right API back-and-forths etc.
Good point about deep dives. I could do that for the topics I'm an expert in; however, by definition, I think this requires domain expertise and is therefore neither scalable nor easily automated. I'm not sure if this fits the scope of the current project as I want people to find the papers of interest, read a quick summarization and then encourage them to read a selection of papers themselves.
It’s a modern way to navigate and find home care providers in Australia. We built it after working in the industry and seeing families struggle a lot with navigating the complex aged care system (at a time in their lives which is already very stressful).
It’s very early days but it has been very enjoyable to think back on all the issues and pain points we saw in our careers and build solutions around these.
Hopefully it helps some Aussies and their parents with staying in their own home, and in feeling more confident with the decisions they make through the journey.
The stack is: Nuxt for the front-end, Postgres, and FastAPI for the backend. We set up Zoho instead of Microsoft or Google Workspace and it’s been very smooth so far (and more cost effective).
One fun part of the dev work has been learning all the magic of PostGIS - if you ever have an idea that involves geospatial data or maps, go ahead and try building it with PostGIS in the db.
... so I'm building an open source version.
Track all your trades in Excel, and get Sharpe ratios, Sortino ratios, or even pass it on to an LLM to have it recommend trades based on news feeds.
Planning to open source it in the next week or two, once I add the proper tests and docs! :)
GitHub: https://github.com/valbuild/val
Intro video: https://youtu.be/83bnYGIsm5g?si=5LN7dxnARrS4jNEx
What sets it apart is that it 1) stores content in TS / JS files and 2) is a fully fledged CMS. It is designed to be nice to work with from the start of a project (when the structure of your app changes all the time) -> while everyone works on individual PRs -> to the end when the project is decommissioned.
It needs no cloud APIs, no DBs, no caching. No query language to learn. No sign-up to get started. It is fully type-safe and needs no type generation. You can rename and refactor content from your IDE. It works amazingly with Cursor and friends (local content and schema + strong type safety + validation).
Current requirements are: Next.js and GitHub.
APIs are pretty stable. UI is in the process of a revamp. Will do a proper show hn some time in the near future.
It's a macOS desktop application that uses browser agents to update your old and compromised passwords.
It started off as a side project for myself after running into a compromised-password email. Since then, I've expanded it into a macOS app + Chrome extension for navigation. It's been so much fun building this application and learning about AI agent management while enforcing security/privacy best practices. I rewrote this app 4 times from scratch before launching it a couple of weeks ago. Please check it out and let me know what you think!
I’ve always loved making games but had the pain of rebuilding the same systems every time. Wanted my own editor with reusable components I could iterate on quickly. Most no-code tools felt too limited, so I built my own.
This week I shipped something I’m really excited about: a visual event system (think Unreal Blueprints / Unity visual scripting). It encapsulates the full UI editor—which wasn’t easy to navigate—into something way more approachable. Physics, combat, inventory, dialogue, pathfinding, all wired up visually: https://youtu.be/8fRzC2czGJc
Every game is multiplayer by default (WebSocket with client-side prediction, zero server config).
55+ systems under the hood, but the visual editor is now the main way in. I use it for my own games and improve the editor as I go. Turns out it’s also useful for asset creators who have beautiful sprites but no way to ship playable games, and devs who want to prototype fast without setting up infrastructure.
Some examples:
∙ Survivor-style: https://craftmygame.com/game/e310c6fcd8f4448f9dc67aac
∙ Platformer: https://craftmygame.com/game/f977040308d84f41b615244b
Stack: Next.js, Three.js, sockets.
Challenges now: I believe people can build complete games with it, but convincing them they can isn't easy. I'm also trying to reach people who want a game made without building it themselves, so I can keep the project going.
If you are curious and want to give it a spin, it is totally free. Feedback welcome, especially on the first-time experience.
1) Addition of modules that you would want to use directly during your testing (think encoding, encryption, request / response manipulation)
2) Ability to extend the GUI of the application. This was challenging as Marasi the proxy is meant to be used as a library and the GUI is a separate application. Extensions run on the proxy and not the GUI application. Right now I've settled on having the GUI inject its specific render and update methods to handle that bridge.
You can check it out at https://marasi.app or the repo directly (https://github.com/tfkr-ae/marasi-app), but I would recommend downloading the latest development release, as it includes a fix for the UI locking up if there is a lot of traffic going through the proxy.
I always struggle to share files between my devices, or to navigate them. Why do I need servers, or dropbox or wetransfer?
Inspired by croc, rclone, syncthing and magic wormhole, I'm close to releasing KeibiDrop as MPL2.0.
It has a nice slint.dev GUI, works cross-platform on mobile and desktop (via FUSE or no-FUSE), and has post-quantum encryption at the transport level.
No clear monetization path, but I also tinker with unikraft in order to host a relay server (for key negotiation, or other things) as a unikernel cloud function.
The problem I’m trying to solve is how fragmented and emotional this decision usually is. Instead of blogs and anecdotes, the tool focuses on side-by-side comparisons of safety, cost of living, quality of life, visas, and taxes, so people can reason about tradeoffs more clearly.
No recommendations, no push to move. Just structured information to reduce uncertainty around a high-stakes decision. https://newlife.help
I also wanted to use the opportunity to mostly vibe code it (I got a month free Pro from OpenAI, so heavily used Codex). I also wanted it to be hosted for free so built it around Cloudflare Pages, Workers and KV store. It’s basically a mobile-first weekly view that shows “what’s happening this week” for the whole family. I also wrote a small series about building it and it shows my LLM workflow with the guardrails using AGENTS.md etc.
Write-up here if anyone’s interested: https://michaeldugmore.com/p/family-planner-vibe-coding-rule...
Github repo here: https://github.com/mdugmore/family-planner-public
The goal is to make you write more. Tonnes of features, including posting by email (my favourite way to blog).
I love blogging and I want more people to go back to writing on their own blog and reduce time on the socials. It seems to be striking a chord with people.
Free classic plan, bargain premium plan at only $29/year.
Source available (Rails) at https://github.com/lylo/pagecord
“RSS. Nothing more, nothing less”
* Upload a GPX file -> see the route, map and key stats.
* Store every hike, bike ride, walk, or trek in one place.
* Think of it as a “personal notebook” for families, not a leaderboard.
I built it to keep the stories of my own kids’ adventures. It’s still in early‑stage development - if you’d like to test it out or share ideas, drop me a line!
One suggestion - in the demo mode it would be nice if you provide a sample GPX file which you can try right away.
Thanks for your interest and the suggestion. I added it to the demo mode.
There was a bug in the PWA: you need to unregister the service worker in your browser, otherwise the app will never update on your device.
The goal is to take a source video, generate translated speech in a consistent voice, and keep timing close enough that it feels natural. I'm planning to open source parts of it in 2026.
If you do localization / run a YouTube channel / ship training videos, I’d love feedback on where current AI dubbing fails most! Not only on my service, but others as well.
Also, this Christmas I took a break from Pulso and developed a small app to monitor the version & support lifecycle of large dependencies. https://stacktodate.club
So I added a weekly poll. I pick a stock, show some key facts, and let people vote and discuss whether it’s overvalued or a hidden gem. Takes a few minutes, usually learn something.
I got tired of seeing people lose their accounts to "unfollower" apps that require login credentials and use unofficial APIs. Instead, I built this to parse official Instagram GDPR data exports 100% client-side in the browser.
It’s a Vue 3 + TypeScript SPA. There is no backend; all the ZIP extraction and JSON parsing happen locally so the user’s data never leaves their machine. I even added a "Security Audit" feature to help people find suspicious login activity in their own data.
My biggest challenge right now is the UX friction. To stay safe, the user has to navigate an 11-step manual export process on Instagram to get their data. I’m trying to figure out if the "privacy/safety" benefit is enough to convince non-technical users to jump through those hoops.
The problem that I am repeatedly facing is that I am trying to build a home server and I keep asking chatgpt questions, but it is hard to keep all the little details in one place. The way I see it is that I can just text my assistant bot and ask it something like "hey, can you research which NAS setup would be the best for me given x and y". It will offer some setup and I would say "can you add it to the plan" and "can you plan the next steps for me?". The bot will also update the knowledgebase and version control it.
You might also want to use it for something like planning a trip to Paris, where at some point you might say "hey, given my schedule, can you squeeze a tour to top5 croissant places in the center of Paris".
The whole thing sounds really vague, and like something solved long ago, but I cannot find solutions that are guaranteed to stick to a very precise plan that I can review at any given moment. If you happen to know existing solutions, please let me know. I really don't want to build this thing.
It takes in your calendars and gives you time slots where you can put your head down and work! I built it for myself, but a few people found it interesting, so I thought I'd publish it.
If you are someone who manages multiple calendars and multiple calls per day, then you will find this useful!
It's already working, but it requires so many tweaks and adjustments that make the project hard to finish-finish.
It's controllable by an ESP32, can run automated cooling benchmarks (to find the power vs temp sweet spot) and is pretty much all made out of metal, not 3D-printed – I've learned a ton about working with metal, especially around drilling, cutting, and tapping/threading. Who knew precisely drilling a solid copper block could be so tricky at times (saying this as a person who had never drilled anything except wood/concrete before)!
I also like the idea of doing elaborate ASCII art with a typewriter.
Currently just the cloud hosted version (https://gitncoffee.com), but plan to open-source most of it when it's more complete.
If you want to see a repo example, https://gitncoffee.com/gitncoffee/git-demo
The next step is to improve the landing site. I like the current no-frills design, but unfortunately I think I'm in the minority.
Feature-wise I'm planning to add support for getting alerts via Slack. I'd also like to make use of certificate transparency logs, alerts to start with but maybe for other stuff as well.
The stack is C#, .NET 10 and PostgreSQL 18 running on Hetzner VMs. I also self-host Forgejo and OpenSearch on Hetzner VMs. I recently switched from VS Code to JetBrains Rider and I think the latter is much better for C#. (I'm using Linux so Visual Studio is not an option for me.)
It helped me stay more consistent with IF - added visual indicator of previous fasts in a little calendar widget on the dashboard so every time I open it, I see my streak visually, which has been a great motivator to continue.
I also added a BMI calculator, as with the NHS one I had to enter all my values from scratch each time I visited, which got a bit annoying :)
I think I'm still the only user, but it's given me a lot of value as I beat my plateau!
The original dashboard was written in a hurry. It’s solid and effective, but clunky. It was written in UIKit, and uses the same backend as the main app.
The new app is being done in SwiftUI (with all its strengths and weaknesses), and has a new, streamlined, backend. It’s much faster and simpler than the original one, but will have to give up a couple of features, due to SwiftUI’s limitations (but it also adds one).
This project is also a test of LLM integration into my workflow. So far, it’s been quite helpful.
It looks inside each file to see what it's about, then suggests the right folder for you.
Everything happens on your Mac, so nothing leaves your computer. No clouds, no servers.
It works in 50 languages (including English, German, French, Spanish, Swedish) and with images (OCR and object recognition), PDFs, Microsoft Office, ePubs, text, Markdown, and many other file types.
Next, rename files based on their content (e.g. 123023dfawjher.pdf → finance_chart_fy26.pdf).
For messy folders anywhere on the Mac, Floxtop can help.
Combines AI review (Ollama/Claude/Gemini) with static analysis in one place.
- Multi-LLM support (can run locally with Ollama),
- static analysis,
- GitHub/GitLab webhooks.
- Developer pulse and leaderboard to gamify reviews
- LLM cost monitoring to avoid surprise bills
Nothing fancy, just something I built for my own use and decided to open source.
Feedback welcome.
https://code.visualstudio.com/blogs/2017/02/12/code-lens-rou...
100% vibe-coded, WASM-based. It runs Doom so far.
I'm building that because I'm building a free-to-use vibe coding platform (à la Lovable), but the entire world of all apps is too hard to one-shot with the current set of free LLM models... So now I have something it can target.
I've spent a lot of time with free models because I use them heavily in my VibeProlog project (https://github.com/nlothian/Vibe-Prolog/), which is actually in pretty good shape now. Next step is to rebuild the parser so it can load libraries in a somewhat reasonable way.
The pilot is being released this month in partnership with the newspaper of record and local radio stations that cover four rural counties in the New York Catskills.
It's a modular platform that wraps current workflows rather than replacing them. If the pilot shows promise, we'll move forward with offering it to other regions using the same type of partnership model.
Here's some of the features it includes:
News & Media: Articles, Newspaper Archive, Videos, Photo Albums, Breaking News, Weather, Aggregated RSS Feeds, Local Radio Stations
Community: Events Calendar, Local Forums, Polls, Local Connections, Private encrypted messages.
Print-to-Digital Ad Integration: Sales tools for small publisher ad sales teams. Ad project workflow including designers. QR Codes link to SEO-friendly, shareable digital companion ads
Marketplace: Local market with listings that contains one or more offers. Cart and checkout with payment processing. Baked in discounts and rewards. Digital ads can link directly to marketplace offers.
AI Automation Tools: Wrap existing workflows rather than replace them. Operational efficiency gains.
It's easy to spin up customized apps for local communities, and the stack also includes an API for integration into existing apps.
All of it is hosted on local hardware in the service area.
Why? Because why not? I love CAD.
It's still very much in the early stages because it's just a side project for me, but it is possible to draw lines and circles, copy or move them, delete, snap, create text, and a couple of other basic things.
I wanted to have a more difficult project and learn more about AV and Liquid Glass. I try not to vibe-code. I keep it 100% free for now, I'm looking for power users and retention. I won't monetise until I feel that this is a helpful tool.
https://apps.apple.com/us/app/kinevision-video-coaching/id67...
Build Enterprise-grade applications with real backends - APIs, databases, auth, and business logic. Secure, scalable, and ready to launch in minutes.
Imagine v0, Lovable, but it actually does real backend that scales. Not prototypes. Real production quality code. The philosophy is simple - Algorithms first, AI second. Opinionated code always.
AI is decent at following instructions, but bad at being consistent, reliable. Fortunately, those problems are solved by algorithms and opinionated architecture.
The codebase generated uses Elixir for backend, Svelte for frontend. It uses best practices and opinionated algorithms to consistently produce reliable code that works the first try, doesn't waste your tokens and generates exactly what you want.
You also get finer controls - drag and drop visual designer and an ERD diagram editor (if you really want to architect the backend yourself). Code export is available as well.
Edit: I got tired of AI models trying to write code that never ends up being production worthy and always required a complete re-write. I wanted to build an interface where users could "vibe code" an entire production grade app where everything end to end is taken care of - including hosting. As an avid startup fanatic myself, I love building Saas products and thought something like this would allow me to go to market faster and test the waters.
Thanks in advance for trying it out!
Got frustrated with Apple's Fitness app being too basic and third-party apps creating anxiety with made-up "recovery scores." Wanted something in between: proper training load tracking without the daily judgment.
Uses 42-day EWMA for chronic load, 7-day for acute. HR-based rather than rTSS since most people don't know their lactate threshold. Zone breakdowns, route comparisons, the stuff that actually helps you see if your training is working.
Free, no ads: https://apps.apple.com/app/id6749277384
Would love feedback from anyone who trains with an Apple Watch.
Main Projects: 1. cyberbrain ( https://github.com/voodooEntity/cyberbrain ) - a Golang-based architecture for writing event/data-driven applications. It is based on an in-memory directed graph storage (I also wrote https://github.com/voodooEntity/gits). The point of the system is that instead of writing code where A calls B calls C calls D ... you define single "actions". Each action has a requirement/dependency in the form of a data structure. If a matching structure is mapped into the graph storage, it automatically creates singular payloads for such action executions. The architecture is multithreaded by default, meaning all "jobs" are automatically processed in parallel without the developer having to care about concurrency. Also, since every "thread/worker" also schedules new "jobs", the system scales very well with a lot of workers.
Why? Well, I mainly developed this architecture for the next project I'm listing.
2. Ishikawa: an automated pentesting/recon tool. Ishikawa does not try to reinvent well-established pentesting/recon tools; instead it utilizes and orchestrates them. The tool consists of actions that either do very simple things like resolveIPFromDomain, or actions which utilize existing tools like nmap, wfuzz, etc. It collects the info in the central graph, and at the end you get a full mapping of your target. Compared to existing solutions it does a lot fewer "useless scans" and just fires actions which make sense based on the already gathered data (we found an https port, so we use sslscan to check the cert...).
3. Gits (as mentioned above): a thread-safe in-memory graph storage. While I don't plan many changes to it, it was developed for cyberbrain, so if I need any additions I'll make them; I'm also planning to re-enable async persistence.
Regarding Ishikawa: while I'm still working on this project, it may be that I will shut it down. I had a rather expensive meeting with a lawyer who basically told me that open sourcing it while being a citizen of Germany would potentially open up A LOT of trouble. Right now I'm not sure what the future will bring - I basically spent 10 years developing it, starting with gits, then cyberbrain, to finally build the tool I was dreaming of. Just to hide it on my disk.
Sideprojects:
1. go-tachicrypt ( https://github.com/voodooEntity/go-tachicrypt ) - It started as a fun project/experiment: a very simple CLI tool which lets you encrypt file(s)/directory(ies) into multiple encrypted files so you can split them over multiple storages or send them via multiple channels. I'm planning on hardening it a bit more and giving it basic support.
2. ghost_trap ( https://github.com/voodooEntity/ghost_trap ) - A very small project I recently put out, nothing too serious but kinda funny and maybe useful to someone. It provides: a GitHub Action that injects polymorphic prompt injections at the bottom of your README.md so LLM scrapers may be fended off, and a JavaScript snippet that injects polymorphic prompt injections into your HTML so more sophisticated crawlers (like Google's, which execute JavaScript) may also be fended off.
While I'm working on a lot of other stuff, these are, I think, the most relevant.
A bit like DraculaTheme.com
https://tomaytotomato.github.io/jensen/
https://github.com/tomaytotomato/jensen
The plan is to get a Chrome and VSCode theme shipped this week.
"I never asked for this...."
Now I've been piecing together a complete desktop environment using niri and many other tools. If you're using Arch, you can opt-in to turn this feature on. There's an install script which can be run on a new system or existing system, documentation on each package and more at https://github.com/nickjj/dotfiles. I'm using it on 2 systems at the moment.
Basically, just use it to identify artworks from a photo, then return a pre-generated AI-Audio for that artwork based on the data on their site. I put up a basic live version on my site for The National Gallery in London: https://victorsantiago.me/audioguide/national_gallery.html
(If you want to try it, you can open it on your phone and take a picture of some painting on the National Gallery Website, e.g.: https://www.nationalgallery.org.uk/paintings/vincent-van-gog... )
It was pretty fun to try it out and play with different LLMs and TTS models to generate the output. Might make it a proper web app some time soon!
Use case (assumption: you have access to your friend's visitor parking login in Amsterdam):
You are going to a restaurant or visiting a place near their parking zone (a geo-fenced polygon). You want to pinpoint a spot on the map, drive to it, be 100% sure that you can park there, automatically pick a meter near the spot, and park almost instantaneously. Then this app is for you :D
Update: Recently updated this to work with the new APIs, and in the process updated the UI as well (slightly more modern).
The real constraint on planes these days is elbow room. That got me wondering: could a small, handheld keyboard and trackpad setup make in-flight work tolerable?
After failing to find anything compelling on Amazon, I realized something obvious: my iPhone already has a great keyboard and touch experience. So why not use it directly?
I looked for existing apps, but the top options felt dated and required both devices to be on the same Wi-Fi network—which isn’t always possible (or desirable when paying ridiculous prices for airplane Wi-Fi).
So over the last few days I’ve been tinkering with a project I call Magic Input. It turns your iPhone into a wireless keyboard and trackpad for your Mac.
How it works (high level):
• The iOS app discovers nearby Macs using MultipeerConnectivity
• Keyboard input and touch gestures are streamed directly to macOS
• The macOS app injects events at the system level (requires Accessibility permissions)
• No shared Wi-Fi network required; devices connect peer-to-peer
It’s very early, but already supports basic typing and cursor control—especially useful in cramped spaces like planes.
Here’s the TestFlight link for the brave. You’ll need to install the same app on both macOS and your iPhone:
https://testflight.apple.com/join/T1PgucDs
Happy to answer questions or dig into implementation details if anyone’s curious.
Here's the elevator pitch for the framework:
It's built around 3 key ideas from things I've dealt with inside the agent ecosystem:
1. Agents become far more capable when they have access to a CLI and can create or reuse scripts, instead of relying solely on MCP.
2. Multi-agent setups are often overvalued as “expert personas”, but they're incredibly effective for managing context; A2A is the future.
3. Agents are useful for more than just writing code. They should be easy for non-engineers to create and capable of providing value in many domains beyond software development.
If that sounds interesting take a look! https://github.com/brycewcole/capsule-agents
Stack overview:
Networking — Rust-powered WebSocket + RUDP with fast ring buffers — https://github.com/bugthesystem/Kaos
Game Server Framework — Rust-powered Matchmaking, Lua runtime with hot-reload, leaderboards, storage, chat, rooms, social features (friends, etc.)
SDKs — Rust, TypeScript/JavaScript, Unity, Godot, Defold
Game Studio — AI-assisted builder (similar to Lovable.dev), Monaco-based Lua/HTML IDE, live preview with hot-reload for both backend and frontend, one-click deploy
Early results are promising. Would love feedback from folks who've worked on similar problems or have thoughts on the approach.
Planning to spin up a Discord and public playground in a week or two for anyone interested in early access.
– Sia
An application to read Rigveda Samhita, and potentially other old indic texts in future. Inspired by University of Cologne's vedaweb.
I’m getting ready for the official launch, but while I work on expanding the library beyond the current collection of Aesop’s Fables, I’m opening up a beta.
If you’re on iOS and wouldn't mind helping me test it out, I’d love it if you had a look: https://verva.ai/en/
I suspect it has to do with agentic coding assistance giving folks who would otherwise not have the means to develop a game the ability to do so.
As an engineer, it is hard to know when you can put your head down and work when you have multiple meetings. This removes the overhead of planning my morning.
- https://din.tetri.net – a personal finance manager to log income and expenses and see everything in simple charts, making it easier to understand where the money goes and to follow each month more clearly.
- https://rax.tetri.net – a group expense splitter for trips, dinners and events, where you can add shared expenses and it shows exactly how much each participant owes to each person, avoiding manual settlements.
- https://vex.tetri.net – a vehicle expense tracker to keep track of fuel, maintenance, taxes and other car-related costs, giving a more realistic view of the total cost of owning a vehicle.
Right now I’m still exploring which pricing model makes more sense (monthly or yearly subscription, free plan with limits, etc.), so there are no prices defined yet.
I’d really appreciate feedback from the community on:
- Whether each product’s value proposition is clear.
- Which features you consider essential in each case.
- Pricing models that you think would work best for these niches.
The core problem: there are now 45+ AI models for visual content (Kling, Sora, Veo, FLUX, Imagen, GPT-4o Image, etc.) and they each have different strengths. We aggregate them into one canvas where you can:
- Generate images across 21 models and compare outputs side-by-side
- Convert images to video with 19 video models (Kling 2.6, Veo 3.1, Sora 2, Seedance)
- Use Storyboard mode to plan multi-scene videos shot-by-shot with consistent characters/assets
- Upscale, remove backgrounds, inpaint/outpaint — all in one workflow
Main use case is e-commerce product content: lifestyle shots, 360° product reveals, unboxing videos. Upload a product image, generate variations across models, animate the best ones, export in platform-ready formats.
You put queries in a file `app/queries/foo.sql.erb`. Casting works and can be customised. There's ERB-support for parameterisation, with helpers like order_by and paginate.
The gem can parse and rewrite CTEs, which allows for 1) rewriting the query so that basic queries (first, take, count, etc.) are performant (Rails does `resultset.count`), and 2) if your query contains CTEs, writing tests per CTE.
The goal is basically what's listed on the Elm website. No runtime errors, fearless refactoring, etc. But also improved accessibility for developers who want to create a native "Linux" app. IMO Linux should be so accessible and so amenable to rapid prototyping that it is the default choice when building a new GUI app.
What it does:
File Explorer: Browse your project files and open them directly in your terminal/editor
Git Operations: Checkout and run any branch/tag directly on a device
Gradle Tasks: Run assembleDebug, installDebug, or custom tasks with streaming output
Logcat Viewer: Live filtering by level, tag, or search text
Activity Tracker: Monitor the current foreground activity
ADB Tools: Clear data, uninstall, manage permissions, input text, take screenshots
Visual Testing: Screenshot comparison tool to catch UI regressions
Terminal Integration: Open files in iTerm2, Terminal, Warp, etc.
https://saltserv.gumroad.com/l/adbremote
Feedback welcome—especially if you find yourself avoiding Android Studio for quick device tasks.
PX is a daily developer tool that helps backend engineers go from working code on a laptop to deployed code in a freshly-built cloud cluster -- all within seconds.
In December, I wrote up a launch blog post:
https://amontalenti.com/2025/12/11/px-launch-overview
We also launched the PX website, https://px.app/, and we wrote up a basic developer quick start guide @ https://px.app/docs/quick-start.html
Prior to PX, I was the founding CTO of Parse.ly, a real-time web analytics startup that grew to be installed on 12,000+ high-traffic sites and had terabytes of daily analytics data flowing through it. PX stems from my experience as a startup CTO who eventually ran large distributed systems on AWS and GCP.
PX is cloud independent, programming language agnostic, and open source friendly. PX is, in short, the backend development tool that I always wished my team could have. We're having a blast building it and we're excited to give back some power to backend developers so they can wield cloud hardware resources with open source tech, rather than locking in to proprietary cloud APIs.
The current version of the CLI is focused on one-off (or batch) workloads on GCP, but on the immediate roadmap: cron-style scheduled jobs; a v1 of our monitoring/debugging/admin dashboard (already looking good in internal builds!); and, formal support for the other 3 clouds (that is: AWS, DigitalOcean, Azure). We also have a lot more documentation to write and a lot more examples to post, but you have to start somewhere! The launch blog post covers some of the history and inspiration.
- connect your github and deploy from web/cli/vs code/cursor
- static sites automatically detected and deployed on a CDN
- logging
- encrypted env vars
- built in monitoring
- custom domains
- auto deployments from new branches or commits
- multi-branch deployments
Hoping to launch in the next 2 weeks, we're busy deciding on a name and domain.
For the moment, I'm trying to design a shell for my car key fob. I keep leaning on the buttons and setting the car alarm off; I also never need to touch the buttons on the fob, because the car has a button for me to lock / unlock it as long as the fob is in my pocket.
I'm sure it's a trivial project for someone who has experience with CAD and 3d printing; but for me, where my hobbies in my adult life involved either programming or woodworking, it's a new adventure.
---
I tried using Microcad, which was on HN about a month ago. I really like it, but it's still too early for me to design what I need to design.
The next step is to explore building a local LLM into the application itself to skip over the Google part entirely. I want to implement some question/answer features that I THINK could be solved with a local LLM integration. But for now it's just a quick app to help you find coffee.
iOS - https://apps.apple.com/us/app/nyc-coffee-map/id6755573635
Android - https://play.google.com/store/apps/details?id=com.parkasoftw...
I've spent a lot of time combing through Rightmove and Zoopla getting an idea of housing costs in different areas, now I can do it in seconds.
Please give it a test run! I'm looking for constructive criticism on how to improve the detection thresholds and user experience.
The goal is to build an instant, feature rich but simple system.
We’re taking a local-first approach with electric as a sync engine. All interactions against our own apis are instant with optimistic updates. Making the app feel more like a native app rather than the typical web app.
Taking this approach has led to some interesting challenges, in both technical and product areas. Some features require third-party roundtrips (summaries, labelling, embedding, etc.), which breaks the "instant" system. We're experimenting with ways to design those flows so the system still feels fast. Currently we only have one "loading" request in the entire system.
Having all the data available on the client has also led to some neat benefits. No need to compose or build your typical GET endpoints in the API. You just query the client DB directly!
If you have experience with any of: go, ts, react, local-first or early stage companies and want to be a part of the team building this, feel free to reach out (info in bio)
super cool idea btw
I've spent the last year working with news organizations to improve their online presence, and a lot of my job has ended up being translating their ideas into WordPress clicks using Claude or ChatGPT. They can't take advantage of products like Lovable because the resulting code won't work with the CMS their staff is used to.
Email me if you'd like free access! seamus@presspass.ai
What I am most proud of is that I got to the solution in the course of approximately 1 week of working on this!
Started as a side project but has become my full-time focus since leaving a FAANG job ~6 months ago.
So far we've added:
- Code typing practice with any language: https://typequicker.com/code-typing-practice
- SmartPractice: analyzes your history stats to find weak areas and generates exercises for them: https://typequicker.com/app/text
- TargetPractice: lets you interact with any of your stats; for example, clicking on a certain bigram that you typed slowly will create a natural practice text that targets that two-character sequence
- TypeAnything: lets you create a typing exercise about anything; AI for typing, pretty much
- Advanced stats: we measure every character, 2- and 3-character sequences (delay to click in ms), every word, and even break down speed/accuracy per finger (a small sketch of the bigram timing idea follows below)
- Real-time hand/finger indicators: show you exactly where to place your fingers based on standard touch typing practice
- Keyboards supported: QWERTY, QWERTZ (German) ISO, British ISO (adding Dvorak and Colemak soon).
Command languages are underrated, and being able to add a Bash-like REPL for agents and users alike is something I want to see more of.
Feather solves the "rest of the Owl" problem: agents/users get loops, conditionals, and online help for free from Feather; as the embedder, you only need to add application-specific commands.
Feather itself has no GC (uses the host's), no own data structures (uses the host's, e.g. JavaScript arrays for lists in the js version), and no I/O.
The core is implemented in stdlib-less C
I have a version 1.0, and I'm now working out how to sell and market it in a crowded product space!
The philosophy behind it is: instead of providing a bunch of tools to the LLM, you simply provide a single tool: run_python(). The agent just generates code to do whatever it needs: inspect local files, carry out edits, run commands.
https://github.com/flipbit03/caducode
It worked surprisingly well, even with a very small 30b local model.
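The core loop is roughly this (heavily simplified, with a placeholder llm() call; the real agent should sandbox execution rather than exec'ing model output directly):

    import io, contextlib

    def run_python(code: str) -> str:
        # The single "tool": execute model-generated code and capture its stdout.
        # (Unsafe as-is; a real agent would sandbox this.)
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(code, {"__name__": "__agent__"})
        except Exception as e:
            buf.write(f"ERROR: {e!r}")
        return buf.getvalue()

    def agent_loop(llm, task: str, max_steps: int = 10):
        # llm(messages) -> str can be any chat-completion call, e.g. a local 30B model.
        messages = [
            {"role": "system",
             "content": "Reply ONLY with Python code; its stdout is fed back to you. "
                        "Print DONE when the task is finished."},
            {"role": "user", "content": task},
        ]
        output = ""
        for _ in range(max_steps):
            code = llm(messages)
            output = run_python(code)
            if "DONE" in output:
                break
            messages += [{"role": "assistant", "content": code},
                         {"role": "user", "content": output}]
        return output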
The goal was to build something as fast as a native app but with the convenience of a web link. Some of the technical bits:
* Instant Streaming: Recipients can start downloading a file the moment the first chunk leaves the sender's computer. No waiting for the full upload to finish.
* Bespoke Chunking Protocol: To handle "unlimited" file counts (tested into the millions), we group small files into larger chunks and split massive files down, so many small files and a few large files get transferred at similar speeds (rough sketch of the packing step after this list).
* Auto-Resume: Both uploads and downloads automatically resume from the exact byte where they left off, even if you switch networks or close/reopen your laptop.
* E2EE via Web Crypto API: Everything is AES-GCM-256 encrypted. The secret key stays in the URL fragment (after the #), so it never gets sent to our servers.
* Zero-Knowledge Auth: We use the OPAQUE protocol for logins, meaning we can authenticate users and store their encrypted Data Encryption Keys (DEKs) without ever seeing their password or having the ability to decrypt their files.
* Passkey + PRF: We support the WebAuthn PRF extension. If your passkey supports it (like iCloud Keychain or YubiKeys), you can decrypt your account's metadata without the password.
* On-the-fly Zipping: When a recipient selects multiple files, the browser decrypts and zips them locally in real-time as they stream from our server.
* Performance: Optimized to saturate gigabit connections (up to 250 MB/s) by maintaining persistent streams and minimizing protocol overhead.
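To make the chunking idea concrete, here's a toy version of the packing step (the 8 MiB target is made up; the real protocol also handles resume offsets, ordering, and encryption on top of this):

    CHUNK_TARGET = 8 * 1024 * 1024  # illustrative chunk size, not the real value

    def plan_chunks(files):
        # files: list of (name, size_bytes). Small files are grouped together and
        # large files are split into byte ranges, so every transfer unit is
        # roughly CHUNK_TARGET bytes.
        chunks, current, current_size = [], [], 0
        for name, size in files:
            if size >= CHUNK_TARGET:
                for offset in range(0, size, CHUNK_TARGET):
                    end = min(offset + CHUNK_TARGET, size)
                    chunks.append([(name, offset, end)])  # slice of a big file
                continue
            if current and current_size + size > CHUNK_TARGET:
                chunks.append(current)
                current, current_size = [], 0
            current.append((name, 0, size))  # whole small file
            current_size += size
        if current:
            chunks.append(current)
        return chunks

    # Three tiny files end up in one chunk; a 20 MiB file becomes three chunks.
    print(plan_chunks([("a.txt", 1000)] * 3 + [("video.mp4", 20 * 1024 * 1024)]))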
Everything is hosted in Germany (EU) for GDPR compliance. I'd love to hear any feedback on the streaming architecture or the OPAQUE implementation!
As I've grown older, I've found myself more interested in people and their stories and motivations - especially as I know a bunch of people who are technically skilled, but feel unable or unworthy to create instead of consuming. So it's inspiring to hear really great creators talk about those same burdens and how they overcome them.
If this sparks your interest shoot me an email at bobbie @ (site above in comment)
The instacart button is kind of crazy. I'll casually watch videos and when something looks good, I can have the ingredients to make it delivered within the hour.
My cousin has a large collection of CDs, and the player is being built around his listening habits (listening to a disc from start to finish at home, individual tracks when on the go).
Before this, I dumped all my personal notes, ideas, journal entries, etc into Apple Notes -- I liked the simplicity. But as LLMs get smarter, I wanted to have them read my notes and proactively help move things forward.
Right now most of the early users use it as a journal. On a whim I shipped a feature a few months ago where the LLM writes you a "letter" every Sunday based on the past week's notes. That's ended up being a really popular feature.
https://github.com/iszak/jpeg2000
I wanted to learn about wavelet transforms. I also wanted to use Rust in a non-trivial capacity. Since JPEG 2000 uses discrete wavelet transforms [1] and doesn't have a Rust implementation, it caught my interest.
[0]: https://en.wikipedia.org/wiki/JPEG_2000
[1]: https://en.wikipedia.org/wiki/Discrete_wavelet_transform
I noticed that when trying to work with Claude Code I burned through my token allocation really quickly, and with parallel agents it was no better. But when I looked at the other repos, things felt really complex, for me at least.
I call it "contextd" and imagine it kind of like a context daemon you can always query in your repository for implementation plans, specs, code, etc. I always liked that git lived in your repo in the .git directory, so I wanted this to work like that too, but without third-party hosted databases or multi-repo pollution.
It's written in Python at the moment and uses some local models and LanceDB, but I might have it rewritten in Rust for speed :)
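Roughly, the idea (heavily simplified, with made-up table and model names, and the LanceDB API written from memory) is a vector index living next to your code:

    import lancedb
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # any local embedding model
    db = lancedb.connect(".contextd")                # lives inside the repo, like .git

    def index(docs):
        # docs: list of {"path": ..., "text": ...} covering plans, specs, code chunks.
        rows = [{"vector": model.encode(d["text"]).tolist(), **d} for d in docs]
        db.create_table("context", data=rows, mode="overwrite")

    def query(question, k=5):
        table = db.open_table("context")
        return table.search(model.encode(question).tolist()).limit(k).to_list()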
This is an HID remapping device: it does things like converting keyboard input to joystick input, or adding macros to specific inputs.
The usage is:
- It has 1 USB in and 1 USB out.
- In programming mode it's a USB drive you can put LUA scripts on (or pull out log files).
- In run mode you can select a LUA script to run.
- Scripts can read incoming mouse/joystick/keyboard data and generate outgoing mouse/joystick/keyboard data or even just log events onto a CSV file.
The code is functional, I still want to polish it and add fun animations and stuff, but for now it works as described above.
This month has been mostly mired in a PCB rework and assembly in the US. Assembly with JLCPCB is too expensive for now, so I have been spending the last few weeks sourcing components in an attempt to keep costs down. I also changed the PCB a bunch to make assembly and case design easier.
While I don't really plan on selling this, I wanted to challenge myself and make it sellable by keeping costs really low. This has been the real challenge and time sink recently.
It is fully open source; once complete, I will export a parts list and Gerber files so people who want to make one can just use those files.
Unfortunately, USB is hot garbage imo. The USB-IF really screwed up USB, especially with all the power negotiation bullshit that has to happen now. Thanks to their ever-changing USB 3.x super premium plus ultra megaspeed crap, I ended up giving up on a multi-year USB project. Because of their changes, I would have had to change my entire build and MITM USB devices, which I explicitly built my device to avoid. And then there's the USB-C cable, the stupid negotiation process, and the active power cable crap, just... ugh, I hate USB after going down that rabbit hole.
Joking aside, I'm working on a mobile app side project for a client that involves camera vision/machine learning in a heavy-trucking maintenance context. Very low-ego, blue-collar stuff with a blend of modern tech. That, and game dev in the Godot engine, has brought back the passion for coding I had lost over the last several years.
http://andeplane.github.io/ai-data-analytics
All vibe coded ofc!
It's been fun to use it as I bring it to life.
A Django backend and a Flutter front end make for a pretty powerful and adaptable pair.
I printed the main pedal frame and base with PETG, and I found some M5 hardware on Amazon for pretty cheap. The pedals use a hall-effect sensor to measure the proximity of the pedal to the frame base. They're wired to ESP32 ADC ports, and I wrote a simple USB HID device with TinyUSB and ESP-IDF that mounts as a generic game controller; good enough for my case. I saw some designs for a load-cell brake pedal, but I wanted to do it as cheaply as possible first, so I found a stiff spring. Big inspiration was cncdan's design.
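The firmware's core job is just mapping a raw ADC reading onto an HID axis; sketched in Python for readability (the real code is C on ESP-IDF, and the calibration numbers here are made up):

    ADC_MIN, ADC_MAX = 150, 3900   # example calibration: at-rest / fully-pressed readings
    AXIS_MAX = 255                 # 8-bit HID axis

    def pedal_axis(raw: int) -> int:
        # Clamp a 12-bit ESP32 ADC sample to the calibrated range, then scale it.
        raw = max(ADC_MIN, min(ADC_MAX, raw))
        return (raw - ADC_MIN) * AXIS_MAX // (ADC_MAX - ADC_MIN)

    assert pedal_axis(100) == 0 and pedal_axis(3900) == 255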
They feel great - I borrowed some old Logitech pedals to compare the feel, and these are much better! I think all in I spent ~$40 USD on raw materials (spring, hardware, filament, hall sensors + wire, ESP32) and a weekend of time.
Trip Replay (https://tripreplay.app) - A client-side travel map animator where I successfully got the AI to implement complex D3 projections and WebCodecs logic.
Krypto Markets (https://krypto.markets) – A crypto dashboard built purely in "Agent Mode" to test how fast I could ship a data-heavy UI.
Gez.la (https://gez.la) – My old COVID-era open source virtual tour database project that I used agents to fully refactor and modernize from a legacy stack.
The idea is to avoid the cancel / resubscribe dance and make things feel fairer for people with tighter budgets.
Not sure if this is naive or practical — has anyone tried something similar?
Managed to make long dictations, even >10 minutes, appear in <2 seconds by pushing what is possible with current STT models.
All processing is done locally, with zero network calls.
I recently built a small experimental web app where users engage in a symbolic ritual to reflect on their digital habits. It's inspired by Eastern philosophy and aims to help people think about their time and intentions—no ads, no tracking, just reflection.
If anyone is curious, it's here: https://stillmarkapp.com
I am not even in school anymore, but I always dreamed of something like this when I was.
Listen to the Description. https://commercialherschel.substack.com/p/episode-000-promo-...
So far it's going well; I really need some success for better tech and comms.
Native macOS and iOS apps with OpenRouter BYOK. The same quality as in proprietary products for $1-3 per month instead of $35.
After the logic for capturing and double-buffering the CC's output cut flicker by about 97.5%, I created a FIPA ACL messaging MCP bridge for all three CLI tools and wrapped it in an IRC-like chat interface. Now all three tools can communicate with each other, and this works surprisingly well if you give them roles or parts of tasks.
It's all local to the terminal interface: no remote servers, no API keys, just one wrapper for local terminal multiplexing and inter-agent communication.
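For the curious, a FIPA ACL message is basically a performative plus addressing and content fields; the bridge passes around something shaped roughly like this (illustrative only, not the exact schema, and the tool names are examples):

    from dataclasses import dataclass, field
    import uuid

    @dataclass
    class AclMessage:
        performative: str              # e.g. "request", "inform", "agree", "failure"
        sender: str                    # e.g. "claude-code"
        receiver: str                  # e.g. "gemini-cli"
        content: str                   # the actual task or report text
        conversation_id: str = field(default_factory=lambda: uuid.uuid4().hex)
        in_reply_to: str | None = None

    msg = AclMessage("request", "claude-code", "codex-cli",
                     "Review the latest diff and report any flicker regressions.")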
I think of it as recursive transfer of human expertise into skills.
I know this is vague and probably raises more questions than it answers. I will share more once it reaches a semi autonomous working state.
The site offers a mix of nostalgic titles and new casual games that people can play instantly in any browser without downloads, whether on school/work networks or on personal devices. I’ve been focusing on polish, responsiveness, and a fun vibe, and I’m interested in seeing how players engage with different genres (from puzzle to platformers) directly in the browser.
Would love to hear feedback on the idea and potential features people want to see next.
I got to a pretty stable scanning flow using Gemini 3 flash.
I am now working on a game, written in Java.
I'm a 'vibe coder' returning to the IDE after 10 years. I spent the last 60 days managing a multi-agent stack (Codex + Antigravity) to get the perceptual color math right for P3 displays. The agents were great for the UI, but I had to manually step in when they started 'hallucinating' HEX values instead of wide-gamut OKLCH. It’s a niche tool for the Tailwind v4 ecosystem.
It’s called Zadan (https://zadan.app), named after the Slavic word zadaniye (“assignment”). The goal is to keep it free and focused on getting tasks done, with only a few practical extras like file or image attachments for context which I am working on now.
I’ve only spent a few days on it, but it already feels solid and was even helpful for planning my wedding! Feedback welcome.
Fast Classifieds (https://travisbumgarner.dev/marketing/classifieds) A lot of the jobs I'm looking for are only on smaller companies' job boards. It used to take me an hour or two each time I wanted to browse all the sites I have bookmarked; I made a little app that automates the process and cuts it down to about 5 minutes to look through the new posts.
Always wanted to create some fast paced game! Finally got to start it!
I will definitely evolve it over time.
All done in my spare time with the help of agentic coding!
Here is a quote repository pet project that I'm having fun with: https://crucialquote.com/
And a writing prompt generator from years ago that I resurrected and still love: https://writecomfy.com/
Thanks for checking it out.
because I was annoyed by how managers do 1:1s or yearly reviews
I always had the same problem with journaling. My best thoughts always came to me at inconvenient times like when I was on the subway or with friends.
So about a year and a half ago I hacked together a journaling solution for myself: I set up a Twilio phone number and a server that received the messages, logged each one to a txt file, and committed it to a GitHub repo. It was extremely simple, but it worked surprisingly well for me. The friction was low enough that I actually used it every day... and I had perfectly timestamped journal entries with full version control.
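The whole original setup is small enough to sketch; roughly this (Flask chosen here just for illustration, and Twilio's webhook delivers the text as the "Body" form field):

    import subprocess
    from datetime import datetime, timezone
    from flask import Flask, request

    app = Flask(__name__)
    LOG = "journal.txt"   # assumed to already be tracked in the repo

    @app.route("/sms", methods=["POST"])
    def sms():
        entry = f"{datetime.now(timezone.utc).isoformat()}  {request.form['Body']}\n"
        with open(LOG, "a") as f:
            f.write(entry)
        # Commit the updated journal so every entry is versioned and timestamped.
        subprocess.run(["git", "commit", "-am", "journal entry"], check=False)
        return ("", 204)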
Over time I wanted photos and voice notes, plus a better way to visualize everything. So I rebuilt it into something more robust that other people can actually use.
It started while I was studying for JLPT N4 — I wanted adaptive furigana that hides readings for kanji I already know, so I could focus on real text without constant lookups. That shaped the core architecture: level-based furigana, offline dictionary (JMDict), and custom tokenization logic with IPADic for accuracy.
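The adaptive part boils down to a filter roughly like this (simplified; the real app works on tokenizer output and per-level kanji lists):

    # Kanji the user is assumed to already know at their level (illustrative subset).
    KNOWN_KANJI = set("日本人時行見")

    def needs_furigana(surface: str) -> bool:
        # Show a reading only if the word contains a kanji outside the known set.
        return any("\u4e00" <= ch <= "\u9fff" and ch not in KNOWN_KANJI
                   for ch in surface)

    def annotate(tokens):
        # tokens: list of (surface, reading) pairs from the tokenizer.
        return [(s, r if needs_furigana(s) else None) for s, r in tokens]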
Some interesting challenges have been furigana edge cases, Safari text paste quirks, and balancing offline performance with accuracy.
Yomu is live on the App Store now, and I’m writing about the problems that led me to build it on my blog: https://blog.kulman.sk/japanese-reading-problem/.
# -- Not related to the thread, but if anyone is looking to hire a developer or knows of opportunities, I was recently let go and am actively searching. Any leads or feedback would be greatly appreciated.
# Reference CV : https://docs.google.com/document/d/13yYXN_QM-JmGewx9DWC765pp...
- https://hn-games.marcolabarile.me/
Please let me know if this looks useful to you!
It managed to clear some pentesting benchmarks recently, so I'm very excited to put it through the fire against some real websites soon.
Please let me know if you have any recommendations for cool potential targets.
Get a feel for it here: https://app.pitchwise.se/v/founder-pack
Recently shipped some small improvements (cache layer, email i18n), but the hardest part so far has been SEO.
This is my first time taking SEO seriously, and it’s been humbling — DR is still ~12 and trending down.
One thing I’ve found useful: comparing backlinks from Ahrefs’ free checker and Bing Webmaster Tools.
Still early, but shipping and learning steadily.
It bundles and executes a static HTML pipeline in a Node VM running within the Vite process, then outputs the result to a folder with an index.html for each of the .md and .tsx files in your SEO source folder. The result does not rely on JavaScript or fetching content to render; it's static HTML, and there's no rehydration step.
This allows it to reuse your main app layout, headers, footers, etc.
I built it for a one page tool where I wanted crawlable links in the footer served from a standard nginx setup.
I’m building a platform that helps sellers manage, improve, and scale their marketplace operations from a single place, with a strong focus on time savings and decision clarity.
Current MVP:
- Product listing scorecard and analysis
- AI-generated product images
- Product description generation (final testing phase)
Early stage, actively iterating with real sellers and shaping the roadmap based on usage and feedback.
The platform I am building allows users to launch Spark on Kubernetes in their own AWS account without adding any markup to the CPU/memory cost of the EC2 instances. For example, AWS's EMR offering adds a 25% markup on top of EC2 instance pricing, and Databricks' markup is even higher, ranging anywhere from 30% to 100%.
To learn more, check out the docs for the project at https://docs.orchestera.com/
A Ruby interpreter implemented in Clojure, with support for GraalVM native-image compilation (very initial stage, but it can run some Ruby code already; check out the examples directory).
Just a playground for playing with Clojure and interpreters.
A recipe app you will actually want to use. No bloat, no ads, very minimalistic, but everything works well and bugs get fixed.
Why? Because most recipe apps and websites are frankly painful to use. I am trying to create the absolute best cooking/recipe experience possible. Something that just works.
I've been using it myself for a few weeks and it's helped me actually read newsletters instead of ignoring them in my inbox.
I had the need for such a service back when I was working on ImprovMX.com. The checking system I had implemented back then wasn't efficient and had a few false positives.
At the time, I thought the market for such a need was very small, so I didn't bother. But now that I'm working on https://getfernand.com, I realize I also need it. And with more and more products appearing thanks to AI and the enthusiasm around tech, the market is growing, so it might be worth the effort to release it as a side project.
I took it as a challenge on two simultaneous fronts: technical and AI.
On the technical side, the aim is to have a highly efficient script able to process millions of DNS records in the lowest time possible. I had to rewrite the script a few times to see where I could optimize it, but my tests land me at around 500 QPS for now.
I'm testing against Cloudflare's DNS, and I know they have a limit of 1,500 QPS, so I can't go much higher (per server); once I reach that limit, I'll grow horizontally.
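The throughput game is mostly about keeping many lookups in flight at once; a stripped-down sketch (aiodns and MX lookups chosen here for illustration, not necessarily what the real script uses):

    import asyncio
    import aiodns

    async def check_mx(resolver, sem, domain):
        async with sem:
            try:
                records = await resolver.query(domain, "MX")
                return domain, [r.host for r in records]
            except aiodns.error.DNSError:
                return domain, []   # no MX record, NXDOMAIN, timeout, ...

    async def check_all(domains, concurrency=200):
        resolver = aiodns.DNSResolver(nameservers=["1.1.1.1"])
        sem = asyncio.Semaphore(concurrency)   # cap in-flight queries to respect QPS limits
        return await asyncio.gather(*(check_mx(resolver, sem, d) for d in domains))

    results = asyncio.run(check_all(["example.com", "improvmx.com"]))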
For the AI side, my approach was as follows: use a standard web framework (Sanic in my case), build the API with a proper database structure, and, once I'm satisfied with it, ask Claude Code to write the OpenAPI spec file.
Once I had it, I used Mintlify to write the docs, with Claude's help for nicely written guides (rate limits, authentication, etc.). Then I asked Claude to generate the landing page and the dashboard system.
So, except for the API and the core of the service, the other parts were mostly written by AI (I fixed some issues and improved the code Claude gave me, but that's all).
Less than three weeks later, the service is almost ready to go live. I still need to clean up the code in the dashboard and finalize the touches on the API server, but it can already be used (and it is used by us at Fernand).
Feel free to share your comments :)
Always on the lookout for new sources to add. If you have a blog, or read any that you'd like to recommend, just let me know.
My project is just for reading individual blog posts from fellow developers. There are also a lot of filtering capabilities.