Main focus is https://wheretodrink.beer, collecting and cataloging craft beer venues from around the world. No ambition of being exhaustive, but aiming for a curated and substantial list. After the last thread, a bunch of people added their suggestions, thanks! It helped add interesting new venues from cities I hadn’t covered yet.
I’m very slowly layering on features, and have a few spin-off ideas I’ll keep brewing on for later. The hardest problem thus far has been attempting to automate popularity rankings and automatic removal of defunct venues without breaching a bunch of ToS.
Also made https://drnk.beer, a small side project offering beer-related linkpages and @handles for Bluesky (AT Protocol). It's been on the backburner, but still very much live.
Probably looking for another small project for the next few months to focus on something else for a while. Always curious to see what others are building and doing. Thanks for sharing!
Think around 5% is from visitors, 10-15% from my own experience and the rest just procrastination research.
Started with the cities I know well, then added nearby countries and cities; the main focus has been Europe. At one point I tried to use RateBeer's dataset as a starting point, before they closed down, but it was so horribly outdated and irrelevant that it was more work than sourcing manually.
So I basically look for existing blog-ish top-lists for a city, then try to verify the information with search, social media, untappd, etc. Looking for social proof that the venue is operational and relevant.
To keep it updated I have some very rudimentary monthly tasks that ping a venue's website and notify me of signals that they've closed. I also email myself a list of 10 random venues with all relevant links daily, so I can do a manual five-minute alive check.
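For anyone curious what a daily check like that can look like, here's a minimal sketch, assuming a simple list of venues and a local SMTP server; the helper names, addresses, and "closed" heuristics are placeholders, not the actual setup:

```python
# Hypothetical sketch of the daily "alive check" email: pick 10 random
# venues, ping their websites, and mail myself the results. load_venues-style
# data, the SMTP host, and the closure keywords are all placeholders.
import random
import smtplib
from email.message import EmailMessage

import requests


def check_alive(url: str) -> str:
    """Return a rough status string for a venue's website."""
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
        if resp.status_code >= 400:
            return f"HTTP {resp.status_code} (possibly closed?)"
        if any(word in resp.text.lower() for word in ("permanently closed", "we have closed")):
            return "page mentions closure"
        return "looks alive"
    except requests.RequestException as exc:
        return f"unreachable ({exc.__class__.__name__})"


def daily_digest(venues: list[dict]) -> str:
    sample = random.sample(venues, k=min(10, len(venues)))
    lines = [f"{v['name']}: {v['website']} -> {check_alive(v['website'])}" for v in sample]
    return "\n".join(lines)


msg = EmailMessage()
msg["Subject"] = "Daily venue alive check"
msg["From"] = msg["To"] = "me@example.com"
msg.set_content(daily_digest([{"name": "Example Bar", "website": "https://example.com"}]))

with smtplib.SMTP("localhost") as smtp:
    smtp.send_message(msg)
```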
The idea came from noticing how most people manage money day to day: checking their balance, adjusting by feel, trying not to drift. There are tons of tools for planning or categorising, but not much that fits that kind of improvised pacing.
Still early, but trying to shape it around those habits – to make something simple and steady that supports how people already do things.
I built it because I was blown away with what the latest image generation models can do and found that interior design is one area where it could already provide significant value for people. I’ve already used it in just about every room in my house to help me decide on:
- which paint color I should use
- how I should arrange my furniture
- what color theme I should be using to match the design I’ve gone with
- general inspiration on decor
It’s free to download to try with sample imagery. Unfortunately, due to the cost of image generation, you won't be able to upload your own photos in the free version (yet). But I’m constantly improving the app and would really love some feedback.
https://apps.apple.com/us/app/roomai-restyle-your-home/id674...
The idea is to be the uptime monitoring + status page solution software teams choose. The next big project I'm looking at is making a Terraform provider for uptime checks, so setting up alerts for your new microservice becomes seamless.
Still years away from employing me full time, but we're getting there.
Just noticed your website checker might have a bug: https://onlineornot.com/website-down-checker?requestId=Kfd51...
It's pretty simple so far. I'm focused on getting the basics right and robust, such that I can start playing around without disrupting the real network. I don't have any specific goals, I'm just sort of messing about.
One question that dropped into my lap today was who just announced 2k new Infohashes over the span of 10 minutes. That'll keep me busy for a while.
It's been a lot of fun, but Meta HorizonOS (or whatever) is such a poor dev experience... Anyway, I'm now trying to rebuild the live environment mesh reconstruction feature that doesn't exist there, while running into my first limitations with Godot... Hopefully it will be ready in a couple of months!
If this whole thing got you curious you can watch a technical talk I made about this game at the Letsvision conference in Shanghai, CN. https://www.youtube.com/watch?v=CYFH2hiRNqk
...and if social media doesn't somehow destroy your soul, you can follow me here: https://x.com/sxpstudio
I'd like to volunteer for a software project but I struggle to find good ways of locating a project that interests me.
To find ideas, start with the software you are using. Is there anything you use a lot where you feel something could be improved? You can also look at websites you use and see if any of them are volunteer-based.
If that doesn't lead to anything, look at your skills, or skills you'd like to learn, then look for projects based on that.
And finally, just browse issues of various projects, search for "help wanted" or "good first issue" or similar, and simply try fixing one such issue; then see if you like working with that project.
There also was an HN thread similar to this one some time ago where people posted projects that they need help with: https://news.ycombinator.com/item?id=42157556
I also have a project that I could use some help with, but the learning curve is a bit high (or rather, the setup work you need to do before you can start coding): https://news.ycombinator.com/item?id=42159045
What helped me get unstuck and get my creativity back up was setting myself constraints, like whatever I work on today, I'll ship it today, or let's try to make an intentionally useless bash script in 20 minutes.
Maybe coming up with a list for people like us could itself be something.
How We Met – https://how-we-met.c47.studio/
Each day, I create a new 30-second episode based on the plot direction voted on by the audience the day before.
I'm trying to see how far the latest Video GenAI can go with narrative content, especially episodics. I'm also curious what community-driven narratives look like!
For the past week, I've been tinkering mostly with Runway, Midjourney, and Suno for the video content. My co-creator vibe coded the platform on Lovable.
It introduces quite a few changes. In my shipping apps, I'll probably simply tell the OS not to use Liquid Glass (for now), but for my various test harnesses, I will need to adapt. Looks like a fair bit of work.
Haven't released properly yet - not sure if it's stable but oh well.
I don't like using my personal email to sign up for things. But there are definitely things that I do want to sign up for - newsletters, try out some services.
I know there are temporary email services, but I actually want to use these services. Of course there is Apple's Hide My Email, which forwards to your real address.
But, I also don't want to flood my inbox.
Anyway, I wanted to receive these transactional emails in my personal Slack.
So, that's what Fro is for (https://fro.app)
- Sign up → get an email address → link it to your Slack channel
And you can now catch up on those newsletters via Slack.
Error details: invalid_team_for_non_distributed_app
After HashiCorp was acquired by IBM I decided to take time off from corporate life and build something for myself. For years I've also been a casual retail investor on the side.
Forums like /r/stocks and /r/wsb have been useful resources in the past for finding leads and interesting information. But meme-ification (among other factors) has substantially degraded sites like Reddit, to the point where interesting comments are much fewer and farther between. With TickerFeed I'm hoping to recapture what was lost: a platform where investors can discuss companies and all things stock market through meaningful long-form content.
It's also a chance to build something with my dream stack - Go + HTMX + SQLite, and that's been fun :)
Bogleheads used to be a place with serious folks, but I haven't been there in a decade or more, so no idea what it's like these days.
+1 on your tech stack
https://apps.apple.com/us/app/open-insure-self-insurance/id6...
Also, I don't see how this solves anything: just because a pill "looks" like another doesn't mean it is that pill; it could still be anything.
Knock-offs tend to turn up later, be of inferior quality physically, and have worse reviews online and in the clubs / social circles.
The card maker has its own web site with the rules for playing all kinds of card games, and it's filterable by number of players, including many games for one person.
So we decided to build out our own filesystem adapter and recently deployed it. It's pretty exciting to have our own solution that does exactly what we need and appears significantly faster.
It makes us want to open source pgs.sh, because it now has fewer dependencies to deploy.
I recently by request[0] added a cohesive timeline view for hn's /bestcomments. The comments are grouped by story and presented in the order that they were added to the /bestcomments page. It's a great way to see popular comments on active topics. I'm going to add other frills like sorting and filtering, but this seems to be as good a time as any to get some of your thoughts!
You can check it out here: https://hcker.news/?view=bestcomments
[0] https://news.ycombinator.com/item?id=44076987 (thx adrianwaj)
But no hornworms or caterpillars this year. Very strange!
I’d estimate 30-40% of their S3 bill could be eliminated just by properly compacting and sorting the data. I took it as an opportunity to learn DuckDB, and decided to build a tool that does this. I’ll release it tomorrow or Tuesday as FOSS.
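The tool isn't released yet, so this is just a rough sketch of the kind of compaction pass DuckDB makes easy; the bucket paths, sort columns, and compression settings below are illustrative assumptions, not the actual tool:

```python
# Rough sketch of a DuckDB-based compaction pass: read many small Parquet
# files from S3, sort them, and rewrite them as fewer, larger files.
# Bucket paths, sort columns, and settings here are placeholders.
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs; LOAD httpfs;")  # S3 credentials come from the environment
con.execute("""
    COPY (
        SELECT *
        FROM read_parquet('s3://my-bucket/events/raw/*.parquet')
        ORDER BY tenant_id, event_time   -- sorting keeps row-group stats tight for pruning
    )
    TO 's3://my-bucket/events/compacted/part-0.parquet'
    (FORMAT PARQUET, COMPRESSION ZSTD, ROW_GROUP_SIZE 1000000);
""")
```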
You're right, Kingly is the newest out of the bunch and the least satisfying to solve because of that. It's getting a big rewrite under the hood this week to make it more deducible and less random, so it should be much more fun to play.
It's grown over a dozen or so years, and now that I've finally decided to compile it into a book, everyone uses AI and no longer reads and learns from books, but instead through LLMs.
> when I finally decide to compile into a book, everyone now uses AI
This is part of what discourages me from starting now, sadly. That, and having more concepts for actual Python projects than I know what to do with.
Not me, I read the shit out of documentation and also books like yours which distill knowledge from professionals down to a bunch of useful points. I have never not learned something (even if I knew and forgot it) from reading a good book about "Working with X".
Thanks for your hard work, and for giving it away to others gratis.
Edit: the string formatting cookbook has a ton of useful info that I always forget how to use, I'm going to bookmark your site by this page: https://mkaz.blog/working-with-python/string-formatting
I did a screenshare demo of it yesterday: https://www.youtube.com/watch?v=GQzDfrdf71Y
* OSINT (r00m101 just beat me to it by launching...)
* Research into recommendation algorithms, advertising placement algorithms, etc
* Marketing (ad libraries, detailed analysis of content given data not even exposed to the mobile app due to some interesting side channels, things like trend analysis, etc)
* Market research for products
* Sales teams can use it to find exact mentions of other products. Eg: selling crash reporting software? Look up your target accounts' brands and find examples of complaints.
Plus a few more uses, given some imagination.
So I'm working on a site that allows user access to some of the read-only functions available here. Coming soon :tm:. Been really fun building it all in Rust, though :) If you're interested in anything here, email in profile.
My main question: why? Do you like the UI? I honestly really hate the Reddit app; I haven't seriously used it for browsing since I fixed up Libreddit into Redlib :)
I'd also just like to play around with different styles of frontend just as a way to hack on things.
What makes you special in this respect? It seems you're a small fish now, but if your niche project picks up steam, there's nothing to stop them from cutting you off or forcing you into court with an injunction and wasting your personal resources.
I spent a couple months travelling.
Then I spent a couple months trying to use transformer-based models of sorts to detect short-lived inefficiencies in the stock market to try to create a passive income trading bot. I know short-term quant trading is super hard to be profitable, but Rentech did it, so I figured I'd throw a couple months at it.
Then I spent another couple months on AI for science, robotic lab automation, and trying to get AI to do AI research inside a Docker container.
Frankly, I'm astonished that it hadn't collapsed out from under me when I was shoveling snow off of it this past winter. Behind the ledger that tied the balcony to the house was a mess of pressure treated lumber scabbed into a cavity in the logs formed by rot, none of it well-fastened or fastened into truly sound wood.
This is something I’ve needed myself over the last few years as jobs become shorter and shorter lived. I keep on improving it as some kind of compulsion.
* a library for filesystem tree operations (and other trees, if you're clever enough swapping in components)
* a utility to identify and extract wheels from pip's cache (so that they can be dumped into other installers' caches, for example)
I also hope to return to bbbb soon, if only to make sure that it can build PAPER's wheels smoothly (and with a few other basic conveniences implemented).
Oh, and I wrote an article for LWN recently and have plans for a few more....
Also, every region has different ways of representing a “neighbourhood”, so I get to learn how to extract viable data from each city. Lots of map stuff, I’m genuinely enjoying it!
- https://uceed957a657be57d7d53af97504.previews.dropboxusercon...
It felt good when I was able to figure out how to generate all the neighbourhood data for any given city. A bunch of fun OSM data manipulation though.
If you meant the app that I wrote last year, it's here - https://apps.apple.com/us/app/mapcut/id6478268682. The idea is much simpler though, as I mentioned.
- It felt like what I wanted to achieve is pretty simple (GPS coordinates -> display all on the same map), so I didn't want to pay a monthly subscription. I couldn't actually find an app that would dump all my HealthKit data directly onto the map, which was surprising.
- Last year when I wrote my app, I wanted to see how fast I could learn the basic mobile development loop
- Now, I couldn't really find anything that divides the coverage areas into real-world neighbourhoods. So, think of the West Village in NYC, Yorkville in Toronto, or Yoyogi in Shibuya, etc. Back when I used to live in Vancouver, I would look at my own app and kinda say in my head, "aight, I've walked through every street in West End, Vancouver". Figured it would be cool to have a proper way of tracking it, so that's what I'm working on currently.
- It's kinda fun to work on an app for my own needs
I'll take a look at Squadrats though! Looks pretty cool.
For example, you can scroll through 60 pictures from my window https://stacks.camera/u/ben/89n1HJNT
Most of the challenges are around handling images & rendering, but I've also been playing with Passkey-only authentication which I'm finding really interesting.
The other more recent is a web based CalDAV client for Todo items. I love the tasks.org mobile app and can't stand the Nextcloud Tasks UI so I'm making an alternative that'll be local first and simple but fast.
It now takes a webcam pic of me every other minute to see what's going on.
Likely will do a prosumer SKU; it will be faster and cheaper than the Mac Studio equivalent.
Our first devices were delivered to researchers in Feb for their clinical trial (we just provide the tech, it's their study).
We're prepping for pre-sale now as we finalize the last few manufacturing and design details.
It's free (https://github.com/welpo/ab-test-calculator), and it has no dependencies (vanilla JS + HTML + CSS).
Right now it only supports binary outcomes. Even with the current limitations, I feel it's way above many/most online calculators/planners.
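For context on what "binary outcomes" involves, here's a common textbook sample-size approximation for a two-proportion test; this is just a generic sketch, not necessarily the method the calculator above implements:

```python
# A standard sample-size approximation for a two-proportion A/B test
# (a generic textbook formula, not necessarily what the calculator uses).
from statistics import NormalDist


def sample_size_per_arm(p_control: float, p_variant: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per arm to detect a shift from p_control to p_variant."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_beta = z.inv_cdf(power)            # desired statistical power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = abs(p_variant - p_control)
    n = (z_alpha + z_beta) ** 2 * variance / effect ** 2
    return int(n) + 1


# e.g. detecting a lift from a 5% to a 6% conversion rate
print(sample_size_per_arm(0.05, 0.06))  # ~8,156 per arm
```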
Health insurance is one of the earliest, most important decisions immigrants make, and they often choose wrong. It can delay visa applications, cause coverage issues, or create expensive problems down the road.
Now they click a few buttons and get very specific recommendations explained in plain English. If they're confused, they can involve an independent insurance expert for free. The guy replies within an hour or two, and is cool with Whatsapp. The way I gather feedback from users, he's strongly incentivised to stay honest.
There is no AI involved, just good old-fashioned business logic. It means that the advice is sound, well-tested and verified by multiple competing experts.
It's such a far cry from either trusting whatever reddit or your employer tells you, or the slow back and forth of getting a quote from a (possibly dishonest) broker.
The second version[0] has been live for about a month, and the results are phenomenal. This third version vastly improves the quality of the advice, adding information about gap insurance for visa applicants, and making actual recommendations instead of listing all options.
It's a really fun project, even if the topic is boring. It's a great research, UX, copywriting, coding and business project. It's the product of a few months of hard work, and so far it seems to pay for itself.
[0] https://allaboutberlin.com/guides/german-health-insurance
I also wrote a metaprogramming language which generates a lot of the editor UI for the engine. It's a bespoke C parser that supports a small subset of C++, which is exposed to the user through a 'scripting-like' language you embed directly in your source files. I wrote it as a replacement for C++ templates and in my completely unbiased opinion it is WAY better.
In all seriousness, I think I have the same propensity to have a hundred unfinished projects and have a hard time finding motivation to complete them. The difference might be that I have this 'big' project called a 'game engine' that wraps them all up into some semblance of a cohesive whole. For example, projects that are incomplete, but mostly just good enough to be serviceable (sometimes barely):
1. Font rasterizer
2. Programming language
3. Imgui & layout engine
4. 3D renderer
5. Voxel editor
.. etc
Now, every one of those on their own is pretty boring and borderline useless .. there are (mostly) much better options out there for each in their specific domain. But, squash them all together and it's starting to become a useful thing.
It just happened that I enjoy working on engine tech and I picked a huge project I have no hope of ever finishing. Take from that what you will
"I hate to advocate drugs, alcohol, violence or insanity to anyone, but they've always worked for me. --Hunter S. Thompson
Username checks out
Toooootalllly. This project started out for me as a learning exercise, and for a long time an explicit non-goal of the project was to ship a game. It's just my own little land that I know every nook and cranny of, for experimenting and sharpening my tools, as it were. It's also the best way to learn that I've ever found.
I've always also had a side project or two in this domain but I've never managed to stick with one for more than 3-5 years.
Do you have any recommendations on voxel engine learning materials (e.g. books, courses, etc.)?
I'd recommend Handmade Hero for a more traditional resource on how to build a game engine. That's how I learned to program for real, and it worked great for me.
I'm also working on learning about building software with LLMs, specifically I am building a small personal project that will allow me to experiment with them using measurable hypotheses and theories, rather than just tweaking a prompt a bunch and guessing when it is working the best. I know others have done this, but I am building it from the ground up because I'm using it as a learning experience.
I plan to take my experimentation platform and build a small "personal agent" software package to run on my own computer, again building from scratch for my own learning process, that will do small things for me like researching something and writing a report. I don't expect anything too useful to come out of it, since I am using 1.7B/4B models on a MacBook Air M2 (later I might use my 3080 but that won't be much improvement), but it will be interesting to build the architectural stuff even if the agents are effectively just useless cycle-wasters.
We're off and running, making the world's best configurators for complex products. Our first clients love us. Our configurators implement some very personal ideas about front-end state management, and it's really a thrill to see it all working with real products, 3d rendering and zero latency.
* Expect/snapshot testing library for F# is now seeing prod use but could do with more features: https://github.com/Smaug123/WoofWare.Expect
* A deterministic .NET runtime (https://github.com/Smaug123/WoofWare.PawPrint); been steaming towards `Console.WriteLine("Hello, world!")` for months, but good lord is that method complicated
* My F# source generators (https://github.com/Smaug123/WoofWare.Myriad) contain among other things a rather janky Swagger 2.0 REST client generator, but I'm currently writing a fully-compliant OpenAPI 3.0 version; it takes a .json file determining the spec, and outputs an `IMyApiClient` (or whatever) with one method per endpoint.
* Next-gen F# source generator framework (https://github.com/Smaug123/WoofWare.Whippet) is currently on the back burner; Myriad has more warts than I would like, and I think it's possible to write something much more powerful.
I discovered that VSCode has a very nice solution, so I pulled the core VSCode libraries and injected them into a Chrome extension, using their dependency injection, IPC/RPC, and eventing to bridge the gap between all of these isolated JS contexts and expose a single, strongly-typed messaging API. My IPC/RPC shim sits on top of each of the native environments and communication mechanisms.
Yesterday, Microsoft released the source code for the Copilot chat. Apparently, since the basis of my Chrome extension is the same core libraries I can drop the VSCode chat UI into the side panel without much friction. Although, I might continue to use Microsoft's FluentUI chat currently implemented in the extension.
Because Copilot chat has a lot of code that runs in Node in Electron, I'm now working on porting all the agent capabilities for browser automation from the Copilot chat, including the code for intent, prompt creation, tools, disambiguation, chunking, embedding, etc. I'm 4 to 6 weeks away from having feature parity with Playwright for automation from a Chrome extension side panel that can do most of the inference locally using Hugging Face Transformers.js. There are also heuristics exposed as tools, so that if the intent is playing a video, all that is required is a tool that collects all the video tags and related elements with metadata. No need to spend $10 in tokens to figure out which video element to play.
Yeah, I think I'm 4 to 6 weeks away from having a Copilot chat in a browser doing agent automation.
If you want to see where I'm at today, https://github.com/adam-s/doomberg-terminal.
When I did Grub the crawler back in the day, that's what I was shooting for!
If you want a jumpstart on the Playwright stuff: https://github.com/kordless/gnosis-wraith. Runs on Google Cloud Run. The UI is still in progress but you can test it here: https://wraith.nuts.services. Uses tokens sent to your email for login.
The extension stuff is the way to go, IMHO! You can capture any page, even automatically.
It runs a 25-minute focus timer, then launches a 3-minute round of a multiplayer minigame (right now just multiplayer Minesweeper), followed by a 2-minute cooldown with a chatbox.
A couple friends and I do this manually, we work on side projects, mute ourselves on Discord, and play random games during the break. This just puts it all in one place.
Only Minesweeper for now, but planning to add a voting screen and a few more simple multiplayer games.
I'm planning on doing a proper writeup/release of this soon, but here's the short version: https://gist.github.com/samscott89/e819dcd35e387f99eb7ede156...
- Uses lldb's Python scripting extensions to register commands, and handle memory access. Talks to the Rust process over TCP.
- Supports pretty printing for custom structs + types from standard library (including Vec + HashMap).
- Some simple expression handling, like field access, array indexing, and map lookups.
- Can locate + call methods from binary.
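If you haven't used lldb's Python scripting before, here's a bare-bones sketch of the hook being described, registering a custom command and reading raw process memory; this is not the gist's code, and the module/command names are made up:

```python
# Bare-bones sketch of the lldb scripting hook described above (not the
# gist's code): register a custom command and read raw memory from the
# debugged process. Load with:
#   (lldb) command script import rust_helpers.py
import shlex

import lldb


def read_bytes(debugger, command, result, internal_dict):
    """Usage: read_bytes <hex-address> <length>"""
    addr_str, length_str = shlex.split(command)
    addr, length = int(addr_str, 16), int(length_str)

    process = debugger.GetSelectedTarget().GetProcess()
    error = lldb.SBError()
    data = process.ReadMemory(addr, length, error)
    if error.Success():
        result.AppendMessage(data.hex())
    else:
        result.SetError(str(error))


def __lldb_init_module(debugger, internal_dict):
    # Expose the Python function above as an interactive lldb command.
    debugger.HandleCommand(
        "command script add -f rust_helpers.read_bytes read_bytes"
    )
```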
The code and a demo video can be found here: https://github.com/osintbuddy/osintbuddy (and on codeberg)
In a few weeks I'm releasing a Chrome extension for a YouTube transcript and summary dashboard at https://www.infocaptor.com
Doing some minor fixes for https://wireframes.org - MockupTiger AI Wireframing
https://www.mercuryfalling.net
Apologies for US zip codes and imperial units only. I'll add support for international postal codes and offer Celsius/metric units soon.
I've always had issues collecting business metrics like "signups per day" in observability tools, but using marketing-type tools comes with its own set of problems.
Now I am focusing on trying to get brands / businesses to create games on https://playcraft.fun for their marketing campaigns or events
If you're interested, feel free to ping me!
- The encoder ring which works like an LED mouse, but in reverse: Fully reverse-engineered and on its own demo PCB
- The faceplate PCB, which does the actual control of the thermostat wires, has been laid out, but the first version missed a really obvious problem involving the power-on behavior of certain GPIO pins on the ESP32, so I've got rev 3 on order from the PCB manufacturer.
Nest Thermostats of the 1st and 2nd generation will no longer be supported by Google starting October 25, 2025. You will still be able to access temperature, mode, schedules, and settings directly on the thermostat – and existing schedules should continue to work uninterrupted. However, these thermostats will no longer receive software or security updates, will not have any Nest app or Home app controls, and Google will end support for other connected features like Home/Away Assist. It has been pretty badly supported in Home Assistant for over a year anyway, missing important connected features.
Yet another example of why not to buy a product that needs to be tethered to its manufacturer to work. Good luck. I’d be willing to beta test (I’d have to check what rev mine is)
https://support.google.com/googlenest/answer/16233096?hl=en
> Upcoming end of support for Nest Learning Thermostats (1st and 2nd gen)
> Nest has announced the end of support for Nest Learning Thermostats (1st and 2nd gen). Your thermostat will no longer connect to or work in the Google Nest app or Google Home app starting on October 25, 2025.
Any ideas on how to source 2nd gen Nests? I just checked eBay and my local Craigslist; nada.
Do recyclers accept requests? Like pulling all the Nest units from the waste stream?
- "Wireless High Resolution Scrolling is Amazing": https://youtu.be/FSy9G6bNuKA
- "DIY haptic input knob: BLDC motor + round LCD": https://youtu.be/ip641WmY4pA
https://shop.m5stack.com/products/m5stack-dial-esp32-s3-smar...
> As a versatile embedded development board, M5Dial integrates the necessary features and sensors for various smart home control applications. It features a 1.28-inch round TFT touchscreen, a rotary encoder, an RFID detection module, an RTC circuit, a buzzer, and under-screen buttons, enabling users to easily implement a wide range of creative projects.
> The main controller of M5Dial is M5StampS3, a micro module based on the ESP32-S3 chip known for its high performance and low power consumption. It supports Wi-Fi, as well as various peripheral interfaces such as SPI, I2C, UART, ADC, and more. M5StampS3 also comes with 8MB of built-in Flash, providing sufficient storage space for users.
I've built a few HA-compatible systems using M5Stack products, mostly the Atom-S3 Lite connected to various sensors and lights.
My secret agenda is to explore how the "information supply chain" can be tracked across the data-processing stack all the way from the original audio through transcription, the processing pipeline, and UI. I'm using language models for multi-stage summarization and want to be able to follow the provenance of summaries all the way back to the transcripts and original audio.
Yes, you could try making one using Observable Plot (which is what I used for these): https://observablehq.com/plot/transforms/dodge
One of the slides in my presentation has the full prompt I used, in case that's useful. I ran it on chunks of the podcast transcript and then merged/deduplicated the results to get the data that's visualized here.
Just prototyping at the moment, but the goal is to allow users to not only share files (even big ones) but also forms, like Google forms, but encrypted and one time only (read once).
The use case I have in mind is allowing businesses to create GDPR forms (with private info, consent, etc), share unique urls with specific customers, and once the data is received by the business delete it from the server.
This could be useful to businesses that don't have a customer-facing portal, but have to deal with PII and the customer needs to consent and verify the data and what it's used for.
The data is encrypted client side (web crypto) and the password either shared in the url (in the hash fragment, also encrypted by a key stored on the server) or by other means (eg. could be the recipient's dob or id number or some other previously shared or known value).
Still trying to figure out the details, use cases, and business value, but the core backend is done and so is the client-side crypto stuff. I managed to get chunked AES-GCM working so that it doesn't load the whole file in memory in order to encrypt it; it does that in chunks of, let's say, 2 MB. Chrome also has chunked requests (in addition to responses) for sending the file to the server, but I would probably need to come up with some other mechanism to get that working on other browsers (like sending the chunks in multiple requests and appending to a single file on the server, but that adds more complexity, so I'm still working it out).
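For anyone curious about the chunking approach: the real implementation uses the browser's Web Crypto API, but here's a minimal Python sketch of the same idea, using a fresh per-chunk nonce derived from a random prefix plus a counter so no (key, nonce) pair is ever reused; the file paths and chunk framing are illustrative assumptions:

```python
# Python sketch of chunked AES-GCM (the real thing runs on Web Crypto in the
# browser). Each ~2 MB chunk is encrypted separately with a unique nonce.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK_SIZE = 2 * 1024 * 1024  # ~2 MB per chunk, as in the description


def encrypt_file_chunked(src_path: str, dst_path: str, key: bytes) -> None:
    aesgcm = AESGCM(key)
    nonce_prefix = os.urandom(8)          # random per file
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        dst.write(nonce_prefix)
        counter = 0
        while chunk := src.read(CHUNK_SIZE):
            nonce = nonce_prefix + counter.to_bytes(4, "big")  # 12-byte GCM nonce
            dst.write(aesgcm.encrypt(nonce, chunk, None))      # ciphertext + tag per chunk
            counter += 1


key = AESGCM.generate_key(bit_length=256)
# encrypt_file_chunked("form-data.bin", "form-data.enc", key)
```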
Hoping to add something from experience here, but:
It never is "one time"; the number of ways people mess up is huge. Even if you make submission require five confirmations, once a week there will be a new user who acknowledges five times that they filled in everything they needed and knows it will not be possible to fill it in again, but… they really need to fix that one thing they messed up when filling it in.
Gonna wait until the LLM credits refresh next month to continue, but I'm very happy so far.
Elixir has been cool.
I'd previously tried to learn TLA+ a few times but always eventually lost interest and gave up. This approach was quick and easy. Disappointed that TLC can't really exhaustively check more than 8 steps; being O(n!), 9 steps would take months, even after all the symmetry optimizations. Maybe will look at TLAPS next.
Among other things, my team has implemented access-based sharing using web links, like Google Docs for real paper handwriting. And we've just launched Quin, our AI assistant for real paper handwriting. Super useful for getting help with math, language learning, looking up relevant facts, generating ideas, etc.
Working on AI/NLP stuff in low-resource languages. Working on some research ideas (hope to publish) as well as some practical tools for learning languages.
Think like ACE Studio, but I’m going much less for pitch performance and much more for clarity, expressiveness and human realism.
Very much at the data labeling phase but a little bit beyond the crude initial experiment phase.
I could create a portfolio page for my various projects - https://projects.learntosolveit.com/
https://github.com/dahlend/kete
Research grade orbit calculations for asteroids and comets (rust/python).
I began working on this when I worked at Caltech on the Near Earth Object Surveyor telescope project. It was originally designed to predict the location of asteroids in images. I have since moved to Germany for a PhD, and I am actively extending this code for my PhD research (comet dust dynamics).
It's made to compute the entire asteroid catalog at once on a laptop. There is always a tradeoff between accuracy and speed; this is tuned to be <10 km over a decade for basically the entire catalog, but giving up that small amount of accuracy gained a lot of speed.
Example, here is the close approach of Apophis in 2029:
https://dahlend.github.io/kete/auto_examples/plot_close_appr...
clinical summaries of dietary supplements
It's good enough for me that I've started using it for my MCP masterclass videos / code export / transcript: https://mcpmasterclass.com
In Fostrom, devices connect via our SDKs or standard protocols such as MQTT and HTTP, and send and receive structured, typed data through pre-defined Packet Schemas. Each device gets its own sequential mailbox for messages. You can trigger webhooks or broadcast messages to other devices based on incoming data, powered by programmable actions (written in JS).
We entered Technical Preview recently. Since then, we've been working on:
- Major upgrades to Actions: making it easier to write action code, along with testing before deploying, and more docs on how to write good actions. Coming this week.
- We're in the process of releasing Device SDKs in multiple languages, including JS, Python, and Elixir soon. The SDKs are powered by an underlying lightweight Device Agent written in Rust.
- A new data explorer to view and analyze your fleet's datapoints, which will be available in a few weeks.
Happy to answer questions and appreciate any feedback.
A simplified DAW for mixing together tracks with different keys and tempos. It uses WebAssembly and emscripten under the hood for audio processing.
It’s a work-in-progress passion project of mine where I get to explore new technologies and hone my UX / Web a11y skill set.
https://Full.CX - still hums along in the background. Couple of customers. Just added MCP, which has been amazing to use with AI coding agents. Updating the UI/UX to ShadCN to improve usability and make future changes easier, replacing NextUI and Daisy.
https://Toolnames.com - no changes this month.
https://Risks.io - little bit of work on the new platform, yet to be released.
https://dalehurley.com - little facelift
Same thing in Firefox and Chrome on Mac.
My most recent release is a camera app dedicated to RAW photography, which focuses on being fast & lightweight & technically precise - I wrote the website to be both a user’s manual and a crash course in photography concepts: https://bayercam.app
I’m working on my next app release, which I’m pretty excited about!
- Manages the entire range of personal (and maybe business) information/content: Documents, Media, Messages (email, instant, etc.), Contacts, Bookmarks, Calendar, etc.
- Tag based, so that the question of where to put and find content is quite a bit easier to answer. Think of a set of flat folders, on one or more devices, within which the files are stored with tags attached. However, there will be some improvements on the usual implementation of tag-based systems out in the wild. Since people find navigating/browsing files more natural than searching, virtual folders will be dynamically generated to provide guided navigation. Also, entire folders can also be treated as atomic and tagged/managed as one object, useful for repositories and projects. And, heuristics (and maybe AI) will be used to automatically tag files when they are imported into the tool, greatly reducing the tedium of adding tags the first time.
- Is file based, so that all information is ultimately physically stored as individual files. This allows information to be more easily managed on a physical level: moved around, backed up, exported/imported, searched, navigated, etc. without the restrictions imposed by the opaque islands of information we have now. So in addition to docs, each email/instant message, contact, scheduled task/event, bookmark, etc. would ultimately be stored as a file, unlocking all the things you can do with files.
- Has a local web-based UI launched from a local agent, so actual file content does not usually need to move across the network and stays local, and the tool is also easily multi-platform, with consistent UI irrespective of platform.
- Provides a cloud web UI as well, that communicates with content devices through the local agent, so that content stored across multiple devices can be managed in one central location, even without direct access to those devices, team/org features can be provided. However, file content still stays local, except when shared.
- Provides tools for exporting data as files from the data islands of various apps and services, and backing up as files to cloud storage services.
My vision is a situation where I am in charge of my own data irrespective of whatever device, app, or service I use, can ensure that it is always available and will not be lost, and that I can easily navigate and search through it all to find whatever I want, no matter how scattered and massive it is.
I welcome your thoughts. What would make this work for you? Would you mostly prefer a cloud UI or a local UI? Are there any technical or market gotchas I should be aware of?
[1] Here are some of my issues with personal information management affordances of current tech, which is driving me to work on a solution:
- Our data is too bound to device and vendor islands. Can't easily move my information across Apple/Google/WhatsApp, etc. accounts. Can't easily merge and de-duplicate either. I almost always somehow lose data whenever I have to move to a new phone, etc.
- Hard to own your data on many services: Discord, Slack, etc. Can't easily export or search.
- Hard to have a 360° overview of, and handle on, all your data assets, and to query them in a consistent manner.
- The file as a unit of information storage and management is very ergonomic; we shouldn't allow that concept to be buried by vendors for their own gain.
Wasn’t planning on announcing it here but what the hell.
Also has > 800 automated feature tests, in-app documentation, and has gone through security audits using tools like ZAP, etc. I've built a lot of SaaS products over the years, and I'm building 6DollarCRM from the standpoint of having learned a lot of things the hard way. I'm currently working on data importers and browser extensions for easily adding new contacts.
Give it a spin and let me know what you think.
I've been meaning to wrap the project up for a while. Went down a rabbit hole trying to make the vim containers fault tolerant and scalable using kubernetes. But, after a friend told me I could do everything using cloudflare containers, I've been changing my backend to use that instead.
Surprisingly the blocker has been identifying notes from the microphone input. I assumed that'd have been a long-solved problem; just do an FFT and find the peaks of the spectrogram? But apparently that doesn't work well when there's harmonics and reverb and such, and you have to use AI models (google and spotify have some) to do it. And so far it still seems to fail if there are more than three notes played simultaneously.
Now I'm baffled how song identification can work, if even identifying notes is so unreliable! Maybe I'm doing something wrong.
I was thinking this would be a good project to learn AI stuff, but it seems like most of the work is better off being fully deterministic. Which is maybe the best AI lesson there is. (Though I do still think there's an opportunity to use AI in translating a teacher's notes (e.g. "pay attention to the rest in measure 19") into a deterministic ruleset to monitor when practicing.)
The idea is to take a fully weighted hammer-action keyboard with nothing else, such as the Arturia KeyLab 88 MkII, and add tiny LED lights above each key. Then have a tablet computer running a tutor, which shows the notes plus a Guitar Hero-like display of the coming notes, with the LEDs shining to show where to press, and correction for timing and heaviness of press, etc.
It's based on the assumption that the most common frequency difference in all pairs of spectrum peaks is the base frequency of the sound.
- For the FFT, use the Gaussian window, because then your peaks look like Gaussians; the logarithm of a Gaussian is a parabola, so you only need three samples around the peak to calculate the exact frequency.
- Gather all the peaks along with their amplitudes. Pair all combinations.
- Create a histogram of frequency differences in those pairs, weighted by the product of the amplitudes of the peaks.
When you recognise a frequency you can attenuate it via comb filter and run the algorithm again to find another one.
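Here's a rough numpy sketch of that procedure for a single mono frame; the window width, peak threshold, and histogram bin size are illustrative choices, not tuned values:

```python
# Rough sketch of the procedure described above: Gaussian-windowed FFT,
# parabolic interpolation of log-magnitude peaks, then a histogram of
# pairwise peak-frequency differences weighted by amplitude products.
import numpy as np


def estimate_f0(frame: np.ndarray, sample_rate: float) -> float:
    n = len(frame)
    idx = np.arange(n)
    window = np.exp(-0.5 * ((idx - (n - 1) / 2) / (n / 8)) ** 2)  # Gaussian window
    spectrum = np.abs(np.fft.rfft(frame * window))
    log_mag = np.log(spectrum + 1e-12)
    hz_per_bin = sample_rate / n

    # Local maxima above a crude threshold.
    peaks = [
        k for k in range(1, len(spectrum) - 1)
        if spectrum[k] > spectrum[k - 1]
        and spectrum[k] > spectrum[k + 1]
        and spectrum[k] > spectrum.max() * 0.01
    ]

    # Parabolic interpolation: a Gaussian peak is a parabola in log magnitude,
    # so three samples give the sub-bin peak position.
    refined = []
    for k in peaks:
        a, b, c = log_mag[k - 1], log_mag[k], log_mag[k + 1]
        offset = 0.5 * (a - c) / (a - 2 * b + c)
        refined.append(((k + offset) * hz_per_bin, spectrum[k]))

    # Histogram of pairwise frequency differences, weighted by amplitude products.
    bin_width = 1.0  # Hz
    hist = {}
    for i in range(len(refined)):
        for j in range(i + 1, len(refined)):
            diff = abs(refined[j][0] - refined[i][0])
            weight = refined[i][1] * refined[j][1]
            key = round(diff / bin_width)
            hist[key] = hist.get(key, 0.0) + weight

    return max(hist, key=hist.get) * bin_width  # most common difference ~ f0
```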
The goal is to be a full mobile IDE that lets you use Claude Code, Gemini CLI, and other agentic code editors.
Has mobile-native file browsing and git integration.
Tritium is the legal integrated drafting environment: an egui Rust project to bring the IDE to corporate lawyers.
[0] https://apps.apple.com/us/app/reflect-track-anything/id64638...
For my 3D audio project I need an affordable way to make plastic cases. I felt like injection molding services are way overpriced, so I decided to make the molds in-house. Turns out, CNC milling is overpriced, too. As are 5 axis CNC mills. So in the end, we built our own CNC machine.
And like these things always go, I found an EMI issue with my power supply and a USB compliance bug in the off-the-shelf stepper control board. But it all turned out OK in the end so we now have the first mold tool that was designed and machined fully in-house. And I learned so much about tool paths and drill bits. Plus it feels like now that everyone has experienced hands-on how stuff is milled, my team got a lot better at designing things for cheap manufacturing.
Why do you need to make so many molds?
It is easy to select multiple holes/pockets at once, so if you iterate, you don't spend time redoing CAM! It does traveling salesman to solve for efficient paths, which even the expensive packages don't get right. It calculates v-bit paths too.
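The app's actual solver isn't described; just to illustrate the hole-ordering problem, here's a tiny nearest-neighbour sketch:

```python
# Sketch of the toolpath-ordering idea: order drill points so the spindle
# travels a short total distance. A simple nearest-neighbour heuristic,
# not the app's actual solver.
import math


def order_holes(holes: list[tuple[float, float]]) -> list[tuple[float, float]]:
    remaining = holes[:]
    path = [remaining.pop(0)]                 # start at the first hole
    while remaining:
        last = path[-1]
        nearest = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nearest)
        path.append(nearest)
    return path


print(order_holes([(0, 0), (10, 0), (1, 1), (9, 2)]))
# [(0, 0), (1, 1), (9, 2), (10, 0)]
```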
I don’t want to auto compose messages or anything. I just want the computer to filter out things I don’t care about and tell me the answer to things without hunting around my inbox.
* https://trosko.hr (HR, Android/iOS app) - super-simple receipt/bill tracker (snap a photo of the receipt, reads it using Gemini, categorizes and stores locally - no accounts, no data gathering)
* https://github.com/senko/think (open source) - Python client library for LLMs (multiple providers, RAG, etc). I dislike the usual suspects (LangChain, LlamaIndex) but also don't want to tie myself to a specific provider, so I'm chugging along on my own lib for this.
I welcome feedback, just keep in mind that this is a work in progress, and I haven't even reviewed it for clarity and typos.
Since TP 3.0 does no optimisations, and looking at the progress so far (~25% decompiled), it seems like matching decompilation should be achievable.
If/when I get to 100%, I hope to make the process of annotating the result (Func13_var_2_2 is hardly an informative variable name) into a community project.
Although it has cult status in Israel for some reason.
Good luck!
It's similar with Turbo Pascal 3.0, but there's only one segment since it's a good old COM file. The compiler just copies its own first ~10000 bytes, comprising the standard library, and splices the compiled result to the end.
I can see how this makes transcompilation relatively straightforward, although the real mode 16-bit code is a bit unpleasant with all the segment stuff going on, so you might as well just decompile :D. It's very possible that similar instructions will be emitted in 3.0 and 4.0 for the same source input.
My program also has the stack checking calls everywhere before calling functions. I think that people using Pascal weren't worried about performance that much to begin with, so they didn't bother disabling it.
Tail calls between different VM functions are the next challenge. I'm going to somehow have it allocate the VM instance in the same space (if the frame size of the target is larger than the source, "alloca" the difference). The arguments have to be smuggled somehow while we are reinitializing the frame in-place.
I might have a prefix instruction called tail which immediately precedes a call, apply, gcall or gapply. The vm dispatch loop will terminate when it encounters tail similarly to the end instructions. The caller will notice that a tail instruction had been executed, and then precipitate into the tail call logic which will interpret the prefixed instruction in a special way. The calling instruction has to pull out the argument values from whatever registers it refers to. They have to survive the in-place execution somehow.
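Purely as an illustration of that mechanism (this is the commenter's own VM design, so everything below is a made-up toy, not the actual instruction set): a `tail` prefix terminates the inner dispatch loop, and the caller smuggles the argument values out, reinitializes the frame in place, and loops instead of recursing.

```python
# Toy sketch of the tail-call mechanism described above. Instruction names
# and frame layout are invented for illustration only.
def run(code, frame, functions):
    while True:
        pc = 0
        tail_pending = None
        while pc < len(code):
            op, *args = code[pc]
            if op == "const":
                frame[args[0]] = args[1]              # reg <- constant
            elif op == "add":
                frame[args[0]] = frame[args[1]] + frame[args[2]]
            elif op == "tail":
                tail_pending = args                   # (target, argument registers)
                break                                 # terminate the dispatch loop
            elif op == "ret":
                return frame[args[0]]
            pc += 1

        if tail_pending is None:
            return None
        target, arg_regs = tail_pending
        new_code, frame_size = functions[target]
        saved = [frame[r] for r in arg_regs]          # smuggle the arguments out first
        frame = [0] * max(frame_size, len(frame))     # reuse/extend the frame space
        frame[: len(saved)] = saved
        code = new_code                               # loop again: no host-stack growth


functions = {
    "g": ([("const", 1, 1), ("add", 0, 0, 1), ("ret", 0)], 2),
}
f_code = [("const", 0, 41), ("tail", "g", [0])]
print(run(f_code, [0, 0], functions))   # 42
```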
It’s like Anki but for speaking and an LLM grades your response.
From a dev perspective this area has a ton of super interesting algorithmic / math / data structure applications, and computational geometry has always been special to me. It's a lot of fun to work on.
If anyone here is interested in this as a user, I'd love for any feedback or comments, here or you can email me directly: tyler@vexlio.com.
Some pages the HN crowd might be interested in:
* https://vexlio.com/blog/making-diagrams-with-syntax-highligh...
* https://vexlio.com/solutions/state-diagram-maker/
* https://vexlio.com/blog/speed-up-your-overleaf-workflow-fast...
Re: desktop version. The short answer is yes, probably, but I don't have a concrete timeline. I made tech and architecture choices from the beginning to make sure a cross-platform desktop version always remains possible. Frankly, the biggest obstacle for desktop is not the app itself, but distribution and figuring out a pricing model. The current solution for enterprise, business, and other interested people, is to self-host Vexlio, with separate licensing.
This is an example, https://terostechnology.github.io/terosHDLdoc/docs/guides/st...
But it only outputs an SVG, and there are no tools (AFAIK) that go from diagram to code, which should be easy to set up.
So I'd consider extending this to both generate code and read in code and make these nice interactive diagrams.
Do you know if the FPGA and/or hardware communities use any type of formalism for design or documentation of state machines? One example of what I mean is Harel statecharts - essentially a formalized type of nested state diagram.
Overall amazing though, will be using!
Diagram-as-code option?
i.e. a language syntax from which a diagram can be generated?
I find a lot of the time taken up in doing diagrams is laying them out properly and then having to rearrange them when it grows beyond a certain size.
This may, however, be an old-man Visio user problem that's been better solved by more recent options...
Enterprise licensing? Donation based? Hosting fees with value-add mark up?
I started on a Zig one and nope'd right on out of that after a few hours of fighting the compiler.
I'm currently working on porting a bunch of my Rust mini-games to other languages. [3]
[0] https://github.com/Syn-Nine/odin-mini-games/tree/main/2d-gam...
[1] https://github.com/Syn-Nine/c3-mini-games/tree/main/2d-games...
[2] https://github.com/Syn-Nine/freebasic-mini-games/tree/main/2...
[3] https://github.com/Syn-Nine/rust-mini-games/tree/main/2d-gam...
It seems like everyone just wants to make the next big popular engine with Rust because it's "safe", and few people really want to make actual games.
I also felt like prototyping ideas was too slow because of all the frequent manual casting between types (very frequent in game code to mix lots of ints and floats, especially in procedural generation).
In the end... it just wasn't fun, and was hard to tune game-feel and mechanics because the ideation iteration loop was slow and painful.
Don't get me wrong, I love the language syntax and the concept. It's just really not enjoyable to write games in it for me...
So I made a proof-of-concept app on iOS that uses the Gmail API to send out newsletter emails. I wish I could just send prepopulated emails (with inline attachments and recipients) to the iOS mail client instead of asking for Gmail OAuth permissions, but it doesn't look possible.
Now I'm trying to create a polished app for alpha testing. Been exploring data persistence (SwiftData, Core Data, RxDB, etc.) and settled on Core Data. Architecture-wise, I've settled on MVVM + SwiftUI. At the moment I'm trying to figure out how to make mocks and Xcode preview data generation ergonomic.
So far, I am pleasantly surprised by Swift and iOS development, but I still hate Xcode.
I want to publish it on Google play, but I need testers. If anyone cares about budgeting, I'd love to get some feedback.
Here's the app link: https://play.google.com/apps/testing/dev.selfreliant.wasa_bu...
I don't think you can download it without being added to my testers list though. Send me your Gmail address if you're interested!
> Send me your Gmail address if you're interested!
Where? nleschov at gmail
Word of warning: Google is pretty dumb and even requires testers to pay for the app. It's going for $3, but I can reimburse everyone who helps me test once the testing phase is finished.
After a lot of grief trying to make Plex and Jellyfin work with my collection, and then some more with the community [1], I decided to make my own.
There's no selling point or clear pathway to monetization, as other solutions are way more mature and feature-complete, but this is my own and it serves my needs the best.
I've been working on it on and off for the last 8 years or so, and it's been my personal benchmark for the JS ecosystem. The way it works: every now and then I come back to the project, look at the latest trends in the JS world, and ask myself a simple question - what should I change in the codebase to bring it in line with the latest trends? And every time it leads to a full rewrite. Kind of funny, kind of sad.
In a nutshell I have a huge movie collection - basically I'm preparing for armageddon where all online streaming services cease to exist and I need both backend to fetch me detailed information about movies in the collection as well as frontend to help to decide what to watch tonight.
My next major endeavor will be trying to integrate RAG to take a bite at my holy grail - being able to ask a question like "get me a good gangster flick" and get reasonable recommendations.
[1] I think it was Jellyfin where I was asking on their forums how to manually create a collection, stating I'm a software engineer with 20+ years of experience, and they kept telling me that I shouldn't touch the code... while running an online campaign asking for volunteers to contribute to the codebase.
For me that means Go + stdlib HTML templates (I want to try Gomponents at some point) to minimize dependencies. I copied the HTMX JS minified file into my source tree for some interactivity. I handwrote the CSS.
It looks very "barebones" (some would say ugly), but it's been solid as a rock. It's been a year and I haven't needed to update a thing!
I remember asking them some 10-15 years later to help me with a project and they were like "sure, we'll do it in CakePHP". Initially I was like "you mean in COBOL?". But then I realized they were masters of that tech, it works, and there's no need to reinvent the wheel and learn some new trendy web framework that will be forgotten in the blink of an eye.
https://github.com/banagale/FileKitty
My most recent release includes signed .dmg installer on top of brew, and a local build option.
Although it should compile to any platform, I want to take advantage of the new Foundation Models SDK Apple announced at WWDC.
I also recently released something called slackprep, a CLI tool and Python library that wraps slackdump, converting Slack export data into LLM-groomed Markdown transcripts.
That includes labeling inline images and organizing them for upload as LLM context.
https://github.com/banagale/slackprep
I see these and other utilities coming together to assist in assembly of deep context for system level design.
We initially built it for Shopify, but now it’s fully embeddable, supports headless implementations, and integrates with tools like Klaviyo, Zapier, n8n, and Snowflake. One thing we’re especially proud of is how fast and unobtrusive it is: polls load async, don’t block rendering, and are optimized for mobile and low-latency responses.
From a tech angle:
Frontend is all React, optionally SSR-safe.
Backend is Node.js + Postgres, with a heavy focus on queueing + caching for real-time response pipelines.
API-first design (public API just launched: apidocs.zigpoll.com).
We recently open-sourced our n8n integration too.
If you're a dev working on ecom, SaaS, or even internal tooling and need a non-annoying way to collect structured feedback, happy to chat or get you set up. Feedback welcome — especially critical stuff. Always looking to improve.
You upload interviews with family members (text, audio or video all work) and the system automatically transcribes the text, finds key people or events, and puts it together with other information you may have gathered about those events or people before. Like building a genealogical tree but with the actual details about people's lives.
In the works to also attach pictures of said people and events to give it some life.
My hope is to make it easier to use a computer blind than with my usual workflow with a monitor.
But I've been working on it for the past 7 months. It's running and I'm tweaking/adding features while marketing it.
I recently impulse bought an Epson receipt printer, and I’ve started putting together a server in Go to print a morning update every day. Getting it to print the weather, my calendar and todos, news headlines, HN front page. Basically everything I pick up my phone for in the morning, to be on paper rather than looking at a screen first thing. Very early days but hacking away and learning escpos/go! (Vibecoding a lot of it)
1st published song, Piano Place Hold in Am: https://www.youtube.com/watch?v=EUOhb-wHdFQ
Building cables for multiple personal and professional projects, I was frustrated by having to cobble together harness diagrams in Illustrator or Visio, cut snippets from PDFs for connector outlines, map pin-outs, wire specs, cable constructions, and mating terminals, and manually update an Excel BOM.
Splice gives you:
An SVG canvas to drag-and-drop any connector or cable from your library to quickly route and bundle wires. Assign signal names to wires or cable cores.
Complete part data: connector outlines, pin-outs, terminal selections (by connector family & AWG), cable core colors & strand counts, wire AWG/color.
Automated BOM & exports: parts-ready diagrams, wiring drawings, and a clean BOM in SVG, PNG, or PDF.
Connector & cable creators: connectors or cables not in the existing library can be added with an optional outline and full specs (manufacturer, MPN, series, pitch, positions, IP rating, operating temp, etc.), then published privately or shared publicly.
Demos & tutorials: Harness Builder → https://www.youtube.com/watch?v=JfQVB_iTD1I
Connector Creator → https://www.youtube.com/watch?v=zqDsCROhpy8
Cable Creator → https://www.youtube.com/watch?v=GFdQaXQxKzU
Full tutorials → https://splice-cad.com/#/tutorial/
No signup required to try—just jump in and start laying out your harness: https://splice-cad.com/#/harness. If you want to save, sign up with Google or email/password.
Check out https://www.hi-harnesses.com/ - limited parts at this point but the closest thing I know of.
TestingBee is a way for startups to get part-time QA for their product's critical flows.
I've been working at startups for the last four years and I've consistently been on teams struggling to balance launching quickly versus keeping our product working. We've never had success creating a substantial test suite because our product is changing too fast and engineers are too overloaded.
I built TestingBee as the solution. It lets you write your app's flows in plain English, and the bot I created will execute those flows in your app as a user would. This triggers on every push to make sure every release keeps your product working :)
Architecture uses Traits (data) and Behaviors (logic) to implement things in the world model.
Currently working on getting filtering working and it might require me to change the model again significantly.
And no worries about "credentials" in the repo. It is all just dummy data.
Currently one needs to employ the Django admin to add data to the database. I might add another way later. Or an ability to import JSON files or something.
An iOS client for Cloudflare. Surprisingly, there’s none out there, maybe because nobody needs it? I do, so I’ve created one and it’s now available on TestFlight [0].
Another interesting thing I’ve recently discovered is that LLMs are pretty great at vetting tenancy agreements, so I’m working on a website that reads tenancy agreements and will return a list of unfair clauses that might be present in the contract along with a detailed explanation of how you should follow up with the landlord/agency. I still need to finish it but if you’re interested it’s here [1].
Play spot-the-difference with the old screenshot: https://github.com/Leftium/weather-sense#weathersense
- At least five major changes!
- Or look at the commit history ;)
---
I'm designing a game that:
- is simple to play. (just log in and check-in with your geolocation. Optionally add a short message)
- helps people stay connected. (You can view friends/family on the globe with some mild competition/cooperation)
- Right now, I'm trying to figure out something compelling to "collect." Cities/states, weather conditions, letters, numbers, words, etc... I think it should be tangible.
I am particularly enjoying the Stern-Brocot tree exploration: https://calc.ratmath.com/stern-brocot.html#0_1 I hope people will find it to be a nice way of understanding good rational approximations and how they tie into continued fractions and mediants. A nice exercise is to type x^2 in the expression box and go down the path to always advance towards x^2 being 2. This gives the continued fraction representation of the square root of 2.
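As a rough illustration of that exercise (my own sketch, not code from the site), here is a walk down the Stern-Brocot tree toward the square root of 2 by repeatedly taking mediants; the path it prints passes through the continued-fraction convergents of sqrt(2) (1, 3/2, 7/5, 17/12, ...):

```
from fractions import Fraction

# Walk the Stern-Brocot tree toward sqrt(2): at each node take the mediant of
# the current left/right bounds and descend toward the side where x^2 -> 2.
lo, hi = (0, 1), (1, 0)               # left bound 0/1, right bound "1/0" (infinity)
for _ in range(12):
    num, den = lo[0] + hi[0], lo[1] + hi[1]   # mediant of the two bounds
    print(f"{num}/{den} ~= {num / den:.6f}")
    if Fraction(num, den) ** 2 < 2:
        lo = (num, den)               # sqrt(2) lies to the right of this node
    else:
        hi = (num, den)               # sqrt(2) lies to the left of this node
```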
Another Moby-Dick of mine is Kadessh, the SSH server plugin of Caddy, formerly known as caddy-ssh. This one is an itch. I wrote about it here https://www.caffeinatedwonders.com/2022/03/28/new-ssh-server..., and the repo is here: https://github.com/kadeessh/kadeessh. Similar to the other one, feedback and helping hands are sorely needed.
They are both sort of obsessions and itches of mine, but between the day job and school, I barely get the clear mind to give them the attention they require.
Still figuring out how to pitch it, but so far it's 'Duolingo for relationship issues'
We launched this month and are growing fast which is exciting. I'm mostly impressed by how easy React Native has gotten, as a long-time native Apple Platforms dev, given all the training LLMs have on React.
Node based visual editor for 2D LED patterns over BLE. Web/iOS/Android app to ESP32, works with most addressable LEDs. It’s like TouchDesigner x WLED x PixelBlaze, but Bluetooth so you don’t need annoying wifi setup. And hopefully you can make much more interesting patterns without touching any code.
Eventually the ESP32 devices will save all the patterns they’ve seen and share them with apps that connect to them. So there’s a pattern ecosystem, like Electric Sheep.
Still rough and in progress (and constantly deploying so it may break for you )
https://apps.apple.com/us/app/daily-optimist-think-positive/...
I wrote an MCP server in C#/.NET that lets LLMs safely generate and run JavaScript using the Jint interpreter.
It includes a `fetch` analogue using `System.Net.HttpClient`, as well as `jsonpath-plus`, and a built-in secrets manager.
The prime use case is working with HTTP REST APIs with an LLM. With this, you can let users safely generate and execute JavaScript in a sandbox.
This uses bad things (cmake-only builds, the Debian policy agenda) against their creators: cmake outputs enough information to create correct `pkg-config` files, for example.
This would make it realistic to zero-backdoor an Ubuntu-style system.
For 30 years Linus has been holding the line on a stable kernel ABI and only FAANGs and HFT shops have reaped the full benefits.
The goal is to make a Minecraft server that constantly updates itself, giving you "unlimited content", while still retaining any progress you've made so far.
It's called SmartSearch - uses SentenceTransformers for embeddings and FAISS for fast similarity search. Best of all, it runs locally on your computer.
Why? I absolutely despise Mac's search. I want to be able to search within documents, images, pdf etc.
Github: https://github.com/neberej/smart-search/
Demo: https://github.com/user-attachments/assets/aed054e0-a91f-459...
I've been thinking a lot about the current field of AI research and wondering if we're asking the right questions? I've watched some videos from Yann LeCun where he highlights some of the key limitations of current approaches, but I haven't seen anyone discussing or specifying all major key pieces that are believed to be currently missing. In general I feel like there's tons of events and presentations about AI-related topics but the questions are disappointingly shallow / entry-level. So you have all these major key figures repeating the same basic talking points over and over to different audiences. Where is the deeper content? Are all the interesting conversations just happening behind closed doors inside of companies and research centers?
Recently I was watching a presentation from John Carmack where he talks about what Keen is up to, but I was a bit frustrated with where he finished. One of the key insights he mentions is that we need to be training models in real-time environments that operate independently from the agent, and the agent needs to be able to adapt. It seems like some of the work that he's doing is operating at too low of an abstraction level or that it's missing some key component for the model to reflect on what it's doing, but then there's no exploration of what that thing might be. Although maybe a presentation is the wrong place for this kind of question.
I keep thinking that we're formulating a lot of incoherent questions or failing to clearly state what key questions we are looking to answer, across multiple domains and socially.
RAG and/or Fine-tuning is not the way.
Another topic is security, which would consist of using Ollama + Proxmox for example, but of course, right now, as emergent intelligence is still early, we would have to wait 2-3 years for ~8 B parameter local models to be as good as ChatGPT o3 pro or Claude Opus 4.
I do believe that we are close to discovering a new interface, beyond what is now presenting itself through IDEs and the command line (terminal). I strongly believe we are 1-2 years away from a new kind of interface that is not meant for developers only.
One that feels like an IDE, works like a CLI, but is as intuitive as Chrome is for browsing the web.
My day job required me to go into office frequently, and I'm really feeling the reduced social connection of being fully remote in a small company. Any suggestions how to deal with this? I'm planning to reconnect with old friends, surf a lot, go rock climbing, and maybe take dance / music / other classes. Would also love if anyone wants to work together in the same place (library, coffee shop, etc). I'm in Escondido California, but happy to drive ~30 min to meet folks.
Check out Eventship. Hussein is local to SD. You should also meet Fred for press.
I’ll try and remember about these in the winter. I need new booties anyways. How many mm? 2 plus 2 so 4?
Ya exactly, 2 layers of 2mm each, for a total of 4mm. They’re less warm than most 4mm booties would be though, because they’re intended for the protection. If you’re in SoCal that’s a feature — your feet should stay warm but not overheat :)
But if you want a balance of flexibility and stopping stingray stings, we really are the best. Nobody else is even trying, lol, the other options pretty much do nothing, or are encased in steel and not flexible at all.
I love SSGs as they’re simple and fast and the sites they make can be hosted anywhere with little maintenance. But, after helping a non-technical friend get up and running with one, the UX is rubbish.
So I’m building a combined CMS and SSG called Sparktype, designed for writing and publishing. Users can create pages or collections, write and export the generated site. At the moment it exports to zip, but I’m working on connecting to Netlify or GitHub for automatic deployment.
My goal is to build something that allows people to create a publication with the ease and polish of say, Medium or Substack, but which is completely portable and will work on almost any hosting.
It’s very early MVP - the editor works, but the default site theme is rough around the edges and there are a bunch of bugs. I’m currently working on getting it good enough so that I can create its own marketing and documentation site with it.
I’d love any thoughts or feedback you might have.
Building it in public.
This is written entirely in 6502 assembly, and uses a fun new mapper that helps a little bit with the music, so I can have extra channels you can actually hear on an unmodded system. It's been really fun to push the hardware in unusual ways.
Currently the first Zone of the game is rather polished, and I'm doing a big giant pixel art drawing push to produce new enemies, items, and level artwork to fill out the remainder of the game. It's coming along slowly, but steadily. I'm trying to have it in "trailer ready" / "demo" state by the end of this calendar year. Just this weekend I added new chest types and the classic Mimic enemy to spice things up.
https://github.com/BrokeStudio/rainbow-net/blob/master/NES/m...
In terms of capabilities, graphically it's something like MMC5 (8x8 attributes and a bunch of tile memory) while sound wise it's almost exactly VRC6. The real nifty feature though is ipcm: it can make the audio available for reading at $4011
It turns out the APU inside the NES listens to writes to $4011 to set the DPCM level, which many games use to play samples. By having the cartridge drive it for reading, I can very efficiently stream one sample of audio with the following code:
inc $4011
So I just make sure to run that regularly and hey presto, working expansion audio on the model that doesn't normally support it. It aliases a little bit, but if I'm clever about how I compose the music I can easily work around that.
flat planes and edges: https://youtu.be/-o58qe8egS4
semi-cylinder pipes : https://youtu.be/8fjHNDGKeu4
Aiming to automate that TAM of 5Bn/yr of manual labor, growing at 12% CAGR.
SOM: ~100Mn
Document translator that keeps layout and formatting
It turns ebooks, articles, and documents into synchronized audio with real-time text highlighting. It's great for people who prefer listening while reading (or want to stay focused), and it works fully offline with a one-time purchase, no subscriptions.
I’m bootstrapping it and trying to figure out how to market it effectively. So far, I’ve had some traction and early sales just by posting on Reddit, but I’m still learning the marketing side — especially how to reach people who’d benefit from it most.
Would love to hear how others approached early growth for similar bootstrapped tools.
I wouldn't need / want this for reading English, but it'd be killer for improving my Spanish vocab and speech-recognition. It's a great idea, and lots of people could get a lot of value out of it. Well done!
The free Shopify directory (240k stores and 580m products at the moment).
Currently I'm stuck implementing a storage combinator with EiffelWebFramework[4]
[0] https://dl.acm.org/doi/abs/10.1145/3359591.3359729
[1] https://scholar.google.com/citations?view_op=view_citation&h...
[2] https://en.wikipedia.org/wiki/Eiffel_(programming_language)
The GitHub repo has a link to what I plan to turn into a series of blog posts I've started writing about it.
The language is heavily inspired by Python for the dev UX, and the interpreter is written in RPython (what Pypy uses). Rewriting to RPython was tedious, but the 80x speedup was worth it.
Chrome web store link: https://chromewebstore.google.com/detail/n8n-copilot-chat-wi...
The thing is, we’ve been retrofitting software made for humans for machines, which creates unnecessary complications. It’s not about model capability, which is already there for most processes I have tested; it’s that systems designed for people are confusing to AI and don't fit their mental model, making the proposition of relying on agents to operate them a pipe dream from a reliability or success-rate perspective.
This led me to a realization: as agentic AI improves, companies need to be fully AI-native or lose to their more innovative competitors. Their edge will be granting AI agents access to their systems, or rather, leveraging systems that make life easy for their agents. So, focusing on greenfield SaaS projects/companies, I've been spending the last few weeks crafting building blocks for small to medium-sized businesses who want to be AI-native from the get-go. What began as an API-friendly ERP evolved into something much bigger, for example, cursor-like capabilities over multiple types of data (think semantic search on your codebase, but for any business data), or custom deep-search into the documentation of a product to answer a user question.
Now, an early version is powering my products, slashing implementation time by over 90%. I can launch a new product in hours supported by several internal agents, and my next focus is to possibly ship the first user-facing batch of agents this month to support these SaaS operations. A bit early to share something more concrete, but I hope by the next HN thread I will!
Happy to jam about these topics and the future of the agentic-driven economy, so feel free to hit me up!
Scrollable social network where the user generated content is microgames.
https://www.inclusivecolors.com/
The idea is it helps you create palettes that have predictable color contrast built-in, so when you're picking color pairs for your UI/web design later, it's easy to know which pairs have accessible color contrast.
For example, you can design your palette so that green-600, red-600, blue-600, all contrast against grey-50, and the same for any other 600 grade vs 50 grade color, like green-600 vs green-50.
That way you won't run into failing color contrast surprises later when you need e.g. an orange warning alert box (with different variations of orange for the background, border, heading text and body text), a red danger alert box, a green success alert box etc. against different color backgrounds.
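For reference, the check underneath all of this is the WCAG contrast ratio; a small sketch (hex values here are made up, not the site's palette) of how a pair like a hypothetical green-600 on grey-50 could be verified:

```
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance of an sRGB hex color like '#2f7d32'."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    lin = lambda c: c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def contrast_ratio(fg: str, bg: str) -> float:
    hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# a made-up green-600 on a near-white grey-50: roughly 4.9, clearing the 4.5:1 AA threshold
print(contrast_ratio("#2f7d32", "#fafafa"))
```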
From a technical side, I've processed around 325k+ matches. Right now, only main ATP / WTA matches (no challengers, no doubles, no mixed) sadly. I'm working on expanding that, improving our infra layout, exposing a public facing API, collecting the data on my own, and most importantly live score ingestion (especially given the fact that Wimbledon is starting tomorrow).
Feedback on the app through Canny / joining the Discord / following the Twitter / or any and all of the above would be much appreciated.
Would love to know what you think.
Want to test it out? Sign up to the waitlist at https://brice.ai and I'll give you access tomorrow.
jq is an incredibly powerful tool, but it's not always the easiest tool to use. LLMs are remarkably good at constructing filters for most use cases, but for people that work with JSON a lot, learning jq can be a real benefit.
FOSS toolkit for SRS and adaptive tutoring systems. Inching closer to proper demos and inviting usage.
In essence, I'm looking to decouple ed-tech content authoring (eg, a flash card, an exercise, a text) from content navigation (eg, personalizing paths and priorities given individual goals and demonstrated competencies), allowing for something like a multi-sided marketplace or general A/B engine over content that can greatly diminish the need to "build your own deck" for SRS to be effective.
The project became my main focus recently after ~8 years of tiny dabbling, and I've largely succeeded at pulling the spaghetti monolith into a sensible assembly of packages and abstractions. E.g., the web UI can now pull from either a 'live' CouchDB datalayer or from statically served JSON (with converters between), and I'm 75% through an MVP TUI interface to the same system as well.
You're probably looking for "showing it to me" or "making me aware of it" rather than "noticing it to me" as noticing is usually used like "I noticed thing x" or "You have been noticed"
I also have some ideas of a programming language designed mainly to process files in DER format (as well as data from stdin and to stdout), but have not actually implemented anything so far.
I also have ideas about an operating system design and computer design, and should have help to write the specification properly, and then it can be implemented afterward.
I didn't realize how much overhead an SFML window draw call has, granted I have yet to focus on optimizing that.
Seems like my first candidate for multithreading; I also think the scheme I implemented for managing texture/sprite switching is advised against, so I may need to slightly refactor how I store and swap based on object state.
Yeet
https://github.com/turbolytics/sql-flow
It has gotten some interest; unfortunately, building tools as a business strategy is rough.
Beginning to work on first actual product! More soon :)
The library of public domain classics is courtesy of Standard Ebooks. I publish a book every Saturday, and refine the EPUB parser and styler whenever they choke on a book. I’m currently putting the finishing touches to endnote rendering (pop-up or margin notes depending on screen width) so that next Saturday’s publication of “The Federalist Papers” does justice to the punctilious Publius.
Obligatory landing page for the paid product:
Recently many companies have fallen victim to hiring North Korean workers and losing millions of dollars. There are a few red flags to identify these candidates and avoid becoming a victim.
The site itself is built with Astro, content is written in Markdown. It's still very much a work in progress: the design’s evolving, search isn’t done yet, and I’ve only scratched the surface with a handful of categories out of the dozens I have planned.
Simple license, no subscription, perpetual license with 2 years of updates.
It's something I've needed for a while working in engineering teams in B2B SaaS. Currently technical co-founder of AdQuick.com, an outdoor advertising marketplace backed by Initialized.
Interested in collaboration, feedback, and all other things.
Started as a very simple app for me to play around with OpenAI’s API last year then morphed into a portfolio project during my job search earlier this year. Now happily employed but still hacking on it.
Right now, a user can create a quiz, take a quiz, save it and share the quiz with other people using a URL.
Demo: You can try out the full working application at https://quizknit.com
Github Links: Frontend: https://github.com/jibolash/quizknit-react , Backend: https://github.com/jibolash/quizknit-api
Here's the summary:
- read all your sources: public websites, docs, video
- answer questions with a confidence score and no hallucinations, with citations
- cut support time; it even integrates directly into your customer-facing chatbots like Intercom
Still deliberating on the business model. If anyone would be interested in taking a look, I would love to show you.
I got tired of using the AWS console for simple tasks, like looking up resource details, so I built a fast, privacy-focused, no-signup-required, read-only, multi-region, auto-paginating alternative using the client-side AWS JavaScript SDKs. Every page has a consistent UI/UX, and resources are displayed as a searchable, filterable table with one-click CSV exports. You can try a demo here[1]
[1] https://app.wut.dev/?service=acm&type=certificates&demo=true
- the subheading is describing the “how” not the “what”. Meaning, what would you use this product for?
- in general, all the headlines could be positioned from the “what a user would do” scenario. Eg instead of saying “Resource Relationship Diagrams” … say “See Resource Relationships with Ease”
- if I’m understanding the tool correctly, this seems like a “lookup” tool. In which case lookup.dev is for sale … just fyi.
So, I embarked a couple of weeks ago on my journey to build a relational database, which checks the boxes for me personally and I hope that this will be useful for other developers as well.
Project priorities (very early stage):
- run code where the data is: inside the database, with user-defined functions (most likely directly Rust and Wasm)
- a frontend to directly query the database without the risk of injection attacks (no REST, GraphQL, ORMs, models and all the boilerplate in between)
- can be embedded into the application or run as a standalone server; I hope this to be the killer feature enabling full integration tests in milliseconds
- an imperative query language, which puts the developer back in control: instead of thinking in terms of relational algebra, it's centered around the idea of transforming a dataframe
Or in other words, I want to enable single developers or small teams to move fast, by giving them an opensource embeddable relational firebase.
If you have any thoughts on that, I would love to talk to you.
Saturated market riddled with alternatives, but I wasn't really able to find a low-friction way to collect these things that met all my needs. Most of this stuff gets lost in DMs or comment sections, which just wasn't working for me.
Also figured it would be a neat way to rethink paying for a creator's attention, i.e. giving the option to tip (and soon subscribe to a VIP inbox of sorts).
I release code into the public domain hoping it will be useful. There's some fast code for Groebner basis computations using the F4 algorithm (parallelized - article to follow), and some routines for machine integers e.g. discrete logarithm, factoring, and prime counting.
https://github.com/WillAdams/gcodepreview
Currently finishing up a rewrite which changes from using union commands (which resulted in an ever more deeply nested CSG tree) to collecting everything in a pair of lists using append/extend and then applying a single union operation to each, resulting in a flatter structure.
Once all that is done I'm hoping to add support for METAFONT/POST curves....
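To illustrate the union-flattening described above, here is a minimal sketch with stand-in objects (not gcodepreview's actual API):

```
# Illustrative stand-ins for a real CSG API (not gcodepreview's actual calls).
class Box:
    def __init__(self, size):
        self.size = size

def union(*parts):
    """Stand-in union: records its children instead of doing real geometry."""
    return ("union", list(parts))

# Before: pairwise unions produce an ever more deeply nested CSG tree
nested = union(union(union(Box(1), Box(2)), Box(3)), Box(4))

# After: collect everything with append/extend, then union once -> flat tree
parts = [Box(1)]
parts.extend([Box(2), Box(3)])
parts.append(Box(4))
flat = union(*parts)
```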
At any given time, she’s working with any number of clients (directly or subcontracted, solo or as part of a team) who each have multiple, simultaneous marketing campaigns across any number of channels (google/meta/yelp/etc), each of which is running with different parameters. She spends a good amount of time simply aggregating data in spreadsheets for herself and for her clients.
Surprisingly we haven’t been able to find an existing service that fits her needs, so here I am.
It’s been fun for me to branch out a bit with my technology selections, focusing more on learning new things I want to learn over what would otherwise be the most practical (within reason) or familiar.
It's called Heap. It's a macOS app for creating full-page local offline archives of webpages in various formats with a single click.
Creates image screenshot, pdf, markdown, html, and webarchive.
It can also be configured to archive videos, zip files etc using AppleScript. It can do things like run JavaScript on the website before archiving, signing in with user accounts before archiving, and running an Apple Shortcut post archiving.
I feel like people who are into data hoarding and self host would find this very helpful. If anyone wants to try it out:
https://apps.apple.com/ca/app/heap-website-full-page-image/i...
Runs a cron daily, no manual work needed. Had fun building this.
https://apps.apple.com/us/app/percento-net-worth-tracker/id1...
While Cursor stops after writing great code, Vide has full runtime integration: it goes the extra mile to make sure the UI looks on point, works on all screen configurations, and behaves correctly. It does this by being deeply integrated into Flutter's tooling; it's able to take screenshots, place widgets on a Figma-like canvas, and even interact with everything in an isolated and reproducible environment.
I currently have a web version of the IDE live but I'm going to launch a full native desktop IDE very soon.
My value proposition is to make developers more productive by skipping the boring stuff, while FlutterFlow is more of an "all-in-one" app platform.
Beyond the landing page (built with Astro), I've been building all of the route optimization, the delivery and warehouse management systems. A combination of go and java has allowed me to write a few microservices in the past 6 months to handle all of my logistical processes, and I'm just testing the mobile app in the field as we speak! I hope to make some of the code open-source one day!
A residential proxy network that leverages blockchain by turning the home connections of everyday users who have no contracts forbidding such practices into rentable exit nodes, each contributing bandwidth in exchange for rewards. A dedicated blockchain ledger tracks the exact amount of data each node relays and automatically releases micropayments in the network’s native cryptocurrency, ensuring transparent, real-time compensation without a middleman.
But with my adhd, I'll likely end up working on another project sooner than later. Interested in MCP aggregation.
Just made the first devlog video: https://youtu.be/CFgDlAthcuA
Can be used for everything from slightly skewed beat-making to generating undulating waves of sound!
I've added a few exclusive features to one of my extensions for subscribers in addition to settings syncing, and have auth and Stripe redirects and webhooks working, so now at the stage of working out the best heuristics to use for when to sync and connecting the extension to the settings API.
I store the chunks in a custom-built database (on top of riak_core and Bitcask), and I have it automatically make an HLS stream as well. This involved remuxing the AAC chunks into MPEG-TS and dynamically creating the playlist.
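The dynamic playlist part is conceptually simple; here is a rough Python-flavoured sketch of rendering a live HLS media playlist from the most recent remuxed MPEG-TS chunks (the real implementation is Erlang, and the names here are made up):

```
def render_playlist(segments, target_duration=6):
    """segments: list of (sequence_number, uri, duration_seconds), oldest first."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{segments[0][0]}",
    ]
    for _, uri, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(uri)
    return "\n".join(lines) + "\n"   # no EXT-X-ENDLIST, since the stream is live

print(render_playlist([(42, "chunk42.ts", 6.0), (43, "chunk43.ts", 6.0), (44, "chunk44.ts", 5.8)]))
```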
It's also horizontally scalable, almost completely linearly. Everything is done with Erlang's internal messaging and riak_core, and I've done a few (I think) clever things to make sure everything stays fast no matter how many nodes you have and no matter how many concurrent streams are running.
Local-first web applications with a compiled backend – After eight years working on web platforms, the conventional stack feels bloated. The client already defines what it wants to fetch or insert. Usually through queries. So why not parse those queries and generate the backend automatically (or at least, the parts that can be)?
Triple stores as a core abstraction – I’ve been thinking about using a triple-based model instead of traditional in-memory data structures, especially in local-first apps. Facts could power both state and logic, and make syncing a lot simpler.
Lower-level systems programming – I’ve mostly worked in high-level languages, but lately I’ve been writing C libraries (like hash maps) and built a minimal 32-bit bare-metal RISC-V OS.
It’s all still brewing, but I think these ideas tie together nicely. What if the OS didn’t have a file system and just a fact store? Everything could be queried and updated live, like a Lisp machine but built on facts.
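A toy sketch of the "facts as triples" idea (illustrative only, not the project's code): state is a set of (entity, attribute, value) facts and queries are simple wildcard patterns over it.

```
facts = set()

def assert_fact(entity, attribute, value):
    facts.add((entity, attribute, value))

def query(entity=None, attribute=None, value=None):
    """None acts as a wildcard, like a very small datalog pattern."""
    return [f for f in facts
            if (entity is None or f[0] == entity)
            and (attribute is None or f[1] == attribute)
            and (value is None or f[2] == value)]

assert_fact("doc-1", "title", "Local-first notes")
assert_fact("doc-1", "author", "alice")
assert_fact("doc-2", "author", "alice")
print(query(attribute="author", value="alice"))   # every document authored by alice
```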
Some other things I’ve been playing with:
A jQuery-like framework and element factory - you can pass signals that automatically update the DOM.
A Datomic-like database on top of OPFS - where queries become signals that react to new triples as they enter the system. Pairs well with the framework above.
Trying to see how far inference can go given that queries usually specify this information (ex: where(r => r.author == $SESSION.AUTHOR_ID)).
Would love to boot on a physical machine eventually though! If you have suggestions, happy to hear them :)
https://github.com/whyboris/Video-Hub-App & https://videohubapp.com/
If you have videos you want to browse, preview, search, tag, sort, etc on your computer, my software might be great for you :)
Even though I made it as a toy/proof of concept, it's turned out to be pretty useful for small to medium size projects. As I've used it, I've found all kinds of interesting benefits and helpful usage patterns. I've tried to document some; I hope to do more soon.
--https://rethinkingsoftware.substack.com/p/the-joy-of-literat...
--https://rethinkingsoftware.substack.com/p/organic-markdown-i...
--https://rethinkingsoftware.substack.com/p/dry-on-steroids-wi...
--https://rethinkingsoftware.substack.com/p/literate-testing
--https://www.youtube.com/@adam-ard/videos
The project is at a very early stage, but is finally stable enough that I thought it'd be fun to throw out here and see what people think. It's definitely my own unique spin on literate programming and it's been a lot of fun. See what you think!
It's an environment for open-ended learning with LLMs. Something like a personalized, generative Wikipedia. Has generated courses, documents, exams and flashcards.
Each document links to more documents, which are all stored in a graph you grow over time.
Do you have any socials? Would love to keep up with updates about this project
No socials so far as I've mostly been posting updates on the Anthropic discord. But I made an X account for it just now (@periplus_app) where I'll mirror the updates.
You can also reach me any time by email for bug reports, feature reqs etc.
I'm thinking you could have it in the same interface eventually, but right now all the machinery & prompts assume it's decomposable declarative knowledge.
A few courses I generated using above:
- https://dev.to/freakynit/network-security-cdn-technologies-a...
- https://dev.to/freakynit/aws-networking-tutorial-38c1
- https://dev.to/freakynit/building-a-minimum-viable-product-m...
It is like MS Word "Bullets and numbering" but it's a small UNIX filter, no GUI, much faster and smoother than MS Word or Google Docs.
Perhaps the beginning of a markup language for text or HTML files intended to be converted to MS Word.
Like, it would be very cool to do something like have your feature branch deployed to a separate pod in the dev cluster, and have an ingress rule set up so that it points to that pod only.
So if your dev environment usually points to <some-app>.dev.example.com, you deploy your feature branch to the dev cluster, but on a different pod, and then have it reachable at <some-app>.feature-branch-1.dev.example.com without touching main.
I think it's a neat idea and I'm sure it should be possible if I configure some istio settings.
It's all new thing and it's fun to have a direction towards learning
The premise is that when I read social spaces like Reddit or X, if the government has done anything contentious you get nothing more than strident left takes, or strident right takes on the topic. Neither of which is informative or helpful.
So I have set up a site which uses AI, specifically guided to be neutral and non-partisan, to analyse government actions from the source documents. It then gives a summary, expected effect, benefits and disadvantages, and ranks the action against 19 "things people care about" (e.g. defence, environment, civil liberties, religious protection, etc.)
The end result is quite compelling. For example here's the page that summarises all the actions which are extremely beneficial or disadvantageous to individual liberties: https://sivic.life/tyca/tyca_individual_liberties/
https://www.npmjs.com/package/@mindpilot/mcp
Claude Code Quickstart:
```
claude mcp add mindpilot -- npx @mindpilot/mcp
```
Been developing this AI agent framework for 1 year now. It's very similar to n8n, but exclusively for open-source LLMs. It also just recently got MCP support.
The project is https://kdeps.com
1. Open-Source AI Curriculum Generator (OSS MathAcademy alternative for other subjects). Think MathAcademy meets GitHub: an AI system that generates complete computer science curricula with prerequisites, interactive lessons, quizzes, and progression paths. The twist: everything is human-reviewed and open-sourced for community auditing. Starting with an undergrad CS foundation, then branching into specializations (web dev, mobile, backend, AI, systems programming).
The goal is serving self-learners who want structured, rigorous CS education outside traditional institutions. AI handles the heavy lifting of curriculum design and personalization, while human experts ensure quality and accuracy.
2. Computational Astrology as an AI Agent Testbed For learning production-grade AI agents, I’m building a system that handles Indian astrology calculations. Despite the domain’s questionable validity, it’s surprisingly well-suited for AI: complex rule systems, computational algorithms from classical texts, and intricate overlapping interpretations - perfect for testing RAG + MCP tool architectures.
It’s purely a technical exercise to understand agent orchestration, knowledge retrieval, and multi-step reasoning in a domain with well-defined (if arcane) computational rules.
- Has anyone tackled AI-generated curricula? What are the gotchas?
- Interest in either as open-source projects?
2 projects worth checking out here: https://github.com/kamranahmedse/developer-roadmap (open-sourced roadmaps, no course content) and also https://github.com/ossu for more college curricula level (with references to outside courses).
I've been personally working on AI generated courses for a couple of months (probably will open source it in 1–3 months). I think the trickiest part that I haven't figured out yet is how to build a map of someone's knowledge so I can branch out of it; things like "have a CS degree" or "worked as a Frontend Dev" are a good starting point, but how to go from there?
I really like how Squirrel AI (EdTech Company) breaks things down — they split subjects into thousands of tiny “knowledge points.” Each one is basically a simple yes/no check: Do I know this or not? The idea makes sense, but actually pulling it off is tough. Mapping out all those knowledge points is a huge task. I’m working on it now, but this part MUST be open source
btw, feel free to email me to bounce ideas or such (it's in my bio)
It synthesizes unusual market activity, insider moves, options flow, sentiment, technical and news analysis to deliver specific, actionable trade setups.
This is only good for paper trading, as most of the setups are very counterintuitive. You won't be able to execute them, and if you did try, you would end up losing sleep and your health even when you are correct.
https://catskull.net/podcast https://podcasts.apple.com/us/podcast/interrobang-with-dave-...
I built the whole tech stack with Jekyll and Cloudflare and wrote about it on my blog: https://catskull.net/podcast-workflow.html
Finally, I built a simple chat app as a web component with a Cloudflare durable object and have a few AI bots spamming the chat that may or may not ignore you: https://catskull.net/the-most-dangerous-app.html
This is a beginner-friendly arXiv paper exploration platform, with a powerful feature to select multiple papers and get AI analysis and comparison.
It creates all the necessary boilerplate to generate PHP Docker containers, creates all of the MySQL users, and sets up all of the directory structures to get a new website up and running. It even helps set up SFTP users and gets letsencrypt certificates set up with certbot.
It's still very early days, but I appreciate that what used to be a bunch of commands that I would run by hand and slightly change every few months is now pretty much just all self contained. Should mean the next migration to a different server is easier.
Created in frustration because I was too cheap to pay the $50/month for a cPanel license.
Next up is a small lamp for migraines. I noticed that dim red light is much more tolerable to me than anything else. I mean obviously, darkness is ideal, but you need to do other stuff like eat and drink eventually if it's a persistent one.
So I designed a quick circuit to use fast PWM (a few MHz, so no flicker) to control a big red LED. I'd like it to be sturdy and still functional in 50-100 years, so I made some design choices for long-term durability. No capacitors, replaceable LED and so on.
A simple project, but it's a busy month and I need something easy this time.
But the environment made it hard to move fast. The systems were outdated, and there wasn’t much support for building AI tools in-house. That experience made me realize I needed to grow beyond the modeling layer. There were things I wanted to build, but I didn’t yet have the full skill set to do it on my own.
So I’ve been learning full stack development. I had built a small chatbot app before, but this time I’m applying what I’m learning toward a focused MVP for the inspection work. It’s been a practical way to connect what I know with what I want to make real.
Custom high performance C++ / OpenGL/WebGL engine. Uses Jolt physics and Luau and Winter scripting.
It's a lot of fun and pretty challenging code.
Folks have reached out about having an 'In Loving Memory Of' site for their loved ones, so I'm turning this into a side business to help out more with my (now widowed) father's retirement and care.
Same as people saying things like "Don't say no one loves you, because I love you <3" but it's in a forum like this, or on Reddit. You don't know them. you don't love them.
Or did they just short circuit. "Dead relative -> Say sorry for your loss". Like an AI bot.
It's the second one.
Lets you create encrypted containers disguised as normal files. Thousands of images, PDFs, videos, secrets, and keys all stuffed into an innocent-looking "Vacation_Summer_2024.mp4".
I've almost got true steganography working, i.e. getting the carrier file to actually open in any file system (currently with mp4, pdf, png and jpeg).
Things like this have existed in the past, but nothing with a simple UI and recent encryption standards.
Low-friction Markdown-based voice journaling. Voice memos are transcribed locally with Whisper and written as markdown files (to any folder or Obsidian vault).
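The core loop is small; a minimal sketch assuming the open-source openai-whisper package (the app may use a different local transcription stack, and the paths here are invented):

```
import datetime
import pathlib
import whisper

model = whisper.load_model("base")                     # runs fully locally
result = model.transcribe("memo.m4a")                  # a recorded voice memo

vault = pathlib.Path("ObsidianVault/journal")
vault.mkdir(parents=True, exist_ok=True)
note = vault / f"{datetime.date.today()}-memo.md"
note.write_text(f"# Voice memo {datetime.date.today()}\n\n{result['text'].strip()}\n")
```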
Also it’s been a fun excuse to try out Cursor and other AI tools I don’t normally use in my day job.
I have 1 user - my 8 yr old son.
I don’t add any ads.
Ideally, making rent as an open source developer.. any help appreciated. :-)
Locally running wispr flow equivalent without any tracking, signup, analytics or subscriptions.
Dictate into any text window on your Mac. Works really well with technical language specifically when using with claude code, cursor, windsurf.
Very fast since the underlying whisper.cpp lib is very well optimized for Metal and CoreML usage on Apple Silicon machines.
coming up with intern projects is right difficult nowadays
Long-term, passion project of mine - I'm hoping to make this the best typing platform. Just launched the MVP last month.
The core idea of the app is focusing on using natural text. I don't think typing random words (like what some other apps do) is the most effective way to improve typing.
We offer many text topics to type (trivia, literature, etc) where you type text snippets. We offer drills (to help you nail down certain key sequences). We also offer:
- Real-time visual hand/keyboard guides (helps you to not look down at the keyboard)
- Extremely detailed stats on bigrams, trigrams, per-finger performance, etc.
- SmartPractice mode using LLMs to create personalized exercises
- Topic-based practice (coding, literature, etc.)
I started this out of passion for typing. I went from 40wpm to ~120wpm (wrote about it here if you're interested: https://www.typequicker.com/blog/learn-touch-typing) and it completely changed my perspective and career trajectory. I became a better programmer and writer because I no longer had to think about the keyboard, nor look down at it.
Currently, we're doing a lot of analysis work on character frequencies and using that to constantly improve the SmartPractice feature. Also, exploring various LLM output testing/observability tools to improve the text generation features.
Approaching this project with a freemium model (have paid AI powered features; using AI to generate text that targets user weakpoints) while everything else in the app is completely free. No ads, no trackers, etc. (Hoping to have sufficient paid users so that we can run the site and never have to even think about running ads).
I've received a lot of feedback and am always looking for ways to improve the site.
1. AI interactions cost the service money, which is inevitably passed on to the consumer. If it's a feature I do not wish to use, I like to have options to avoid paying for that feature. So in this case, avoiding AI use is a purely economic decision.
2. I am concerned about the content LLMs are trained on. Every major AI has (in my opinion) stolen content as training material. I prefer not to support products which I believe are unethically built. In the future, if models can be trained solely on ethically sourced material where the authors have been properly compensated, I would rethink this position.
I've experimented in the past with running an LLM like this on a CPU-only VPS, and that actually just works.
If you host it on a server with a single GPU, you'll likely be able to easily fulfil all generation tasks for all customers. What many people don't know about inference is that it's _heavily_ memory bottlenecked, meaning that there is a lot of spare compute left over. What this means in practice is that even on a single GPU, you can serve many parallel chats at once. Think 10 "threads" of inference at 20 Tok/s.
Not only that, but there are also LLMs trained only on commons data.
Yeah, LLMs are indeed really good for this use case.
> That said. I would love to see a middle tier pricing which had some features but avoided the AI use.
Only paid features are AI features. Everything else is free and no ads :)
You can type anything and as much as you want, you have access to all the advanced stats, you can create a custom theme from a photo of your keyboard, etc.
Everything but AI features is free right now. (Might change in future as we’re adding a lot more features so we will definitely consider a mid tier price )
Dunno about the trigrams though, mostly it's on the "token group" level for me - either the upcoming lookahead feels familiar or it doesn't, and I don't much get bothered by the specific letters as much as "oh I don't have muscle memory on that word, and it's sadly nestled between two easy words, so it's going to be a patchy bit of alternating speed".
> Dunno about the trigrams though, mostly it's on the "token group" level for me - either the upcoming lookahead feels familiar or it doesn't, and I don't much get bothered by the specific letters as much as "oh I don't have muscle memory on that word, and it's sadly nestled between two easy words, so it's going to be a patchy bit of alternating speed".
Could you elaborate a bit on this part - not sure I fully follow.
The trigrams/bigrams is mostly to help the user discover if there are some patterns that really slow them down or have a lot of mistakes. This is something I wanted that I didn’t see in any other apps.
This is also what we use under the hood for SmartPractice weak-point identification. We look at which character sequences are most relevant (for example, the "ta" sequence is way more common than "za") and which the user struggles with the most. This is just one of the weak points we track in the user weakness profile.
> One piece of feedback and a gripe I have with a lot of these is that missed or extra characters throw off the entire next sequence and essentially require backing up to deal with them, as opposed to wrong characters which are fine to just be mistakes you move on from. It'd be great to have some detection for when the user is continuing that re-aligns their string.
Thank you for the feedback! I’m not entirely sure I can visualize exactly what you mean by this:
> It'd be great to have some detection for when the user is continuing that re-aligns their string.
Could you give an example of this?
I'm curious because I’ve been exploring alternative and unique UI ideas for typing practice, so this could lead me in a new direction.
> according to its archive...
Let's say I mistype and don't double the first "c", but otherwise type entirely correctly.
> acording to its archive...
This would be counted as having everything wrong except the first 2 characters, which doesn't feel like a good reflection of my accuracy.
I know this is a hard problem because I don't think there's any simple guaranteed way to re-align the string to account for a possible deletion or insertion, particularly if there are more mistakes in the following text, but finding and using some sort of accuracy-maximizing alignment would be great to have.
Yes - this is a very, very good point and something I actually spent so much time analyzing how I could implement a solution to this.
I think I spent over a week at one point. I refer to this issue as an off-by-one or off-by-two inaccuracy. Just as in the example you provided, the user only misses one character but types the rest of the word correctly (even though the whole word is not actually mistyped, it gets counted as if it were).
This is indeed a very hard problem and in addition to the example you provided there are many other cases where this type of off-by one (or off-by two) mistyping can occur. At this time, I've put that problem on hold. I tried a few solutions but my friends said the UI was too confusing - the general initial feedback I received is to just keep typing as natural as possible; no stopping a user when they make a mistake, no guard-rails of any kind. Just mimic real typing as much as possible.
Issue is, it's one thing to implement the solution to this but another is how to correctly display this to the user. In essence, the text is just a collection with each character having an index. Per each character we measure everything; milliseconds taken to type, errors made for that character, whether it was corrected or not, etc. But if we're handling off-by one or off-by two, displaying this to the user in a non-confusing way is really hard. UX is hard haha
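For what it's worth, one cheap way to attack the off-by-one case is a sequence alignment rather than an index-by-index comparison; a minimal sketch using Python's difflib (just one possible approach, not what TypeQuicker actually does):

```
from difflib import SequenceMatcher

target = "according to its archive"
typed  = "acording to its archive"    # one missing "c", everything else correct

matcher = SequenceMatcher(None, target, typed, autojunk=False)
matched = sum(i2 - i1 for op, i1, i2, j1, j2 in matcher.get_opcodes() if op == "equal")
print(f"aligned accuracy: {matched / len(target):.0%}")   # ~96%, instead of ~8% from naive comparison
```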
1. Typing uppercase characters counts as a mistake
I'm not sure how that got to be the case, but somehow typing an uppercase letter instead of the lowercase is a mistake, despite the fact that sentences start with a lowercase letter. This conflicts with my muscle memory of starting sentences with a capital letter.
2. WPM is not a useful metric on its own
WPM can rise and fall depending on the length of the word. The bigger the word the less likely you are to type that word correctly from muscle memory, so the speed drops. The speed also drops due to the word being longer. I believe having both metrics would yield more useful data, such as when do you slow down etc.
Speaking of which, there are some more statistic things that could help, like measuring how fast you are at fixing the mistakes, or measuring three-letter combinations instead of two-letter combinations, because the context of the third letter might help, but you do need more data to gain a statistically significant result. Maybe trying to classify mistakes by the side of keyboard they happen on -- i.e. are they simple typos or a miscoordination of your hands.
---
Also, as pointed out by another commenter, hands also threw me off. I've been observing them and it's interesting that I don't use my little finger for the left row -- it's used in case I need to press shift.
> 1. Typing uppercase characters counts as a mistake. I'm not sure how that got to be the case, but somehow typing an uppercase letter instead of the lowercase is a mistake, despite the fact that sentences start with a lowercase letter. This conflicts with my muscle memory of starting sentences with a capital letter.
So if you click on the topics (or whatever mode you're on), you will see the Options menu on the side. Capitalization is off by default but you can flip that back if you prefer capitalization. I've had folks request that capitalization be off by default hence the current state but I might change the default settings.
> 2. WPM is not a useful metric on its own
All typing sites generally use the same formula to calculate WPM - the length of the word doesn't matter. Most (pretty much all sites I've tried) use this formula: https://www.speedtypingonline.com/typing-equations. By "all typed entries" it means characters in this case, so it always assumes a length of 5 (the average word length), and that's how it's calculated across all typing sites.
We have VERY detailed metrics. I may add a CPM toggle (toggle between both) but it seems most people prefer WPM as that's what they're used to on other sites.
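If I'm reading the linked formula right, it boils down to treating every five typed characters as one word; a tiny sketch:

```
def gross_wpm(chars_typed: int, seconds: float) -> float:
    # every 5 typed characters count as one "word", regardless of actual word length
    return (chars_typed / 5) / (seconds / 60)

def net_wpm(chars_typed: int, uncorrected_errors: int, seconds: float) -> float:
    return gross_wpm(chars_typed, seconds) - uncorrected_errors / (seconds / 60)

print(gross_wpm(300, 60))       # 300 characters in one minute -> 60 WPM
print(net_wpm(300, 3, 60))      # 3 uncorrected errors -> 57 WPM
```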
> measuring three-letter combinations instead of two-letter combinations,
We measure both - see trigrams tab in the stats section.
> Also, as pointed out by another commenter, hands also threw me off. I've been observing them and it's interesting that I don't use my little finger for the left row -- it's used in case I need to press shift.
The hands are mostly there for folks learning correct touch typing practice - it's based on the most recommended general guidance for touch typing. It can be toggled off with the hand-icon button :)
Feel free to download here:
https://apps.apple.com/app/learnmathstoday/id6740993744
https://play.google.com/store/apps/details?id=com.learnmaths...
The goal is to make the code better organized, easier to read, maintain and extend.
The goal is to have a full featured editor with tree-sitter and LSP support which source code you can read through in one evening.
Love how it's going so far, I'm trying to keep it both minimal and easily extendable.
I have created a real-time media mixing mobile app that helps set up a TV-grade live channel on YouTube/Facebook/Twitch/Instagram.
Our product scales from individuals to institutions, from a camera in a mobile to a network of cameras, and from indoor to outdoor sports and events.
Details: https://www.cheerarena.com/
Realtime mixing studio - https://play.google.com/store/apps/details?id=com.cheerarena...
I'm building Mochi, a small programming language with a custom VM and a focus on querying structured data (CSV, JSON, and eventually graph) in a unified and lightweight way.
It started as an experiment in writing LINQ-style queries over real datasets and grew into a full language with:
- declarative queries built into the language
- a register-based VM designed for analysis and optimization
- an intermediate representation with liveness analysis, constant folding, and dead code elimination
- static type inference, inline tests, and golden snapshot support
Example:
type Person {
  name: string
  age: int
}

let people = load "people.yaml" as Person

let adults = from p in people
  where p.age >= 18
  select { name: p.name, age: p.age }

for a in adults {
  print(a.name, "is", a.age)
}

save adults to "adults.json"
The long-term goal is to make a small, expressive language for data pipelines, querying, and agent logic, without reaching for Python, SQL, and a half-dozen libraries. Happy to chat if you're into VMs, query engines, or DSLs.
This is exactly the kind of thing I've had in mind as one of the offshoots for PRQL for processing data beyond just generating SQL.
I'd love to chat some time.
Reading through the Terms of Service on websites is a pain. Most users skip reading them and click accept. The risk is that they enter into a legally binding contract with a corporation without any idea what they are getting themselves into.
How it started: I read news about Disney blocking a wrongful death lawsuit, since the victim had agreed to an arbitration clause when they signed up for a Disney+ trial.
I started looking into available options for services that can mitigate this and found the amazing https://tosdr.org/en project.
That project relies on the work of volunteers who have been diligently reading the TOS and providing information in understandable terms.
Light bulb moment: LLMs are good at reading and summarizing text. Why not use LLMs for the same? That's when I started building tosreview.org. I am also sending it to the bolt.new hackathon.
Existing features:
- Input for user-entered URLs or text
- Translation available for 30+ languages
Planned features:
- Chrome/Firefox extension
- Structured extraction of key information (arbitration enforced, jurisdiction enforced, etc.)
Let me know if you have any feedback
How does your product do in the age of AI?
I could imagine this could be sold to a whatever-legal-tech company, or maybe to a compliance company or similar.
The ADHD-friendly AI personal assistant for notes, email, and calendar.
Where you can just chat to search notes, manage emails, and schedule tasks. It proactively plans your day every morning and checks in to help you stay on top of everything.
More specifically, I'm trying to use pitch (F0) to dynamically adjust the theta parameter in rotary positional embeddings, so the frequency of the positional encoding reflects the underlying pitch contour of the speech and instead of using a fixed unit circle (radius=1.0) for complex rotations, I'm trying to work out how to use variable radii derived from the pitch. The idea is to create acoustically-weighted positional encodings, where the position reflects the acoustic salience in the original audio. https://github.com/sine2pi/asr_model
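A rough sketch of what that could look like (my own paraphrase in PyTorch, not the repo's code; the normalisation, alpha, and radius formula are placeholder assumptions):

```
import torch

def pitch_conditioned_rope(x, f0, base_theta=10000.0, alpha=0.1):
    """Rotate features with RoPE whose per-frame angle and radius follow the F0 contour.

    x:  (seq_len, dim) hidden states, dim must be even
    f0: (seq_len,) pitch in Hz (0 where unvoiced)
    """
    seq_len, dim = x.shape
    half = dim // 2

    # Standard RoPE inverse frequencies
    inv_freq = 1.0 / (base_theta ** (torch.arange(half, dtype=torch.float32) / half))

    # Modulate the rotation speed by the normalised pitch contour
    f0_norm = f0 / (f0.mean() + 1e-6)
    pos = torch.arange(seq_len, dtype=torch.float32)
    angles = torch.outer(pos * f0_norm, inv_freq)            # (seq_len, half)

    # Variable radius instead of the unit circle: salient (high-F0) frames get radius > 1
    radius = (1.0 + alpha * (f0_norm - 1.0)).unsqueeze(-1)   # (seq_len, 1)

    rot = radius * torch.polar(torch.ones_like(angles), angles)
    x_complex = torch.view_as_complex(x.float().reshape(seq_len, half, 2))
    return torch.view_as_real(x_complex * rot).reshape(seq_len, dim)

x = torch.randn(100, 64)          # 100 frames of 64-dim features
f0 = torch.full((100,), 220.0)    # flat 220 Hz contour for the example
out = pitch_conditioned_rope(x, f0)
```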
My film got screened at the Academy Award-qualifying Bali International Film Festival and the Marina Del Rey Film Festival in the past month. It will be screening next month in New York City at the Asian American International Film Festival.
Otherwise, I'll let you know once it's widely available.
- Webhook-triggered: when a document is updated, some CMSes/tools provide webhook triggering capability, which you can use to reindex that page
- Time-based triggers: you can set a time like a cron, and the document will be scanned at that time and reindexed if something has changed
Happy to answer more questions.
I am working on the world's first end-to-end Database Migration tool, supporting Oracle to PostgreSQL and MSSQL to PostgreSQL database migrations with AI for Schema Migrations. Until now, people used different tools for Schema Migration and Data Migration/Replication. During this process, we ended up building a data migration and replication tool supporting any databases between Oracle, SQL Server (MSSQL) and PostgreSQL databases.
My own action MMORPG (think Mordhau meets Cyberpunk meets Arma 3). It's the perfect application of everything I already know as a platform engineer, and I get to learn all the things I don't. I'm making the client foss, the assets foss, and the gameplay compelling as all getout. Non-sharded, persistent world, with different lands for different real world regions. It's a type of metaverse in truth, but some of that part I have to flesh out better on the local client side where you can do whatever, but on the server there is a storyline.
I almost applied to YC because I'm at the stage where I'm close to public alpha and need funding, but instead I'm planning on crowdfunding, and the release strategy has to be tops. I'm also doing things like planning how to scale the business itself, lots of work on the over-time growth profit model, etc. So basically, instead of a thousand side projects, I have one giant project where I get to do everything with my own theorycrafting - after years of being stuck doing whatever the boards/c-suite needs, it's a taste of freedom and a dream.
Been working on it since 2013...
It has gone through several iterations over the last year. It was initially focused on file compression & editing but I have added video & image enhancement, background removal, smart video trim, video subtitles generation, dubbing, watermark removal, cropping, resizing, etc.
I'm continuing to fine-tune the performance while enhancing my UI skills to polish the studios. I built a desktop version but have currently only released it for Linux (it's in beta); I plan to hopefully make the desktop version free.
I'm currently working with a few clients and using their feedback as guidance. Let me know your feedback if you use it.
After spending many years on the VC/startup track I found myself being pulled towards doing something more inline with my faith. As an engineer I felt like this is the best way I could contribute my skills.
- Create your own PDF editor with a custom UI, with the help of public methods exposed by the web component.
- You can add dynamic variables/data to the templates. What this means is you create one template, for example, a certificate template with name and date as variables and all you have to do is upload your CSV / JSON of names and dates, and it will generate the dynamic PDFs for you.
- It's framework-agnostic. You can use this library in any front-end framework.
It's still in early development, and I would love to connect with people who have some use cases around it.
I have integrated this library in one of our projects, Formester. You can see the details here https://formester.com/features/pdf-editor/
I have posted this demo video for reference https://www.youtube.com/watch?v=jorWjTOMjfs
Note: Right now it has very limited capabilities like only adding text and image elements. Will be adding more features going forward.
I am using Cerebras for book translations, verb extraction, and all LLM-related tasks. For TTS I am using Cartesia. I have played around with ElevenLabs; their TTS sounds slightly more natural, but their pricing is too steep for this project - books would cost a couple of hundred euros to process.
1. Are you targeting startups or enterprises?
2. Do you foresee savings in the range of millions with this approach?
3. What if the CI/CD pipeline takes more than x minutes? Should the laptop stay on and connected to the network during that time?
4. In an enterprise, a typical CI/CD pipeline is connected to other dependent services - e.g. security pipelines (even third-party ones). Does every developer now need to onboard to those services?
When I was in college I really hated searching through all the Excel and Google Docs menus to add a trendline, change colors, gridlines, etc. (and sadly I didn't have the agency to learn matplotlib or seaborn). I figure others might hate this too, and it would be so cool to have csv + prompt -> exportable SVG chart.
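The rendering half is only a few lines of pandas/matplotlib; the interesting part is letting the prompt pick the chart type and styling. A minimal sketch of the "csv -> exportable SVG" end, with made-up file and column names:

```python
# Minimal "csv -> SVG chart" sketch; a prompt->styling layer would sit on top of this.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")            # hypothetical file with numeric columns "x" and "y"
fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(df["x"], df["y"], marker="o", label="data")

# The kind of options a prompt would toggle: trendline, gridlines, colors.
slope, intercept = np.polyfit(df["x"], df["y"], 1)
ax.plot(df["x"], slope * df["x"] + intercept, linestyle="--", label="trendline")
ax.grid(True, alpha=0.3)
ax.legend()

fig.savefig("chart.svg", format="svg")  # exportable SVG
```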
I'm trying to see if I can "get away with it": no schema migration, no fixed views, one tenant per DB, local-first-friendliness.
The general approach is "Datomic meets XTDB meets redplanetlabs/Rama meets Local First". Conceptually, the lynchpin "WORLD FACTs" table looks like this:
| tx_id | valid_id | tx_t | valid_t | origin_t | entity | attribute | value | assert | namespace | user | role |
|--------+----------+---------+---------+----------+--------+-----------+-------+--------+---------------+------+------|
| uuidv7 | uuidv7 | unix ms | unix ms | uuid7 | adi | problems | sql | 1 | org.evalapply | adi | boss |
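Not my actual implementation or stack, but squinting at that row, the single facts table translates almost directly into, say, SQLite; a hedged Python sketch (uuid4 stands in for the uuidv7s the design calls for):

```python
# Hedged sketch: the "WORLD FACTS" table as one SQLite table (illustrative only).
import sqlite3
import time
import uuid

conn = sqlite3.connect("world.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS facts (
    tx_id      TEXT,     -- uuidv7 in the real design
    valid_id   TEXT,     -- uuidv7
    tx_t       INTEGER,  -- unix ms, transaction time
    valid_t    INTEGER,  -- unix ms, valid time
    origin_t   TEXT,     -- uuidv7 of the originating event
    entity     TEXT,
    attribute  TEXT,
    value      TEXT,
    assert     INTEGER,  -- 1 = assert, 0 = retract
    namespace  TEXT,
    user       TEXT,
    role       TEXT
)""")

now_ms = int(time.time() * 1000)
conn.execute(
    "INSERT INTO facts VALUES (?,?,?,?,?,?,?,?,?,?,?,?)",
    (str(uuid.uuid4()), str(uuid.uuid4()), now_ms, now_ms, str(uuid.uuid4()),
     "adi", "problems", "sql", 1, "org.evalapply", "adi", "boss"),
)
conn.commit()
```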
2. "Writing for Nerds"A workshop I've been experimenting with, using willing friends as guinea pigs. To help people remove friction from being able to "spool brain to disk". The sales-y part is here, with more context / explanation about what it is about and what it is not about: https://www.evalapply.org/index.html#writing-for-nerds
Examples wiki: https://github.com/scottvr/pbngen/wiki
The code: https://github.com/scottvr/pbngen
Right now it's basically a diagramming app specifically for the domain of problem-solving. I think an issue with it is that it's too hard for new users, so I've spent the last few weeks UX designing a view (figma prototype[3]) that I think is more intuitive to use (though sacrifices some features).
I'm currently working on code design for this view and am hoping to implement in the next few weeks!
[2] https://github.com/amelioro/ameliorate
[3] https://www.figma.com/proto/psTRolY8LTVOef3fkCJ0B4/Simplifie...
It lets you define
add x1, x2, w3, sxth 2
add x2, x3, x4, lsr 8
as
...
add(X1, X2, X3).extend(ExtendMode::SXTH, 2), // yes, it is X3, not W3.
add(X2, X3, X4).shift(ShiftMode::LSR, 8),
...
Still haven't published the repo as I can't pick a cool name...
Also, organising specific topics for each month up to 2026.
Building tools to improve the developer experience, especially around Git and CI/CD. Currently working on an improved CodeOwners for GitHub. The CLI is already complete and open source: https://github.com/CodeInputCorp/cli
Peekly pulls from high-quality sources using LLM + retrieval, then sends you a regular digest with just the most relevant content according to your interests. You can even give it custom prompts to control what it finds and how it summarizes — super useful if you want a particular angle on a topic.
YC folks can use code YC256 for an extra free month (on top of the 14-day trial). Would love to hear what you think!
I have a Steam Deck myself and the game consistently runs at 90fps. The game has full controller support, so it is very comfortable to play on Deck.
If you like PoE, you should feel right at home!
- format BigQuery SQL queries better (in my opinion). It supports configuration for maximum line length and standardized casing for SQL keywords and built-in functions (upper or lower case). The BigQuery UI does support formatting, but the output doesn't look as "eye-catching" as I want.
- auto-convert between standard SQL syntax and pipe syntax in BigQuery. Most queries work, but some aren't supported yet - the only unsupported case I've seen is a query with a star expression in a GROUP BY, since handling it requires knowing the table's underlying columns (though I haven't yet seen anyone write that kind of GROUP BY at work).
- bring all nested CTEs out to the top level of the query. This helps because, for example, BigQuery doesn't allow nested CTEs inside a recursive query. (Recursive CTEs are handy if you have a CTE that is referenced multiple times - you can use one to materialize that CTE so it is calculated only once.)
All of this is done with the help of the ZetaSQL library. The code is done, but I haven't had time to create a simple UI for it yet :)
It's a simple (currently macOS) application which aims to target shoulder surfing by using a locally running neural network to detect those looking at your screen.
Besides that, I have started two series on my blog, T4P and GenAI, writing about algo trading and GenAI stuff (https://blog.adnansiddiqi.me/)
PS: If anyone has any interesting ideas, then do ping me
The demo uses Postgres compiled to WASM, so it runs on an actual Postgres DB.
Conjtest is a policy-as-code CLI tool that lets you run tests against common configuration file formats using Clojure. You can write policies as Clojure functions or declarative schemas against formats such as YAML, JSON, HCL, and many others (full list in the repo).
Under the hood, it uses Babashka and SCI (Small Clojure Interpreter) to run the policies and Conftest/Go parsers for compatibility with Conftest (https://www.conftest.dev/). It’s also possible to bring your own parser or reporting engine using Babashka scripting.
The initial big pieces are in place now; I'm spending the rest of the year talking about Conjtest and gathering feedback and issues to work on.
The MQTT routing backend is fully automatic. If two nodes are connected to the same MQTT server, or within range of gateways that are, they communicate.
The web client communicates directly with MQTT, meaning you can chat and set registers on devices without having hardware.
A bit of background: I have been working on a raga classifier since November of last year - I started with just 2 ragas and a couple of megabytes of audio. After experimenting with a lot of different ideas and neural net architectures, I finally landed on one that could scale. I increased to 4 ragas, then 12, then 25, and then 65.
All the training is done locally on my desktop (RTX4080, AMD 7950X, 64G RAM). My goal is to make an app for fast inferencing (preferably CPU) and to get this app in the hands of enthusiasts so that I can get some real data on its efficacy. If that goal is hit, then my plan is to iterate and keep increasing the raga count on the model and eventually release to the public. As long as I can get the model to either run locally or for very cheap on server, I hope to not charge for this.
It has been an amazing learning experience. The first time I got a carnatic singer to sing and the model nailed almost all ragas was the highest high I've felt in a while.
After some 12+ years of collecting microservices platform ideas in my head and implementing them at various companies and open-source projects, I decided to create a system that contains all of them.
1Backend is the result. Mostly built it for myself so I have a foundation to build projects on but I'd love if others would also use it!
More specifically, I have worked on the demo https://github.com/AitoDotAI/aito-demo to make use cases visual and well described. E.g. smart search use case is here https://github.com/AitoDotAI/aito-demo/blob/main/docs/use-ca...
Claude Code is doing absolute wonders for setting things up. One just has to watch out for hallucinations and made-up stuff in any written content.
I wrote the articles/exercises/projects a few years ago, but now I've made interactive coding and quiz widgets, using Pyodide, Lit, web workers, etc. All open source: https://github.com/pamelafox/proficient-python
A few weeks away from launching the MVP.
Also working on an email communication assistant https://merel.ai creates draft responses for gmail and outlook based on your company data, email history, website content and extensive organisation settings. Still work in progress as well.
Meshtastic is fun!
I built it to help save time for folks building internal enterprise apps
Gonna focus on marketing and improving the app.
USBSID-Pico is an RPi Pico (RP2040/W, RP2350/W) based board for interfacing one or two MOS SID chips and/or hardware SID emulators over (Web)USB with your computer, phone, ASID-supporting player, or USB MIDI controller.
More info at https://github.com/LouDnl/USBSID-Pico
Well done. This is really cool.
If you're curious, you can see it here (needs WebGL2 + Wasm):
- The idea: https://carlnewton.github.io/posts/location-based-social-net...
- A build update and plan: https://carlnewton.github.io/posts/building-habitat/
- The repository: https://github.com/carlnewton/habitat
- The project board: https://github.com/users/carlnewton/projects/2
GitHub: https://github.com/safedep/vet
- Each plugin runs in its own WASM VM.
- Explicit network/fs access. No network or file access by default
- Can limit cpu/resources
The repo: https://github.com/tuananh/hyper-mcp
[1] - https://rikverse2020.rikweb.org.uk/
[2] - https://rikverse2020.rikweb.org.uk/poem/economic-migration/
Made me smile - great line.
[0] https://store.steampowered.com/app/3627290/Botnet_of_Ares/
What language have you been using for the game logic? Straight up GDscript or are you using a different language binding?
I do plan on open sourcing more of the code over time. I also have started working on other sites using the same algorithm implementation (music, movies, video games)
This has just been a side project over the year generating passive income. I get around 250,000 page views a day, and with ads, memberships, and affiliate links I make around $2,500~ a month.
Tech stack is ruby on rails 8, postgresql 17, opensearch, redis, bootstrap 5.3 hosting on 3 servers on linode.
A couple questions:
* Is this primarily intended for discovering new reads, or for people who've already read the books to debate which is greatest? I found the book descriptions sometimes give away too much, to the point where I stopped reading them for any book I might be interested in reading for pleasure. Examples include The Great Gatsby and Madame Bovary. Perhaps you could have a concise description that stays far away from plot points, and a more expanded description behind a "more" link.
* What dictates whether a series has one place on the list or separate places? Narnia has one for the whole series but Harry Potter has individual listings per book.
* Are ratings and reviews from your own site taken into account in the rankings?
- Series have always been a problem. Some book lists include the entire series, while others list individual books. If the series is sold as a single volume, I'll often just include that, like The Lord of the Rings. Sometimes I include only the first book of a series on a list, to avoid adding every single book whenever a list mentions the "Harry Potter series".
Basically, I don't have a perfect way of handling series.
for the last point, kind of. If you add a book to the default "My Favorite Books" user list, it gets aggregated and used for this book list which is included in the rankings. https://thegreatestbooks.org/lists/463
We built this together at a previous organisation and moved all the internal and external services at that organisation to this system (It allowed the org to satisfy the ISO27001 requirements).
After being in operation for a couple of years, we have collected a lot of insights and feedback on what to change/improve for the open source version.
This summer I’m setting aside some time to work on making those changes for an open source version of what we call “Vanir”.
(Seems like good timing with the initiatives in EU to take back some ownership of the cloud stack).
No LLM or AI magic. Just simple state machines, extendable configuration, and a lovely GUI (web-based, no JavaScript).
The tech stack is python3, postgresql, ansible, and django.
It's built using Nuxt because I've never really played with Vue before, and it seemingly comes with everything I need for a static, markdown-powered blog. I guess what's been stopping me is worrying too much about "When is it good enough to be online?" and "What should the first post be?". But I'm trying to shed the perfectionism by just putting it out there and posting something. I think I'll reflect on this in the first post.
Obviously the main thing is getting the listings data, which as far as I know (mostly) isn't readily available any way other than scraping the cinemas' websites, so I set that up as a separate-ish project[1]
Hope you didn't start on it!
(By the way, it wasn't too easy to scrape in the end…)
More PRs very welcome if you're in the mood!
I'll add the Peckhamplex now.
Time Out should have done that long ago, but instead they stopped their print edition.
I'm trying to make i18n easier, integrate it better with CI/CD, and automate it more with LLMs (in Go for now; TypeScript is the second priority, with other languages later).
For this I had to develop a completely new approach, and subsequently a specification for the "textual internationalization key" (TIK), which is programmatically translatable to ICU MessageFormat.
Toki is the first TIK processor implementation for Go.
I'm currently close to the public release. After that, I want to learn some ML techniques to predict Pieter Levels' Hoodmaps classifications from my publicly sourced data. It would be cool to have accurate automatic predictions of the places-to-be for every city.
Wanted workflow orchestration without infrastructure: store the workflow JSON/YAML in a database/S3/CDN/whatever and execute it on Cloudflare Workers, in the browser, etc.
The magical part about the serverless workflow spec: native JSONSchema support for inputs/outputs at both workflow and task level. This creates composable, Lego-like tools for AI agents - each tool is just a workflow reference that can be fetched on the fly.
Working on final cleanup before publishing.
- Reimagined Feed: Ditched the traditional noise—see the prompt and model behind each creation, get inspired, and check out top curated models’ performance. No more digging to figure out how the magic happens.
- Template Remixing: Creators can drop reusable templates for others to remix and build on, speeding up that creative flow.
- Curated Models: Handpicked the best for images, videos, audio, and text—think Costco quality, no endless searching or tweaking needed.
- Infinite Canvas: Reworked the workflow with an upcoming infinite workspace where creators can prompt, drag, and drop to mash up content across media.
- Built for Non-tech Creatives: Driven by our AGI-for-The-Rest-of-Us mission, it’s tailored for non-technical creators to turn imagination into reality.
- Flexible Pricing: No wasted credits—top up for up to 30% extra, never expire, plus member discounts on curated top-tier voice, image, video, and text models.
Happy to chat if you’re also into vibe coding, building consumer AI.
a Slack and Discord app to help take turns (i.e. queue) with your teammates, overwhelmingly used for sharing tech resources like staging servers. It's crazy that something that started so tiny (almost as a joke for my old workplace) has grown into my main "thing".
an infrastructure configuration monitoring solution for Terraform/OpenTofu-managed stacks. I'm unsure how to proceed with this, tbh. It's sort of the underdog in this space - it's much cheaper than the competitors - but it has yet to make a dent.
(I am maybe prowling around for something new to build)
The main use cases I'm thinking of right now are triggering agents by email and a very simple document upload flow for any SaaS (just forward an email to the SaaS).
This week, I'll set up a Hugo blog with the Ed theme - love it, it looks like exactly what I'm looking for, and as a former LaTeX enthusiast I find it pretty close. It's readable and minimalist. I'll need to customize the theme, though. I plan to publish blog posts about anything I find interesting.
https://gohugo-theme-ed.netlify.app/
In parallel to this work, I'm setting up a simple system to keep my website + subdomains easy to build, rebuild, and deploy with Caddy on a cheap Scaleway compute server. In the past, I had some ideas I wanted to publish, but the system I went with made managing the sites dreadful.
Once that's ready, I'm back to learning Rust and crypto. It's fun, interesting, challenging, remote-friendly, and the salaries are usually 30-50% better. My current tech stack feels like a dead end: it has a low ceiling in terms of salary, the projects are generally not very interesting (I'm grateful for my current project, it's the best there is with this technology), and I believe the technology will see a slow and steady decline.
Apart from work, I'm building the playground for my 2 yo son, and planting blueberries, he loves them.
I don't see many opportunities that pay well, are interesting, and available for remote. I'm happy at my current position, but if they were to ever "right-size" the team, I'd be fckd, so I spend my nights learning other stuff.
I started Flutter in 2018; back then it felt "magical" for mobile development, but now all the competitors have caught up. They also (IMO) waste their time reimplementing Flash on the web, which is horrible for 99% of the cases. The community is also off-putting: point out an obvious flaw and 10 GDEs come at you like you're a POS.
In general, mobile has a lower ceiling than backend, frontend, systems, etc... Mobile is also usually a lower priority for the business than web.
Perhaps a first blog entry would be to show and tell how you setup the blog with Hugo+Ed on your domain in the first place.
As someone who is being told that they need to increase their non anonymous footprint online, I certainly would be interested in reading it.
Long story short: sign up for Scaleway, get your account approved, and launch an instance - they have affordable "learning" instances that still feel "real" and can later run real services that need a backend. I don't expect a lot of traffic and I don't care if my stuff goes down from time to time; it's for fun. Set up SSH. Buy a domain and point its DNS records at your instance. Run Caddy on the server to serve a dummy HTML file. Set up HTTPS. Verify you see your stuff in the browser. Now create an actual site: install Hugo, pick a theme, install it locally, and build locally. Set up a script that copies the build folder onto your server where Caddy is serving, then restart Caddy. Write some content, check the limits of the theme and your setup, and make sure everything works correctly. Even with the best of themes you'll want to fix or change something; do that, and if it looks good and you still have energy to work on your blog, start writing posts and let the world know.
Now I've figured out I want to go all in on actually learning Rust and doing the deep dive into crypto. Enjoy the trip.
As I have a web+mobile background, I'll probably start with some simple mobile or web apps, a wallet, price alerts, seed phrase gen, ens explorer, etc, basically anything that's crypto / defi / blockchain adjacent to understand the field better and ease into it.
Then, I'll also build stuff from the ground up (build your own blockchain, smart contracts, etc) so that I have a deeper understanding of the basics, not just "hand-wavy" ideas like "freedom, sovereignty, decentralized, store of value, trustless, permissionless", etc.
In parallel, I also plan to do non-crypto stuff to practice Rust and to have an escape route to web Rust in case I don't like crypto all that much or can't get a job right away due to a lack of Rust + crypto experience.
Then, I hope, as I have a better understanding of the field, I'll have more interesting project ideas, too.
We're building a chat app that automatically creates and manages your to-do list right from your conversations.
I started this for a simple reason: I was tired of the soul-crushing 'copy-paste' work of moving decisions from Slack over to Notion or Jira. So much context gets lost in that process, and it just creates more "work about work."
Our core idea is simple: a chat message and a to-do item shouldn't be two separate things you have to keep in sync. In Markhub, the conversation is the task. It's not a copy; the conversation itself becomes the to-do, and all the context is automatically preserved.
Our bigger vision is to do for collaboration what GitHub did for Git. We’re not reinventing chat or kanban boards; we’re building the seamless 'workflow layer' on top that finally makes them work together.
We're currently in a private beta and would love to hear from HN users who feel this same pain. We’ve been fortunate to get early traction with large enterprise clients (including a ~$200k on-premise deal), but now we're looking for feedback from smaller, agile teams.
Any and all feedback is a gift. Thanks!
- A home-rolled router/firewall: Using yocto to create a distribution for a router/firewall for my home network. It started as an exercise in wanting to have more control over the security of my home network, as well as see how nice of a UI/UX I can tease out of an LLM. It's also part of a (seemingly never ending) consolidation of homelab services.
- A SNES Reverse Engineering setup: A nephew of mine is getting into video games and is starting with a SNES, but his system broke. I'm helping repair the console, and I'm also trying to set up an effective "LLM + Ghidra + SNES emulator + image generation AI + Aseprite plugin" pipeline so he can swap sprites and text in games, adding some creativity and learning to the experience.
- A personal assistant system: Experimenting with agents to create a personal assistant for our house, and seeing to what extent the agents can be helpful and how much hardware is required to run something like that in-house.
- aztui: A TUI for exploring and interacting with Azure resources. I'd like to add some caching/pre-fetching logic to make the interaction with the interface snappier (one of the main motivators to create it).
I've been using GPT pretty heavily throughout, and it has been a lot of fun both using it, and spending some dedicated time looking at the models themselves along with the frameworks that support running and integrating them.
As a data engineer, I regularly have to dig through massive files to debug issues or validate assumptions — things like missing column values, abnormally large timestamps, inconsistent types, or duplicate records. It’s tedious and time-consuming, and that’s what led me to build this.
ZenQuery makes it quick and easy to explore data locally, without needing to spin up notebooks, write scripts, or upload anything to the cloud. It’s also useful for doing lightweight analytical QA if you're working with business data.
Happy to answer any questions.
Will update.. thanks again..
Will think on implementing this correctly since this will also need SSO integration for auth along with auditing and rbac controls.
Thanks for the suggestion :)
I’m the founder :) Happy to help!
But could you please add Google Drive connection support? Our company mainly uses Google Drive for all collaboration, so a direct integration would really help. As of now I first have to download the files (they are small files, but still).
Great product otherwise. Best wishes..
Regarding gdrive integration, it's already in my todo. Have received the same request from one other person.
Will bump up this feature's priority.
Thanks..
Nothing to show yet, still in development, I hope I can share a github link in one or two months.
AppStore: https://apps.shopify.com/bundlejoy
This is starting to overlap with building a tool server for personal AI agents.
It helps you track, understand, and plan your personal finances — with a proper accounting foundation.
It's interesting in many ways: using double-entry (it's a perspective shift), the technical challenges of building a local-first app, UI/UX and visualizations, privacy, and more.
> A: Your financial data is stored locally on your device. ...
Good stuff! This was the first thing I checked, and it means I am now reading more about the app. Really nice to see this approach.
I know this is still WIP, but is feedback OK? The plan buttons say "Get starterd", which is a funny typo :) Also, I wasn't sure: is this a web app or a local app? For local data, I would strongly prefer an actual local app. Some screenshots of how it looks on multiple devices (directly comparable, as in the same view and same data on iOS/Mac) would be great. Finally, do you have bank links? _The_ killer feature I want in a personal finance app - and you'd be surprised how many make this really difficult - is tracking my actual income and spending.
I signed up for your newsletter. Rare for me to do. Looking forward to hearing more!
With some friends, I've been building "Mo Money": a Duolingo-style app to teach investing through a gamified mix of microlearning and a real-time trading simulator (but it's a game). It's meant to be built as much for total beginners as it is for amateurs.
Sim side: a fully playable trading game backed by historical market data (no real $$). It's now integrated with a FastAPI backend, WebSockets, Firebase (soon), and an XP system to track skill growth and gamify progress.
Learning side: a clean microlearning stack that teaches financial literacy in snack-sized bits, a lot like Duolingo - interactive, level-based, with accurate and relevant information in the most digestible format.
Just added: a lightweight AI tutor for contextual Q&A during lessons. Thinking of adding a little more AI than a chatbot potentially to help learners in the app.
Upcoming: XP-linked achievements, a leaderboard, and a light paywall via Buy Me a Coffee.
We're undergrads building this from scratch, aiming for early users soon. If you’ve ever thought “I wish someone made markets feel learnable,” we’re trying to do just that. super excited
Created a game to learn navigational marks in the Solent https://guess-the-mark.verdient.co.uk/
Putting together the landing page for my software business https://verdient.co.uk/
I’m also putting together an analysis of warhammer 40k games and applying operational research techniques to it.
Tangential: do you have insights into viability of mini automated anti-drone turrets? Something you'd place on a truck or pull out of a trench when needed? We already have drones with shotguns. I guess it's the automatic acquisition and targeting that's the difficult part, but just how difficult is that?
Is the equipment efficiency meant to capture, e.g., using a $1M missile to shoot down a $1k UAV/rocket?
I would consider adding a tutorial or a toy version that's simplified a bit.
Many alternatives (like Doodle) are full of ads, which makes their products unusable. My goal is to try and make the internet a little better place by offering a free version without ads.
Currently rethinking what a scheduling platform should look like in 2025, perhaps with AI integration to ease the planning process.
Going from Manifest to OCI is a bit tricky and performance for calculating total storage based on metadata is hard to get right. But the result is that we own our full registry implementation and can take it any direction we want. Quite happy with that!
Appio lets you add mobile widgets and native push notifications to your web app within minutes—without building or maintaining mobile apps, hiring developers, or dealing with app stores. You can try it at: https://demo.appio.so/
If you’re building a web-based product without a mobile app, or just want to try Appio, I’d love to chat! You can reach me directly via https://my.appio.so/ or drop a comment here.
First we built it as a tool to fix any bug. After talking to a few folks, we realized that was too broad. From my own experience, I know how messy it is within organizations to address accessibility issues: everybody scrambles around at the last minute, nobody loves the work - developers, PMs, TPMs, etc. - and external contractors or auditors are often involved.
Now with Workback, we are hoping to solve the issues using the agentic loop.
If you personally experienced this problem, would love to chat and learn from your experience.
That animated demo (in the 'see it in action' section) looks really impressive. And what are you using to draw the diagrams?
MCP-as-a-Service sits between N8N and the Google Chrome browser, providing a Playwright MCP instance "in the cloud".
The standard fiscal POS app was adapted to support a sort of low-trust swarm of waiters who used the app to collect orders. These orders were then transferred to a few high-trust cashiers by scanning QR codes generated on the waiters' apps.
After receiving payments, the cashiers' apps printed invoices and multiple "order tickets" categorized by "food," "drinks", "sweets"... This allowed waiters to retrieve items and deliver them to customers.
The system was used by around 40 users, with new waiters joining or leaving throughout the event. They used their own phones, and the app functioned without internet or Wi-Fi and degraded gracefully (if a waiter didn't use the app, by choice or due to technical problems, they could manually relay orders to the cashiers). Customers also had the option to approach cashiers directly, receive their order tickets, and pick up items themselves.
This is not that technically interesting, but I liked how the old manual system, the one the main cashier of the 70+ year old village firefighting org had, got digitalized in a non-centralized way. (And I took this chance to try to explain it, as I'll have to anyway, to maybe find more users for it.)
Just curious: How did it work without internet or wifi? Did it do something over bluetooth, NFC, QR code...?
Then the waiter paid the cashier (in advance), got the bill to give to the customer plus the order tickets (printed on a Bluetooth POS printer with a cutter, so they were already separated) to redeem the goods (grouped by the stations that handed them out: food, drinks, ...). The stations took the order tickets and handed over the goods. The waiters delivered them to customers and used the bill to collect cash from the customer.
The waiters could use their own starting money and just stop selling at any point, or get it from the main cashier and return the same amount at the end.
Would love to chat with people looking to combine n8n-ish capabilities in their code!
Struggling to get the generated iterations to be up to a standard I'm happy with at the moment, but improving every day!
* of its ability to store unit system data as code
* unit conversion is an iterative deepening depth first search
* manipulating symbolic arithmetic is so easy
Unfortunately, it requires users to compile SWI-Prolog from source because the library uses some unreleased features. If anyone would like to test it and report some feedback, I would be truly grateful!
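Not the Prolog library itself, but the iterative-deepening search over conversion factors translates fairly directly to other languages; a toy Python sketch with a made-up edge table:

```python
# Toy sketch of unit conversion as iterative-deepening depth-first search.
# The edge table is made up; the real library stores unit-system data as code in Prolog.
EDGES = {
    "inch": {"cm": 2.54},
    "cm": {"m": 0.01, "inch": 1 / 2.54},
    "m": {"cm": 100.0, "km": 0.001},
    "km": {"m": 1000.0},
}

def convert(value, src, dst, max_depth=8):
    def dls(unit, factor, depth, visited):
        if unit == dst:
            return factor
        if depth == 0:
            return None
        for nxt, f in EDGES.get(unit, {}).items():
            if nxt in visited:
                continue
            found = dls(nxt, factor * f, depth - 1, visited | {nxt})
            if found is not None:
                return found
        return None

    for depth in range(1, max_depth + 1):   # iterative deepening
        factor = dls(src, 1.0, depth, {src})
        if factor is not None:
            return value * factor
    raise ValueError(f"no conversion path from {src} to {dst}")

print(convert(12, "inch", "km"))  # ~0.0003048
```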
The key goal is that the creators of 3DGS models can use Blurry as a powerful tool to build the 3D experience that is performant, simple, and aesthetically pleasant for end users (viewers).
3DGS models can be shared via a link or embedded on a website, notion, etc..
Link: https://useblurry.com
It's a web-based notepad calculator, which means it's a notes app but it can evaluate inline calculations like
```
£300 in USD + 20%
09:00 to 18:30 - 45 minutes
```
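That second example is, under the hood, just timedelta arithmetic; a sketch of that kind of inline evaluation (naive parsing, not the app's actual engine):

```python
# Sketch of evaluating "09:00 to 18:30 - 45 minutes" with plain datetime arithmetic.
from datetime import datetime, timedelta

def duration(expr: str) -> timedelta:
    # very naive parse: "<start> to <end> - <n> minutes"
    span, _, minus = expr.partition(" - ")
    start_s, _, end_s = span.partition(" to ")
    fmt = "%H:%M"
    total = datetime.strptime(end_s, fmt) - datetime.strptime(start_s, fmt)
    if minus:
        total -= timedelta(minutes=int(minus.split()[0]))
    return total

print(duration("09:00 to 18:30 - 45 minutes"))  # 8:45:00
```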
I wrote the core of the calculator a few years ago, and I've just launched a big rewrite that supports
* document syncing
* offline editing
* markdown formatting
* PDF and HTML exports
* autocomplete
* vim mode
Happy to hear feedback :)
It doesn't require an LLM or api keys to run so you can install and go. Hope it helps somebody:
npm install -g url-to-markdown-cli-tool
repo: https://github.com/mmdclx/url-to-markdown-cli-tool

I'm trying to build a consolidated database of PFAS-free products that makes it easier for shoppers to find safe foods, cleaners, clothes, and other products families commonly use. The database shows not only the product but also the reason it's considered PFAS-free; sometimes all you have to go on is the brand's word, sometimes there is third-party testing for PFAS, sometimes a material justification. We try to present it all so the consumer can easily decide. Users can search, or browse for products using categories.
The database is here: https://database.pfasfreelife.com/
For instance, the first thing I went into was bedding, but there currently isn't a product listed. And while I don't have a suggestion, it would be cool if another user did.
My son has inherited my love of cooking and baking, so we'll refine the book, add comments and photos, and eventually print and bind copies for our family and friends.
I also am hoping to laser engrave some old cookie sheets with one of my grandma's hand-written recipes. The problem I have is that it's rather faded, and I don't know yet how to make it pop for a good contrast.
I'm working on simplifying the code further. I tried really all of the "productivity" stuff to stay organised. Got angry multiple times, went to pen and paper, which was OK, but I felt I just needed a slight glimpse of tech to make it more functional. Something a little more than a plaintext file, but not much.
https://github.com/ClassicOldSong/refui-hackernews-demo
It started as a demo only but it looks slick so I added standalone PWA to it to be installable as a desktop app. Now browsing HN feels even better!
I've had some breakthroughs with LLM translation, and I can now translate (slowly, unfortunately) at a far far higher quality than Opus, and well above DeepL. So I'm considering offering that as an API, though I don't know how much people actually care about translation quality.
DeepL's customers clearly don't care - their website is all about enterprise features, and they appear to get plenty of business despite their core product being mediocre.
Would people here be interested in that?
- https://github.com/rumca-js/Internet-Places-Database - Database of Internet domains, links
- https://github.com/rumca-js/Django-link-archive - RSS client, web crawler
- https://github.com/rumca-js/crawler-buddy - web scraper, web crawler, with JSON interface
A project is like a pet. You cannot just "stop" caring about it. If it lives, then you have to look after it
Project Website: https://gemlink.app/ Companion extension: https://chromewebstore.google.com/detail/snapreader/pickciba...
We're headed into an era of massive white-collar reskilling.
How you think > What you know.
Critical thinking skills will be the most important skills as AI expands throughout the economy and we're surrounded by LLMs that are highly fluent.
Socratify is a Critical Thinking Coach that sharpens How You Think and Speak by Debating AI
It proposes interesting questions (currently business-related) that you debate in a 2-minute conversation, and you get feedback on how you think and speak.
Right now it's most helpful if you're interviewing for a job or aiming for a promotion in a business-related profession.
Rebranding as https://cronjobs.run since I'll allow more than just JavaScript next week!
News Perspective Gap: compare how major events are reported by local vs international outlets (e.g. Taiwan election coverage in the Taipei Times vs the BBC)
Price Transparency: settle debates like "Are Xiaohongshu prices real?" by checking identical products on Walmart (US) and JD.com (China) simultaneously
Authentic Connections: join discussions on 2channel (Japan) or Reddit (Brazil) without a VPN, preserving the original language and cultural context
Tech approach: country-specific keyword routing (like valentin.app for search), lightweight proxies to bypass geo-blocks (no data storage), and a crowdsourced directory of local portals. Would love feedback from globetrotting hackers!
We're also working on the Premed Super App, same thing for people taking medical school entry exams like the MCAT or MDCAT.
I get to work with a bunch of top notch students and doctors, and I myself am the first ever full-stack technologist who also is a doctor in Pakistan, a country of 250 million people.
Still working on the realtime, memory, and game playing part. If anyone is interested, feel free to join and build.
Some examples:
- A minimal C shell with built-ins like cd, pwd, type: https://gist.github.com/rrampage/5046b60ca2d040bcffb49ee38e8...
- Terminal Snake game which fits in a QR code using Linux syscalls for drawing: https://gist.github.com/rrampage/2a781662645dc2fcba45784eb58...
- HTTP server with sendfile support in ARM64 assembly: https://gist.github.com/rrampage/d31e75647a77badb3586ebae1e4...
I learned to handcraft a static ELF binary using just the GNU assembler (no linker): https://gist.github.com/rrampage/74586d0a0a451f43b546b169d46... . Trying to see if I can write a small assembler in ARM64.
http.S is something I wanted to do by myself; I ended up generating the data in asm and reusing Go for the HTTP server.
Built in Rust (Tauri), Go, and TypeScript, with LiveKit as the WebRTC infra.
My Misterio Docker-based tool is looking for new feature requests... https://github.com/daitangio/misterio/
Also, I am playing a bit with Zulip Chat, which I find quite well done and easy to self-host, considering its complexity: https://github.com/zulip/docker-zulip
Last but not least, I suggest a new Murderbook novel... https://amzn.to/3TMJdlh because there is not only coding!
Thinking about:
How will various forms of human-computer interaction change as many current apps (screen-based UIs with some background code) simply get replaced with chat/voice/gesture-based requests to an LLM?
I must say it has been more challenging than I thought it would be, especially if you are looking to put it into production. I'm doing it for fun though.
Nothing published yet, I'm not sure if it will ever be. What do you think ?
Finding work opportunities which enable me to grow in my personal interests (material science, physical chemistry, applied physics, additive manufacturing). I feel compelled to support scientific research, such as material informatics - or maybe automating labs with robotics.
It would be thrilling to manage infrastructure for scientific computing workloads (as an idea).
Honestly, the tech stack and role matter far less to me than what we're doing and who I'm doing it with. I'm motivated by curiosity and the desire to learn. I am tired of reading scientific literature solo with nowhere to apply the knowledge except personal engineering projects, alone in my workshop. I am not an academic, just a full-stack/polyglot software engineer. In the last 3 years my interest in this stuff just exploded.
Additive manufacturing is another field that I'm very interested in, and would love to work in.
10 days ago I ended a 2.5 year relationship which was not healthy for me to be in. I'm recovering physically from chronic stress. Everyday is getting better.
Planning to move to another country soon. Currently I'm in a non-EU country in Europe. I would be very interested to live in Germany, but am open to other possibilities - the work opportunity would be the driving factor. I'm a US citizen, 38 years old. There are (seemingly) no communities and resources for me to grow my personal interests here. The country is falling apart due to an extremely corrupt government.
Technically?
Been using agentic coding tools the past 7 months.
Recently, I built a slicer for a clay paste extrusion 3d printer (which I built for fun). It aims to generate a continuous toolpath (including between layers) that does not self-intersect, from an STL model. Stress tested it with complex geometries.
The slicer involved a lot of computational geometry solutions. Despite my graphics programming experience, it would not have been possible for me to build this without the help of LLMs, and more crucially - access to publications from research groups dedicated to this space. Reading literature about toolpath planning for industrial robotics was thrilling.
I learned a lot about how to do R+D in a somewhat unfamiliar domain. Feels like I pushed the limits of what's possible with LLM coding on this project, after trying many workflows, models, and techniques. Will be posting some insights on my soon-to-be-released blog soon.
It was a whole lot of fun implementing a zillion approaches to solve this problem, and seeing the results quickly visualized with matplotlib. If anyone's interested, I can share some images here.
I'm also building a PKM (personal knowledge management) system based on a graph db that helps me keep up with all the research + projects + daily activities in my digital life. I've been an org-mode + emacs user for 8 years, and it's just become increasingly obvious to me that I need something more powerful. Trying to coerce a relational db to support multiple inheritance w/labeled nodes + edges, while getting normalized tables is not pretty, and all the join tables will cause performance hits. This project is the definition of scope creep, but it's a personal project, so I'm okay with it.
Documentation and logs of many past/current projects are going up on my soon-to-be released blog soon. I've written many draft posts, and it's already deployed. By next month I expect to be able to show everything that I've mentioned here. There are some repositories if anyone is interested in previews of anything mentioned.
Socially?
I'm trying to be more open and transparent online - to reach out and find people who I could talk with about shared interests, and potentially build something together.
Hacker News has been a consistent source of inspiration in my life since I discovered it in 2012, and I want to start contributing to the discussions and inspiring people in any way that I can. I've been a lurker for far too long.
Can send links to my github and unfinished blog (with drafts for some past projects and more about future projects + topics of interest), if anyone reaches out. Next month I intend to be posting many links, and to use my "real" username. Just don't want LLMs scraping the personal bits of this post.
Stacktape is a PaaS that deploys to the user's own AWS account.
v3 adds many new features, but namely the ability to generate IaC config directly from code, by analyzing the user's repository (both deterministically and using multiple AI techniques).
For example, if it assumes your application is a Web API that uses Postgres and Redis, it will create a Stacktape IaC config that deploys Fargate container, load balancer, Aurora Serverless v2 Postgres and Elasticache Redis (behind the scenes it will also configure things like networking, VPC, security groups, IAM, etc.)
Launching this weekend.
The big trick of the language is that it doesn't hide the pipelining you have to do to raise your Fmax; instead, you can manually add register stages in the places where they're important, and the compiler will synchronize the other paths.
A really neat trick with this pipelining system is that submodules can respond to the amount of pipelining around them (by inferring template parameters). This way the programmer really doesn't have to think about the pipelining they do add. Examples are a FIFO's almost_full threshold, inferring how much simultaneous state there needs to be for a pipelined loop, inferring the depth of BRAM shift registers, etc.
And I’m looking to productise a bookmarking app “Tsundoku” I built for myself and have been using for a year https://bsky.app/profile/gingerbeardman.com/post/3ls2ymul33s...
Recently reworked said deserialization to Go structs, allowing it to handle more data layouts while simplifying the syntax. And I'm having a great co-op with one of the two active users via GitHub issues.
More to come (functions, cross-reference of data blocks, for example).
[2] https://lucassifoni.info/blog/leveraging-hot-code-loading-fo...
Just got this POC up and running the other day. Realistic sample data for prototyping and testing is frequently a pain point. Even more so for anything having to do with email.
So I wanted something that would pretend to be someone and send and respond to fake emails. And it seems like local LLMs are more than capable of this nowadays. Uses Ollama. Vibe-coded with Claude. UX designer here so be gentle.
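For the curious, the generation loop is basically one call to a local model; a stripped-down sketch against Ollama's REST generate endpoint (the persona, prompt, and model name are placeholders, and the real tool is more elaborate):

```python
# Stripped-down sketch: ask a local Ollama model to write a fake email for a persona.
import requests

def fake_email(persona: str, subject: str, model: str = "llama3.1") -> str:
    prompt = (
        f"You are {persona}. Write a short, realistic email with the subject "
        f"'{subject}'. Plain text only, no explanations."
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(fake_email("a harried project manager", "Q3 status update slipping"))
```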
Just made public the first 10% of the functionality. Built with Observable Framework
Could use "tee" to limit the reading to just one instance but I would like to try Python.
Hoping to write the core of it as an open-source hobby project to learn Python multithreading and then extend it for the actual problem I need to solve at work through the use of config files.
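A hedged sketch of that tee-like pattern as I currently picture it (the input source and consumers are stand-ins; the real ones will come from the config files):

```python
# Sketch of the tee-like pattern: one reader thread, several consumer threads,
# each consumer getting its own copy of every line. Source/consumers are placeholders.
import queue
import threading

def reader(source, out_queues):
    for line in source:
        for q in out_queues:
            q.put(line)
    for q in out_queues:
        q.put(None)  # sentinel: no more data

def consumer(name, q):
    while (item := q.get()) is not None:
        print(f"[{name}] {item.strip()}")

queues = [queue.Queue(maxsize=100) for _ in range(2)]
source = iter(["line 1\n", "line 2\n", "line 3\n"])   # stand-in for the real input

threads = [threading.Thread(target=reader, args=(source, queues))]
threads += [threading.Thread(target=consumer, args=(f"worker-{i}", q))
            for i, q in enumerate(queues)]
for t in threads: t.start()
for t in threads: t.join()
```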
Wanted to try out vibe coding, to see how far it could take me... pretty far, it seems. Just a small web component to display charts; it supports line and bar charts for now.
My next item is to add AbuseIPDB IP addresses to my "Uninvited Activity"[1] IP address blocking system, implementing xRuffKez's script here: https://github.com/xRuffKez/AbuseIPDB-to-Blackhole
Unfortunately, but also understandably, AbuseIPDB limits their free-access (account required) API to 10,000 IP address records. So I might put the results into a database to hopefully aggregate multiple batches of the 10k results, if they're not always the same 10k (roughly the idea sketched below).
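Rough plan, sketched out; this assumes AbuseIPDB's v2 /blacklist endpoint and field names from memory, so parameters may need adjusting:

```python
# Rough sketch of the aggregation idea: pull the (capped) blacklist repeatedly over time
# and union the results in SQLite. Endpoint/fields assumed, not verified here.
import sqlite3
import requests

API_KEY = "..."  # free-tier key

def fetch_blacklist(confidence_minimum=90):
    resp = requests.get(
        "https://api.abuseipdb.com/api/v2/blacklist",
        headers={"Key": API_KEY, "Accept": "application/json"},
        params={"confidenceMinimum": confidence_minimum},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["data"]

conn = sqlite3.connect("abuseipdb.db")
conn.execute("""CREATE TABLE IF NOT EXISTS bad_ips (
    ip TEXT PRIMARY KEY, score INTEGER, last_seen TEXT)""")

for entry in fetch_blacklist():
    conn.execute(
        "INSERT OR REPLACE INTO bad_ips VALUES (?, ?, ?)",
        (entry["ipAddress"], entry["abuseConfidenceScore"], entry["lastReportedAt"]),
    )
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM bad_ips").fetchone()[0], "IPs stored")
```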
I got the demo video produced, and a blog set up and seeded. You can see some of the science behind learning multiple languages at https://phrasing.app/blog/multiple-languages or follow my progress using Phrasing to learn 18+ languages at https://phrasing.app/blog/language-log-000
Now I’m working on the onboarding process, which I’m very excited about on both a product and a technical level. On the product level, it dovetails nicely into most of the shortcomings of the app. One solution to a dozen problems.
On the technical level, I’m starting to migrate away from reagent (ClojureScript react wrapper). The first step was adapting preact/signals-react to support r/atom, r/cursor, and r/reaction. This has worked beautifully so far and the whole module, with helpers, is less than 100 LoC. I’m irrationally excited about it, and every time I use any method it brings me a stupid amount of joy… especially since it’s exactly the same API as reagent.
For those curious, the next steps in the migration will be: upgrading to React 19 support once reagent ships with it (in alpha currently), then replacing the leaf components with hsx and working my way up the tree. No real code changes, just a lot of testing needed. Maybe at the end of it all, I can switch the whole app over to preact — will be interesting to test the performance differences.
As far as ideas I’m thinking about, I’m currently planning the next task in my head. This will be an (internal) clojure library that will hopefully have ClojErl (erlang), ClojureScript (js), and jank (C) interfaces, which means I’ll be able to write clojure once, and run on the server, browser, and mobile — all in their native environment. Needless to say, being able to write isomorphic clojure without running JavaScript everywhere has me almost as excited as my signals wrapper :D
Thinking about building an arena like product discovery platform to help people finding the perfect app for them… like a bookmarking app…
Our goal is to make DevOps easier. We want to provide simple (yet scalable) solutions on AWS, Azure, GCP.
You pass your own credentials and we deploy the infra into your tenant.
Using Narwhals under the hood has been a blast and amazingly effective!
Shifted some stuff around recently, and trying to get a guaranteed stable api so that I can bump to v1.
[1] https://teem.sourceforge.net/ but these docs are super outdated
An event based investment tracking app that is designed to help you keep track of important events around your investments.
A unified API for online advertising. Think Plaid for ads platforms instead of banks.
We help e-commerce sellers understand what their customers really think by analyzing feedback from various sales channels—what they like, dislike, and why
These insights can be used to improve the product, optimize listings, and refine marketing strategies
My 5th gap year (unemployed)
- Supports markdown everywhere, even in your comments and replies.
- Get notified.
- Personalized feeds.
- Lightning fast & mobile first.
I used to have an integration with Spotify that automatically copied my "Discover Weekly" playlist into an archive. Over time, it grew to close to 10,000 songs. It also started to get polluted by ambient sound and kids' songs when my daughter was born.
I wanted to clean it up but as far as I could tell, the only way was to do it manually, song by song. I'd want to have something more powerful, that would easily let me rearrange/split/curate my playlists based on any arbitrary constraint.
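A hedged sketch of the kind of rule-based cleanup I have in mind, using spotipy (the playlist ID and the rule are placeholders, and this isn't necessarily how I'll build it):

```python
# Hedged sketch: walk a playlist with spotipy and remove tracks matching a rule.
# Playlist ID and the rule are placeholders; OAuth app setup is omitted for brevity.
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="playlist-modify-private"))
PLAYLIST_ID = "discover-weekly-archive-id"          # placeholder

def matches_rule(track) -> bool:
    # arbitrary constraint, e.g. purge the kids-music pollution
    name = track["name"].lower()
    return "lullaby" in name or "kids" in name

to_remove, offset = [], 0
while True:
    page = sp.playlist_items(PLAYLIST_ID, offset=offset, limit=100)
    for item in page["items"]:
        if item["track"] and matches_rule(item["track"]):
            to_remove.append(item["track"]["id"])
    if page["next"] is None:
        break
    offset += 100

for i in range(0, len(to_remove), 100):             # remove in batches of 100
    sp.playlist_remove_all_occurrences_of_items(PLAYLIST_ID, to_remove[i:i + 100])
```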
It's written in Elixir using Phoenix LiveView. There's almost no custom JavaScript beyond what the framework provides. The first load may take a while because it's on the cheapest fly.io tier and boot loads all known ingredients and products into memory.
Please check it out if it sounds at all interesting! Keen for feedback :) I've written some docs, including a "getting started" guide, linked in the GitHub page.
We've recently released a new archive format called ptar, it can be found on HN if interested :-)
What features are planned for the free version, and which ones will need to be paid for?
Long story short: we provide multi-source/multi-destination/multi-storage backups (i.e. back up S3 to disk, restore to SFTP), we have a nice UI, we reimplemented our own database over CAS allowing us to have a virtual filesystem plus a ton of nice features on top of the snapshots, plus an archive format of our own and other nice features.
All of this is in the free version. What's going to be paid is plugins to back up commercial services, enterprise features like multi-user support, ACLs, or compliance-related features (e.g. GDPR / sensitive-data detection, ...), backup orchestration over a pool of machines, and more.
I've gotten the process of fully cataloging all of the advertisements in a magazine (about 150 on average) down from over a week to a few hours. I should be able to get through the material within my lifetime now :)
I feel the same about a lot of graffiti; if it's recent, it's an eyesore, but old graffiti can be extremely interesting. I guess both domains expose some elements of the zeitgeist seldom explored in other media. ¯\_(ツ)_/¯
Nice site, by the way!
Yeah, there is a subtext to the advertising that changes over time and is very interesting. For example, early appliance ads are about saving household labor to spend time with the kids; later, appliances become more about status and the allure of technology.
Think about a newspaper / magazine: The ads didn't suddenly block the article, move the page around, or phone home to the advertiser. Likewise, the ads wouldn't slow the magazine down, flash, or make noise.
Ok, ok, I'm out.
> Semantic Kernel is a lightweight, open-source development kit that lets you easily build AI agents and integrate the latest AI models. It serves as an efficient middleware that enables rapid delivery of enterprise-grade solutions.
When I worked at larger orgs, reviewing applicants was a very busy task. I would usually get 100-300 applications for a role, and I never trusted the HR team to filter out candidates before interviews, so I would go through all the candidates manually. In the world of AI and automatic ATS systems, I have the same problem: I don't trust AI to filter and rank candidate resumes for me. I wanted something that enhances my process but does not replace it.
So I've started working on https://qrew.cc, where AI helps you but keeps you fully in control.
So I'm building it. Still early, and I have nothing to share yet, but I'm already pretty confident my Geoguessr friends will love it when it's finished.
Aside from that, I’ve also made some sillier little games/demos[3].
There’s a computer with a classic BASIC interpreter written in Lua after the first level.
Trying to document and map as much of the publicly accessible stained glass as possible. The goal being the next time you visit a new city or town, you'll know where all the beautiful stained glass is to go see. Just recently added support for countries outside of North America. No exciting tech (vanilla HTML/CSS/JS). But excited for folks to check it out!
Crystal is a re-imagining of what an IDE means when AI drives development. Traditional IDEs are designed for deep focus on one task at a time, but that falls apart when you have to wait 10-20 minutes for an agentic task to run. Crystal lets you manage multiple instances of Claude Code so you can inspect/test the results of one while waiting for the others to finish.
head -c 512 file.bin | file -
I'm building a web app for exploring my training history, so my trainer (who's virtual) can explore my data the same way I can.
Eventually, I'd like to start training an AI to build programming for me based on my history.
I'm working on an AI thumbnail maker. You just upload a picture and pick a design type, and it generates a thumbnail for you. It's still v1 and I would appreciate feedback.
If anyone's interested in that kind of thing:
https://massiveimpassivity.substack.com/p/softcore-how-nobod...
Generating an estimated $130 million per day (100 megatokens/second) worth of GPT-4 tokens at home will have to wait (plus I'd need to upgrade the power and AC in this room a bit to handle the estimated 750 kW it would take).
Repo: https://github.com/specfy/getstack
I'm building this website to track technology trends and usage across the most popular GitHub repositories. I parse 35K repos every week, detect the tech inside, aggregate it in ClickHouse, and show a summary on the website.
It was a good opportunity for me to finally learn more about ClickHouse. I'm also trying to fully self-host on a VPS, which has its own challenges, especially around hosting a frontend with SSR.
I figure the solution is to pay people for their location data, and be up front and transparent about collecting it.
https://turquoisehexagon.co.uk/remindersync/
The latest version supports dataview tasks format and multiple reminder lists with Routing Rules.
I think the product is pretty much feature complete now so I’ll probably start doing some marketing and move onto coding something new. Sales until now have all been organic.
I wanted a simpler alternative to the self-hosted SerpBear tool that I could use and share, so this is the result.
It uses SerpApi (where I work) as the data source for what actually executes the SERP scraping because it's much too complex to have purely client-side, but 100% of the rank tracking portion is client-side.
It's not fully complete and there's definitely rough edges with it, but because of the data source, it supports a large number of search engines right off the bat.
My 7 year-old has gotten into music and is trying to record his own ideas. We have found the existing tools to be either too simple (Yak Back) or way too complex (Tascam). I want to make him something that has a simple interface, few buttons, and simple recording/mixing. The idea is to avoid the software programs like Garage Band and Logic.
Spent 14 years slogging through a custom implementation with my previous company, and didn't want my pain and suffering to go to waste. Just spent a few hours yesterday to replace that app's integration with my new api and got a pretty good diff:
117 files changed, 258 insertions(+), 10032 deletions(-)
I recently launched a free newsletter where I'll be sharing one platform every day with pro tips based on my experience for the next 100 days.
Check it out here: https://topsaasdirectories.beehiiv.com/subscribe
Idea born out of my own frustration at finding typos at my prior company. I wanted a tool to crawl my website daily and uncover new errors. That’s how TripleChecker was born.
But I'm determined to see it through to completion even if there is just one user. I didn't take the WordPress fiasco, and how they handled it, lightly at all, and it only fueled my motivation even more. ETA is the end of this year, right on time for Christmas.
If you'd like to read more, here's an article about my CMS: https://medium.com/creativefoundry/what-i-learned-as-an-arti...
If you'd like to get Beta access, my email is listed in my profile.
You can join the beta https://testflight.apple.com/join/LEJk313o
When I can, I am also working on some features for https://midicircuit.com Beta here - https://testflight.apple.com/join/pNyAUEac
Plugin to convert Figma designs to React Native code fully client-side.
And a complementary service that syncs the code directly to your filesystem in real time, as well as an optional MCP server to adapt the generated code to your codebase so it fits your framework/libraries.
Source: https://github.com/kat-tax/figma-to-react-native
(includes cool tech like lightningcss-wasm for styles conversion and esbuild-wasm for client-side previews)
That's why I am building Overcentric - a simple and affordable toolkit that combines web & product analytics, session replays, error reporting, chat support and help center - all in one place.
Been building it and testing with several startups and improving based on their feedback. I am also using Overcentric for Overcentric itself, so I always get ideas for improvement.
What's next: more tools that are useful for startups are on the roadmap and I am exploring how LLMs can be further utilised (apart from support, session replay summaries, aiding in writing help center articles) and refining pricing.
Check it out at https://overcentric.com/
Would love to connect with other SaaS founders and have Overcentric help them grow their startups.
I need to put it up on the ol' blog-thing, but I've signed a contract with a small press for a debut novel, which is highly exciting. That one's urban fantasy from the point of view of the wizard's magic cloak. (You better believe it has opinions.)
Meanwhile, I've been working on a novel about a group of time travelers who accidentally get stuck in the Permian, well before the dinosaurs. Surprise! There are still big animals that can eat you, they're just more weird (and not as big). The research for that one has been wild.
The ol' blog thing, where I post story-related tidbits and such: https://rznicolet.com
Mostly to learn some Rust and because I thought most of the features of Splitwise worth paying for would be fun to build. Been loving working in Axum and getting to implement some fun database things
This week, we’re doing a 5-day launch week, where we’re shipping a new set of billing features every day. Github link: https://github.com/flexprice/flexprice
[0]: https://blog.alexbeals.com/posts/extracting-letterboxd-token...
[1]: https://blog.alexbeals.com/posts/reverse-engineering-ios-dee...
[2]: https://blog.alexbeals.com/posts/debugging-fitness-sf-qr
[3]: https://blog.alexbeals.com/posts/start-process-extensions
Most of the documentation I read seems to have been created by a sleep-deprived robot in a stand-up or by a caffeinated squirrel with memory issues. So, I am searching for a voice to bring something different to talking about broken pipelines, observability bills expanding faster than my waistline, and heroic config file linting for the impatient.
I aim to make writing (and reading) my documentation tolerable (and perhaps even FUN!). I hope to make the next person who has to read my written word laugh and absolutely confirm my clear lack of sanity.
It's built in Next.js and Django, with integrations for OpenAI, Perplexity and all Bedrock models. And MCP, of course.
Feedback and requests welcome, I’m terrible at marketing so we have very few users but we use the platform ourselves and we’re super happy with it.
Curious!
Last year PlasticList discovered that 86% of food products they tested contain plastic chemicals—including 100% of baby food tested. The EU just lowered their "safe" BPA limit by 20,000x. Meanwhile, the FDA allows levels 100x higher than what Europe considers safe.
This seemed like a solvable problem.
Laboratory.love lets you crowdfund independent testing of specific products you actually buy. Think Consumer Reports meets Kickstarter, but focused on detecting endocrine disruptors in your yogurt, your kid's snacks, whatever you're curious about.
Here's how it works: Find a product (or suggest one), contribute to its testing fund, get detailed lab results when testing completes. If a product doesn't reach its funding goal within 365 days, automatic refund. All results are published openly. Laboratory.love uses the same methodology as PlasticList.org, which found plastic chemicals in everything from prenatal vitamins to ice cream. But instead of researchers choosing what to test, you do.
The bigger picture: Companies respond to market pressure. Transparency creates that pressure. When consumers have data, supply chains get cleaner.
Technical details: Laboratory.love works with ISO 17025-accredited labs, tests three samples from different production lots, and detects chemicals down to parts per billion. The testing protocol is public.
You can browse products, add your own, or just follow specific items you're curious about: https://laboratory.love
Where can I find the link? Do I need to submit my email to see the "openly published results"?
Wow, thanks for the heads up, website. I'll throw out my stock of these right away.
Do you have an arbitrary date we should use to ignore items for testing?
Specifically, rice seems to contain a good deal of arsenic (https://www.consumerreports.org/cro/magazine/2015/01/how-muc...) and I've been interested for a while in trying to find some that has the least, as I eat a lot of rice.
BTW I love Consumer Reports.
IIRC, don't buy rice from land formerly used to grow cotton, because calcium arsenate was used there to kill the boll weevil.
https://en.wikipedia.org/wiki/Boll_Weevil_Eradication_Progra...
Vanilla (high): https://laboratory.love/plasticlist/59
Strawberry (medium): https://laboratory.love/plasticlist/60
0: https://i.imgur.com/L1LVar1.png
Edit: I guess that should impact the Substitutes category, though, and not the Phthalates category.
We originally started by supporting a low-code solution called Mendix. Now we support any type of web app that can be packaged as an OCI image.
You can read or try it at: https://low-ops.com
I'm currently adding support for letters in addition to Postcards
In the same way pilots get put in emergency situations in flight simulators, I'm building an "SRE incident simulator" , a generalization of SadServers.
Basic goals:
- Web based for zero update latency
- Have it work offline
- Automatically import transactions from my banks
- No running/hosting cost
- Secure
Tools used so far:
- InstantDB for the datastore, providing the offline capability too
- A gmail account that automatically gets forwarded bank alerts for purchases
- GitLab.com with scheduled pipelines for cron-based email syncing (see the sketch after this list)
- Netlify for the free hosting
- InstantDB magic codes / email links for securing the data
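As referenced above, here's a rough sketch of what the cron-based email sync could look like: pull the forwarded bank alerts over IMAP and turn them into rough transaction records. The alert subject format, the regex, and `store_transaction()` are placeholders, not the real pipeline or the InstantDB write:

```python
# Hedged sketch of the email-sync step. The alert format and the storage call
# are made up for illustration only.
import email
import imaplib
import re

ALERT_RE = re.compile(r"You made a \$(?P<amount>[\d.]+) purchase at (?P<merchant>.+)")

def store_transaction(amount: float, merchant: str):
    print(f"{merchant}: ${amount:.2f}")  # stand-in for the real datastore write

def sync_alerts(user: str, app_password: str):
    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(user, app_password)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        match = ALERT_RE.search(msg.get("Subject", ""))
        if match:
            store_transaction(float(match["amount"]), match["merchant"])
    imap.logout()
```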
I'm at the point where I can track and categorize purchases, including split transactions.
Next steps:
- Add in date ranges for reporting / data views; e.g. show expenses incurred in a one month period instead of for all time.
- Add in planned / projected transactions for monthly forecasting
- Statement import, plus import reconciliation and statement reconciliation
- Scrape company-specific digital receipt emails (like Amazon) to autopopulate more transaction data
And that'll be the end of the stuff I can do for free. I think I will add features that require money and/or dedicated hardware though:
- OCRing receipts -> autopopulated transaction data / description
- Using chatgpt to suggest categorizations
- Scrape extra data from my bank sites, like physical addresses of entities involved in charges.
The code is all open source on GitHub[1]. Really close to shipping now - hope to share launch details soon.
These monthly HN threads have been great motivation for me to keep building consistently. Thanks everyone!
By analyzing game statistics, we are giving players a new way to improve their game.
This is especially valuable in workflows where verification of LLM extracted information is critical (e.g. legal and finance). It can handle complex layouts like multiple columns, tables and also scanned documents.
Planning to offer this both as an API and a self-hosted option for organizations with strict data privacy requirements.
Screenshot: https://superdocs.io/highlight.png
What's different about it is that we've figured out character consistency with AI generated images, as well as text legibility. Most AI models don't do small text very well, and don't do consistent characters. We've tried to fix that.
There are lots of clinics around the world with X-ray machines but no easy way to share the images or radiologists to read them. I've gotten the price for reading an X-ray to under $1 and am piloting with hospitals in East Africa.
Most employee engagement software is just placation for HR. When the lowest-scoring question on feedback cycles is commonly "I believe that action will be taken based on the results of this feedback," there's something fundamentally broken with how companies handle feedback, and with how the tools they're given enable them to react to it.
Our end goal is to help leaders and managers identify problems with trust and communication within a team. The reality is, 90% of the time, the problem lies with the leadership itself. We're trying to provide both the tools to diagnose what the problems are, and frameworks for managers to fix them.
Handles real-time driver tracking, public order tracking links, finding suitable drivers for orders, batch push notifications for automatic order assignment, and more.
Backend: Feathers.JS, Postgres + TimescaleDB & PostGIS, BullMQ, Valhalla (for multi-stop route optimization although most of our deliveries are on-demand)
Frontend: SvelteKit
Mobile App (Android only for now): React Native/Expo, Zustand, Expo push notifications, and two custom native modules for secure token storage and efficient real-time GPS tracking. The tracking was probably the toughest to get right to find the best balance between battery/data efficiency and more frequent updates.
Been testing it for a couple weeks and as of last week, that company moved their operations over to it with 50+ drivers and thousands of orders processed through it so far (in a country with pretty unreliable connectivity/infrastructure).
I built it initially as a favor but open to other applications for it.
That's a hell of a favor. Is this something you built by yourself or were you part of a larger team?
I built it all myself (including the integration with our ordering platform). It was sort of my white whale project that I've always wanted to do but didn't have the chops/time.
The advancements in AI-assisted coding encouraged me to give it a shot though and the results turned out great. It was a heavily supervised vibe-coding project that turned into a production-ready system.
The balancing markets are used to keep the power grid in good shape, by smoothing out any last minute mismatches between energy production and consumption.
The project started out of frustration of not being able to get this information without friction.
- Scriptless AI web interface in TS
- Custom static site generator in TS
- Local app-less notification server for iOS
- Minimal websocket-based daily note taking app
The process itself is extremely time-consuming when done manually. My application speeds it up by a factor of 50 to 150, depending on how you measure it.
Finally, it allows "everybody" to find the 0DTE trades that are really profitable - something that currently only the "pros" can do.
I am experimenting with the current SOTA multimodal LLMs, but performance is still not there: they still hallucinate non-existent teeth. (As an aside, I have found a simple but very telling test: I have an image with only 4 teeth visible up and 10 down, and I prompt the model to count them. None have been able to, though Gemini 2.5 Pro is the closest of the lot; the description also gets worse when the counting test fails.)
I am going to try segmenting the image to see if I will have better results by prompting to describe segment by segment.
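Here's a minimal sketch of the segment-by-segment idea: tile the image, ask the model to count per tile, and sum. The tiling parameters are arbitrary and `describe_segment()` is just a placeholder for whatever multimodal call ends up being used:

```python
# Hedged sketch: crop the dental image into tiles and count per tile.
# describe_segment() is a stub, not a real model call.
from PIL import Image

def describe_segment(tile) -> int:
    # Stand-in for the multimodal prompt, e.g. "Count the visible teeth in this crop."
    return 0

def count_teeth_by_segment(path: str, cols: int = 3, rows: int = 2) -> int:
    img = Image.open(path)
    w, h = img.size
    total = 0
    for r in range(rows):
        for c in range(cols):
            box = (c * w // cols, r * h // rows, (c + 1) * w // cols, (r + 1) * h // rows)
            total += describe_segment(img.crop(box))
    return total
```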
- LegalJoe: AI-powered contract reviews for startups, at the "tech demo" phase right now: https://www.legaljoe.ai/
- ClipMommy: A macOS tool to help (professionals who record a lot of videos | influencers) organize their raw video clips. Simply drag a folder of "disorganized" videos onto ClipMommy, and ClipMommy organizes the videos into folders / subfolders, adding tags, based upon some special statements that you can make at either the start or the end of your video (think audio-based "clapboard"). I'm expecting to release this within a week or two on the Mac App Store (Apple allowing...).
As an aside, I've been very impressed with Claude Code; it's (for me at least!) leading the way for how the next generation of business software might leverage AI. I plan to iterate on LegalJoe to make it more "agentic" as a result of what I've seen is possible in Claude Code.
I would have liked to also provide a Google Doc plugin, but the Google Docs APIs [1] don't provide the required capabilities (specifically: a way to create tracked changes). Word's Add-In APIs [2] are also limited in some regards, but since they let you manipulate raw OOXML, you can work around those limitations for the most part.
[1] https://developers.google.com/workspace/docs/api/how-tos/ove...
[2] https://learn.microsoft.com/en-us/javascript/api/word?view=w...
https://github.com/rishighan/threetwo
Think of it as a Plex for the digital copies of comics. Point it to a folder full of comics, and it will infer metadata, and present your collection in a Plex-like manner.
ThreeTwo supports ComicInfo.xml and Metron's format. There is no universally agreed-upon metadata format for comic books; comic book archives are essentially .zip or .rar files full of images with fragmented naming conventions. ThreeTwo itself uses regexes to parse filenames and matches the results against ComicVine to extract metadata. This is currently the problem I am trying to attack.
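For a sense of what that filename-parsing step involves, here's a toy version (not ThreeTwo's actual regexes) that pulls series / issue / year out of a name before a ComicVine lookup:

```python
# Toy illustration of comic filename parsing; the pattern and example are mine,
# not the project's.
import re

FILENAME_RE = re.compile(r"^(?P<series>.+?)\s+#?(?P<issue>\d{1,4})\s*(?:\((?P<year>\d{4})\))?")

def parse_comic_filename(name: str) -> dict | None:
    stem = re.sub(r"\.(cbz|cbr|zip|rar)$", "", name, flags=re.IGNORECASE)
    m = FILENAME_RE.match(stem)
    if not m:
        return None
    return {
        "series": m["series"].strip(),
        "issue": int(m["issue"]),
        "year": int(m["year"]) if m["year"] else None,
    }

# parse_comic_filename("Saga 043 (2017).cbz")
# -> {"series": "Saga", "issue": 43, "year": 2017}
```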
Other than that, it integrates with DC++ via AirDC++, and also incorporates an OPDS server.
An AI native issue tracker without manual task management
Right now it's able to collect data from more than 30 sites, all with very funky HTML formats, with no custom code for each site.
When I began I had around 20% errors/hallucinations, right now it's way lower at around 3% errors in extraction. It's been fun and gave me a lot of experience building LLM powered data pipelines.
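One common trick for driving extraction errors down (not necessarily what this project does) is to demand structured output and validate it against a schema, routing anything that fails to a retry queue. A minimal sketch, where `call_llm()` and the `Listing` fields are placeholders:

```python
# Hedged sketch of schema-validated LLM extraction using pydantic v2.
from pydantic import BaseModel, ValidationError, field_validator

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for whatever model/client the pipeline uses")

class Listing(BaseModel):
    title: str
    price: float
    currency: str

    @field_validator("price")
    @classmethod
    def price_positive(cls, v: float) -> float:
        if v <= 0:
            raise ValueError("price must be positive")
        return v

def extract_listing(html: str) -> Listing | None:
    raw = call_llm(f"Extract title, price and currency as JSON from this page:\n{html[:4000]}")
    try:
        return Listing.model_validate_json(raw)
    except ValidationError:
        return None  # send to a retry queue or manual review
```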
Now I am getting a lot more time to focus on creating content about my journey and sharing it. Test coverage is still pretty bad but I do not feel the generated product is worse than it would have been if I coded it with or without LLM assistance. Right now, I barely see generated code.
Elevator pitch is: A simple searchable directory of various procedurally generated toys. Think Boids, Game of life, Maze generation, terrain generation, etc. written in Ts/Js. Anyone can contribute and will get their page for their implementation of a given ProcGen.
This is optimized for
1. Hobbyists wanting to make a ProcGen and have it be publicly available
2. Game Dev's & Academics looking for inspiration
3. Students / Amateurs looking for a project to add to their portfolio. It's specifically aimed at making the barrier of entry for your first "out-in-the-world project" as low as possible.
Long term vision will include bounties i.e. "I'm looking for a terrain generation algo that makes one main island surrounded by 6-12 smaller islands, some connected by a bridge, and every island should have an organic coast line with coves & bays and stuff".
There will be a voting system so clean, polished, well documented implementations of a given algorithm float to the top, (i.e. Game of Life) might get procgenzoo.com/CellularAutomata/GameOfLife.
The plan is to keep this free forever, and hoping donations cover hosting fees.
---
I'm also working on BackPackReact, an inventory management game where the placement of various components inside your backpack will create & consume resources to power your journey to the next trading post. E.g. Fire/{HeatSource} and Ice/{ColdSource} on either side of a Thermoelectric Generator will generate Power, which enables your vehicle to keep moving.
But it's a balancing game, the more space you use for your machinery, the less space you have for inventory for your cargo. You want the most efficient "engine" but also enough supplies to handle any unexpected events.
Is it better to build a nuclear reactor? Or just fill up on wheat and rent & feed a horse to pull you to the town, so you can sell all the excess wheat you didn't use? Should you spend money to gather intel on what the trading price of Iron is at your destination city? Or pick up a contract to build an electric grid at the new settlement, which will require many trips but yield one large payout?
---
Would love feedback on either of these ideas :) & if you would contribute to or play either.
Find every competitor to your saas/product/service/business in minutes! Beats the pants off Gemini/Claude/OpenAI deep research for this very particular use-case.
Specialized deep research agent for discovering competitors and understanding your market.
It supports multiple languages, currencies, European VAT deductions, and more.
I built this tool for myself so it’s kinda like a personal software. Hopefully, others will find it useful too :)
Check it out: https://easyinvoicepdf.com/en/app
Entering year three of a complete rewrite. It's kind of ridiculous, but as I'm still enjoying the process of trying to build/craft a performant and flexible file upload web component, I just keep going.
V4 is live on https://filepond.com, plan to release v5 before the end of summer.
Bug: My word happened to be "will". When I typed "w", "i", "l", "l", the input area showed "wi", since the second "l" was interpreted as a backspace operation.
General feedback, I would find a way to squeeze in variable hints. Maybe part of the definition, homophones, antonyms, 'rhymes with', etc.
And definitely find a way to get it on Northernlion's playthrough. He does a dozen of these like every day.
I started by trying to reimplement the METAFONT language, adding support for real-time rendering with OpenGL. Eventually, I decided to introduce some incompatible changes, creating a new language. But it still retains a syntax and internal logic very similar to METAFONT.
This new language also supports animation, and since it is part of a larger project (a game engine), it can be used not only for font rendering but also to generate textures and sprites for games.
The language successfully compiles to WebAssembly, and I'm currently working on a web page with tutorials, documentation and examples where you can modify the code and instantly see the results. Since this is a literate programming project, there is also an English and Portuguese version of the code, but the English version still needs considerable polishing.
Creating and maintaining an up-to-date help center is a huge hassle. In many companies there is no one that really feels obligated to take care of it.
We want to optimize this process:
- Creation: Just click through your process. We take a screenshot on every click and generate a full written article with screenshots and a GIF. You can also talk while recording to add additional context.
- Maintenance: Connect to your tools (GitHub, Asana, Slack, …) and we automatically suggest changes to your docs if your product changes.
- Consumption: Users can consume the content as they like: Read the docs themselves or ask a Q&A bot.
At the moment the creation and consumption parts are already working well. Now I’m working on the maintenance part.
1. After hearing Cell by Pannotia, I became obsessed with trying my hand at making a bit of electronic music. I have an Arturia Keystep 32, a Korg NTS-1 and a Korg SQD-1 to mess around with, but I'd really love to learn how to capture the sound on the Pannotia's album since it speaks to me on a visceral level (album link for the curious: https://pannotia.bandcamp.com/album/cell)
2. Turning some old telephones into fun "audio guestbooks", have some additional features lined up that I am going to add (just waiting on parts to arrive), trying to improve a bit on the ones shown in this excellent video: https://www.youtube.com/watch?v=dI6ielrP1SE
3. Managed to get a blog post up recently. My work is not exactly what I would call "HN worthy" but if you need a laugh or some decent toilet reading, it probably qualifies (my blog: https://futz.tech/)
I love these threads. So many people working on so many different and interesting things. Renews my hope for the future, a bit.
As there is no open source version of Excel except LibreOffice, I'm working to build the core Excel functionality with other open source packages, then bringing in agentic editing functionality for real-world data.
What has also been interesting is introducing banker/consultant formatting guidelines to the agent and making it beautify its work, whether in tables or models.
Why? I don’t like Discourse and Flarum that much. I want an even simpler solution with fewer bells and whistles.
But I guess the market is dead anyways for forums. I might replace my phpBB instance that has been running for 15 years.
I can't remember a time where it's felt more fun to decide "I'm just going to make this web thing the way we used to make web things."
My wife (who is a psychotherapist) started this and I am helping her with it. We are using specially curated children's books as a medium to talk about social, emotional and psychological aspects of mental health among adults, adolescents and teachers. We are also building communities and support groups around children's books.
This is in India where talking about mental health can often be a taboo subject. People who need/want to talk about this also find it hard to express and there are limited spaces which give you opportunities to do so. We found the abstractness of children's books as a great evocative medium. They also promote play, wonder and joy - aspects which positively impact mental health of individuals.
The project started with a personal journey of grief my wife experienced (death of a parent, diagnosis of other parent with Stage 4 cancer).
To that end, I've most recently been hacking on Robert Virding's Luerl (https://github.com/rvirding/luerl), working to adapt the Lua test suite to chase down some small compatibility issues between PUC Lua and Luerl. While Lua is a lovely language, it would also be swell to get Fennel working under Luerl. I wrote a game for the LÖVE jam a few months ago in Fennel and it was a pleasant way to dip my toes into lisp-likes.
I've also been adding things to control plane software, Overworld, here and there: https://github.com/saltysystems/overworld Happily all of the Protobuf and ENet stuff that I've already built nicely carries over into the LÖVE world.
Now I have wonderful crashes and hihats cutting through the guitars and bass, without the snare and kick overpowering everything. This also taught me some insights about balancing relative volume levels and/or lowering dominating buses against each other and compensating upwards on some upstream bus as necessary, which I think also improved the balance of the entire rhythm section.
Except, now I'm kind of unhappy with my kick drum sound. Some of the bands I listen to, and saw at the festival I was just at, have amazing, epic kick drum sounds. It's like a giant mountain troll hammering at the gates of a castle and, on the right PA, it kicks you right in the gut, literally. We had a good laugh a few days ago when some dude's jacket moved with the kick drum. My kick drum currently sounds more like wet cardboard flopping against a wall.
Besides that, I'm however looking at moving some of my notes on audio engineering on linux onto a blog to end up with something like Protondb at a smaller scale, as well as some of the steps and things to do to get audio plugins working on linux, what audio plugins work well, which I could not get to work. I am just realizing, I need to learn quite a bit more about wine, architectures and such to write good articles and ideas about this. But maybe that's my perfectionism speaking.
If you're reading this and are yelling "But what is the magic?", the magic is largely called yabridge. Many simpler plugins just work with that. It may not be up for professional audio engineering, but it certainly is up for home recording and dabbling around.
It's B2B only: you can't register with a free email provider; you have to own a real domain.
- Therefore identities are collective - companies, not company employees.
- Therefore all interactions are persisted at the org level rather than assigned to individual inboxes.
- It allows you not just to talk but also to work together on contracts. We built a contract parser that turns contract clauses into smart, plain-language objects.
We're calling it Geneva and doing a friends/family/acquaintances exploratory release as I type this.
http://genevabm.com http://x.com/genevab2b https://www.linkedin.com/company/genevabm/
And at the same time working on getting the first play testing version ready for our new geo-location based game also about birdwatching.
I built an iOS app (https://quenchai.app) that uses a carefully constructed multimodal LLM workflow to convert photos to standard drinks and track consumption over time.
Did you know a standard margarita is ~2.5 standard drinks and a light beer is ~0.8?
I don't feel like this needed an AI solution.
It's wild to me how you jumped down my throat about not looking into his product enough when you didn't even read my entire comment.
I scraped HN's 1000 most mentioned books and visualised them. This month I used a new embedding model (Nomic), switched out UMAP for PaCMAP, and added automatic cluster labelling.
The clustering and dimensionality reduction aren't quite as stable as I'd like, but most seeds give decent results now.
One interesting lesson is to see the effort involved in acquiring new customers and setting up funnels, especially when bootstrapping with a small budget. Sometimes, as developers, we are in our bubbles and don't realize how much skill one needs to figure out the customer acquisition domain.
A platform to host virtual races for fundraising events. Think Race Nights from the 90s/00s. Currently working on this with my brother, as we've both been out at risk of redundancy.
Had our first event over the weekend. Now we're focusing on marketing to local social clubs, charities and school groups.
Got a long list of future features and improvements, but this is a solid MVP. Made in React with Redux, Pixi.js, and Prisma with SQLite for a DB.
I've been basing one of the biggest financial decisions in my life - whether to buy a house - in large part on NYT/NerdWallet Rent-Buy calculators. But when I dig in, it seems that the model is both extremely sensitive to home/S&P500 growth assumptions, and that their defaults aren't well thought through.
This site is my attempt to organize my thoughts on what reasonable defaults should be, and provides an interactive tool to explore housing and S&P500 growth historical growth rates.
I'd appreciate feedback!
I think what you're really comparing is if the stock market or the housing market is a better investment, but you're not taking into account that the use of the property has value.
Think of it like a landlord, you're not just investing in a house to let it sit empty and then sell it later, you're buying it with the intention of collecting rent every month. Or to put it another way, it's like comparing the prices of dividend and non-dividend stocks without accounting for the dividend.
For a personal home, you need to account for the fact that owning it means you live there rent-free. There's a monthly cost for the mortgage, but that cost doesn't increase with inflation the way rent does. Owning a home comes with expenses for upkeep and taxes, but once the mortgage is paid off those are the only things you have to account for.
That's what the rent/buy calculators are doing! It's summing up all the cash flows for owning a property (down payment, mortgage, taxes, maintenance, etc, and then crucially selling it after 30 or so years) and for renting a property (rent, and investment income from money that would have otherwise went to down payment/mortgage), and telling you how the results differ.
All I'm doing is tweaking 2 of the parameters of these calculators: The rate the home appreciates in value, and the rate cash investments appreciate in value. Everything else stays the same.
On my own finances, plugging my "preferred" numbers into the NYTimes calculator along with a plausible house price and financing that I would buy, changes the rent/buy difference by more than $1.5M over 25 years (!!!!).
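To make the sensitivity concrete, here's a deliberately oversimplified sketch (not the NYT model; it ignores rent, mortgage payments, taxes, and maintenance) comparing just the two compounding legs. The prices and rates are made-up placeholders; the point is only how much the answer swings with the two growth assumptions:

```python
# Toy sensitivity check: home appreciation vs. investing the down payment.
def buy_vs_rent_gap(price=800_000, down=160_000, years=25,
                    home_growth=0.03, market_growth=0.07):
    home_gain = price * (1 + home_growth) ** years - price
    market_gain = down * (1 + market_growth) ** years - down
    return home_gain, market_gain

for hg, mg in [(0.02, 0.08), (0.04, 0.06), (0.05, 0.05)]:
    home_gain, market_gain = buy_vs_rent_gap(home_growth=hg, market_growth=mg)
    print(f"home {hg:.0%} / market {mg:.0%}: "
          f"home gain ${home_gain:,.0f} vs market gain ${market_gain:,.0f}")
```

Even a one- or two-point shift in either rate moves the 25-year outcome by hundreds of thousands of dollars, which is exactly the swing described above.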
You're not doing anything wrong, you're just plugging in some more accurate knowledge about the local housing market that you have. The calculator had to use some sort of assumptions, so they seem to have gone with medians or averages that made sense at the time. I tried playing this game too when I bought my house. I ran Monte Carlo simulations which concluded that buying a house was a bad idea, based on historical data. Plus, this whole new "Covid" thing was surely going to crash the housing market, right? I ended up buying a house anyway, and found out a little later that my projections were completely wrong. You can't predict the future, after all...
Instead of wading through endless lodge options, park fees, and confusing seasonal details, you just share your interests (big cats, birding, photography, budget, luxury, etc.), and our AI planner (plus input from local experts) builds a detailed, day-by-day itinerary for you.
We also show realistic price estimates and handle all the local logistics, so you can focus on the adventure, not the spreadsheets.
If anyone here has struggled to plan a safari (or has feedback on what would make it easier), I'd love to hear your thoughts!
Website: https://www.greatriftsafari.com
1. When I select the start date, maybe autofill the end date with 2 weeks or so.
2. I dropped my email, but that is not something I enjoyed doing.
3. I think there should be a clear indication of what is expensive and what is not. My 2 week itinerary was 25k. I have no idea if this is expensive (probably not), but to me this feels insane.
Vibe Interview simulates real job interviews using AI. Master every interview stage, from recruiter to technical rounds. Reply and I'll give you free minutes for call simulations.
The goal is quite simple: allow developers to host their applications with easy, straightforward pricing. We are about to launch very soon. Everything is built on Laravel/PHP.
We are open to beta testers, so if you feel you want to test this, please drop me an email (address in my profile).
Currently available: 274 million domain names across 1570 domain zones.
Domain lists are updated daily.
Download via website or via HTTP REST API.
Can be used for parsing, marketing, automation, research and whatever else.
What are your product ideas for the future?
Have you already thought about a business model?
Regarding the business model, I'm already offering a subscription for full access, but I'm also thinking about adding a discounted annual subscription.
Regarding product ideas, I'm going to add a list of compromised IPs/domains to this project very soon. I am also working on a much more complicated product that is in closed beta now (please let me know if you want to take a look at it, I'll drop a link here), and I am always open to feedback or ideas!
Thanks!
Still thinking of how, what and when to open source.
There's a little inspiration from the MLIR ecosystem here, which makes heavy use of `.td` files for code generation. I want to write a schema file, defining some tables and indexes along with the queries and procedures which will operate on them, then have this compiled to a C++ header file I can include, where the schema is a class and the queries/procs are methods.
I have no idea how far I'll get with this, but it's always fun messing about with weird little languages, and I'd like to see what programming in this style would feel like.
This means that it can cross-compile C and C++ programs that use the libc (glibc or musl) as well as the C++ stdlib (libstdc++ or llvm-libc++) out of the box without any kind of sysroot.
Supporting grid, multiplayer, predictive moves, item locking and more.
https://github.com/brokenrockstudios/RockInventory
It's been interesting and challenging. Probably the most important part is I've been learning a lot.
It was a lot of fun earlier on but it's becoming less fun the more I work on it.
1. Upload photos of your family members (or describe them if you don't want to upload)
2. Select a topic
3. See draft book
4. Make edits if you want
5. Order book
6. Read book to your kids
7. Read book to your kids
8. Continue on loop
You can drag and drop links from YouTube, Twitch, TikTok, or Kick.
You can watch multiple streams at once in a grid and/or navigate quickly and smoothly from one single stream to another.
You can add or remove streams, save mixes for later, and share mixes via URL.
It works best on a really big screen and it's decent on a laptop. Phones aren't really supported at this point. If you have a large, secondary monitor off to the side, that's ideal for passively viewing a lot of streams. Happy to answer any questions.
Tech stack:
- Python + opensubtitles.org for the data pipeline
- Whisper for speech recognition (quick sketch below)
- React Native for the mobile client
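For the speech-recognition step, a minimal sketch using the open-source whisper package (the model size, filename, and language choice here are arbitrary, not necessarily what the app uses):

```python
# Hedged sketch: transcribe the audio and keep timestamped segments that can
# later be aligned against a subtitle file.
import whisper

model = whisper.load_model("small")
result = model.transcribe("movie_audio.wav", language="nl")

for seg in result["segments"]:
    print(f'{seg["start"]:7.2f} -> {seg["end"]:7.2f}  {seg["text"].strip()}')
```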
Current state: tech demo. The app works fine and already helps a lot — for me and my wife (both non-native English speakers) it makes watching movies in Dutch cinemas much easier, by showing English subtitles on our phones instead of the Dutch ones provided by the theater.
The biggest issue now is subtitle quality and legal status. Opensubtitles provides a lot of data, but the quality is often questionable, and the legal status is rather gray/black.
Any legal or data-related advice would be appreciated!
Lately, I've noticed that my (beefy) server is always clogged with background jobs that tend to run longer than they used to. It’s started impacting operations, as customers have been complaining about their backups running a bit late.
We're network bound, so I can't just add more compute power (Notion's API has a rate limit of 2700 req/15 mins). I suspect we're getting rate limited left and right, which is causing these delays.
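One way to stay under that limit is a client-side throttle sized to it (2,700 requests per 15 minutes is about 3 req/s). A toy token-bucket sketch, not the production job scheduler:

```python
# Toy token-bucket throttle; call acquire() before each Notion API request.
# Rate and burst values here are illustrative only.
import time

class Throttle:
    def __init__(self, rate_per_sec: float = 2700 / (15 * 60), burst: int = 10):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def acquire(self):
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

throttle = Throttle()
# throttle.acquire()  # then make the API call
```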
Features:
- Local. No internet connection needed.
- Manual. Every transaction is added by the user.
- One-off or arbitrarily recurring transactions.
- No lock-in. Check out your data any time.
- Arbitrary metrics to track performance.
- Hosting on the cloud for mobile access.
Why? I've been using Google Sheets + Forms for the last 8 years to track my finances. It's worked well, except for minor inconveniences. This app is my answer to my own problems.
Most recently have been focused on better geographic visualizations in the public studio for people to experiment - getting decent automatic lat/long, want to have easy path visualizations (start/end, etc). More AI-accelerated options as well, especially around model authoring.
Repo: https://github.com/trilogy-data/pytrilogy Studio: https://trilogydata.dev/studio-core/
I've been working on making it easy to drop in socket-based multiplayer with "channels". Players can join channels and they can share messages, state updates or notifications over a socket connection. You can use it for chat rooms, lobbies/matchmaking or async multiplayer.
One recent addition is "channel storage": a shared space for players to read/write/update/delete data. This opens up saving and loading shared worlds between players in just a few lines of code.
Everything is open source, including the frontend dashboard, backend, Godot plugin and Unity package. GitHub here: https://github.com/TaloDev.
Earlier I had some success with a couple of strats, but they aren't working any more. The idea is to have an arsenal of strategies and use whichever is performing better based on recent backtests.
More than the trading part, it's the fact that I can leverage some ML in this that interests me, plus it's quite fascinating how helpful LLMs have become, especially for Python programming.
Conclusion: The EMH in its weak form is correct.
Buy, and hold. Work for your money. Sleep well.
1) Highly Sybil resistant. Neither the keypair owner nor anyone else can re-use the same underlying ID to link to another keypair.
2) Very high anonymity. While the Sybil resistance requires a nullifier representing the underlying ID to be present in a database (or stored in a public, decentralized form for blockchain use), there is no way to connect that nullifier with the keypair. Even if someone were to use brute force to successfully connect the nullifier with a specific underlying ID, such as a passport, there is no way to connect that ID with the keypair. (In the passport case, even merely brute-forcing the nullifier could only be done by the issuing government, someone who has hacked the government database, or someone with physical access to the passport. This is due to the fact that other passport information than the passport number is included in generating the underlying zero-knowledge proof.)
I understand that other technologies may have similar end-functionality, but this has the advantage that most of the functionality is encapsulated in a single Rust executable that could be easily used in any context, whether distributed or decentralized. (If anyone would like to know more, my contact info is at garyrobinson.net.)
In fact, now that I think about it, zk-proof identity will be required in the near future since so many poorly run organizations are leaking ID documents.
We aggregated half a dozen plus disparate data sources to create a comprehensive infrastructure map of the PNW power grid. Our goal is to be able to query for and provide informed answers for grid operators, investors, and other energy adjacent businesses in the space.
(For reference): The PNW has the most abundant clean power in the US and is one of the markets with most opportunity as our consumption increases with AI.
When you post a listing (e.g., on Facebook, Kijiji), you get tons of “Is this still available?” messages — but no useful info. TenantFit lets landlords collect basic answers (income range, pets, lifestyle) via a public link, then ranks responses to highlight promising leads.
No accounts or sensitive info collected from tenants (landlord does not even see candidate email until they reply), just a quick pre-screen before deeper screening to save time.
https://apps.apple.com/us/app/pill-buddy-meds-tracker/id6742...
It's tested only on Android 10 and Windows 11. Being built with Flutter, it should work on iPhone, Mac and Linux too, but that would need building, testing and fixing whatever issues are found.
Had I known this would take me 3.5 weeks (dedicated time) and 6100 lines of code (including comments), I would not have done it. Ideal would have been just a week.
Currently closed source.
I think there is a gap between exploratory testing and more structured forms of testing. So I am trying to make a tool for that for myself. If I like the outcome I'll open-source it.
An ISC-licensed implementation of several Content-Defined Chunking algorithms in Golang at https://github.com/PlakarKorp/go-cdc-chunkers
Whenever you have redundant data you want to store / transfer, this library lets you perform fast content defined chunking
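For anyone unfamiliar with the idea, here's a toy illustration of content-defined chunking (this is not the go-cdc-chunkers API, and real implementations use proper rolling hashes like Gear or Rabin; parameters here are arbitrary): roll a cheap hash over a sliding window and cut a chunk whenever its low bits hit a target, so boundaries follow content rather than fixed offsets and survive insertions that shift data around.

```python
# Toy CDC sketch in Python, for illustration only.
def cdc_chunks(data: bytes, window: int = 64, mask: int = 0x3FF,
               min_size: int = 1024, max_size: int = 16384):
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h += b
        if i >= window:
            h -= data[i - window]  # keep the hash over the last `window` bytes only
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```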
The levels are procedurally generated with heavy curating and additional manual tweaks. I'm also adding a narrative layer to each puzzle myself. It's a rare type of puzzle, since few puzzles have the means to convey any kind of narrative.
My next big additions will likely be a tutorial, and profile page where you see your results and how they compare to other players. But this being just a side thing, it's progressing really slow...
Recently relaunched Clares.ca, a free website for Canadian amateur radio training.
The new site has the modern basics: it's fast and mobile-friendly, and will soon incorporate the latest updates to the Canadian test bank.
Additionally, I’m adding progress tracking, logins and notifications to keep users engaged. The previous version of the site was just the course and nothing else. This one is more usable.
It gives me my once-every-five-years reminder of why I dislike .NET.
The tool will support four annotation modes: Box, Polygon, Mask, and Keypoints — each with its own dedicated panel. You can switch modes by clicking the color-coded buttons on the toolbar, complete with smooth transitions. Labeling is a tedious task, so a bit of satisfying UI action here and there can't hurt.
It will also export labels to all major formats — and can (re)generate any sidecar file structure when needed.
While apps like Parkopedia and SpotAngels tackle the same problem, their one-size-fits-all approach often results in incomplete, missing, or outdated data. My approach is different: go deep on one city at a time by combining multiple publicly available datasets. This doesn't scale horizontally since each city has different data sources and formats, but the goal is to become the definitive parking resource for one city, build automation to keep it current, then methodically expand city by city.
If you are based in Vancouver, do give it a go. Your feedback would be awesome!
We built it because managing cloud budgets often turns into a spreadsheet mess, or worse, a never-ending consulting engagement. OneBliq lets you:
* Split and allocate Azure costs by cost centers, teams, or projects
* Visualize current spend and attention areas at a glance
* Experiment with plans and projections without complex tooling
* Skip sales calls and long onboarding – just install and kick the tires
It's still early, but we're seeing traction with teams who want clarity without complexity. Happy to answer questions, share more, or get feedback.
Would love your thoughts – what would make a tool like this useful (or useless) for you?
I've figured out that I'm lacking in marketing / sales and in developing successful strategies to gain visibility. So I'm actually enjoying the summer rather than coding at night / on weekends, but I still have plenty of ideas for how to develop it further and assist analytical reading.
http://youtube.com/@dreamwieber
In parallel I'm working on a bunch of apps for Vision Pro -- my most well-known at the moment being Vibescape which was featured recently by Apple: https://youtu.be/QcTiDBtCafg
To round this out, my wife and I are converting a historic farm in the Pacific Northwest to regenerative agriculture practices. So far we've restored over 20 acres of native ecosystems.
If that's interesting to you there's a channel here:
To highlight the syntax in the browser I checked out the CodeMirror project that uses Lezer grammars. It is very flexible and allowed me to implement additional features like custom folding. [2]
I would also like to create a grammar for tree-sitter, finish the Java implementation and documentation of the ESON parser before I try to implement it in other languages.
[1] https://gitlab.com/marc-bruenisholz/eson-textmate-bundle [2] https://gitlab.com/marc-bruenisholz/eson-lezer-grammar
Strongly recommend the rust remover described by Backyard Ballistics[0] on his second channel[1]; 1 liter water, 100g citric acid, 40g washing soda, generous squirt of dish soap. He claims the acid and alkali cancel out so there's nothing to attack the normal metal surface, but they leave citrate ions which dissolve rust by chelation, which makes it better than just citric acid, vinegar, or soda alone, which all pit and dissolve the clean metal surfaces, and easier/better than wire wool scratching. He also claims it's as effective as EvapoRust but much cheaper and can do more rust dissolving per litre than EvapoRust.
[0] https://www.youtube.com/@Backyard.Ballistics - restoration of old and very rusty guns
[1] https://www.youtube.com/watch?v=fVYZmeReKKY - "The Ultimate HOMEMADE Rust Remover (Better than EvapoRust)", Beyond Ballistics channel
An AI-native DocuSign
I've been working on it for around a month now. Struggling with getting people to actually use it - this week I've set the ambitious goal of 10 new contracts sent *and completed* by people I don't know (last week's was 10... by people I do know).
It's hard because I feel I'm in a weird hole - in order to have a good product I need people to use it and give me feedback, but in order for people to use it and give me feedback I need a good product. It's like wth!
Another thing I'm struggling with - enjoying the process. I get daydreams like mad. I feel I'm always living in the future in some way, especially with this software, and it's taking away from being present in this work. Which sucks, because I want to be excited to *work* on this and NOT fake my own excitement towards this as a manifestation of my greed to get rich off it.
But MAN am I greedy. It's ugly sometimes, to myself.
But god how I love to work on software also. How I love making stupid bash commands on my terminal. How I love to feel like the old gods, who conquered the infant digital world.
It's a Google Meet attendance & chat tracker, and it's starting to pick up a bit. A few teachers & other people are using it and enjoying it which is really awesome!
With 20+ years of experience building enterprise software platforms, I have seen firsthand that trying to bolt AI onto legacy systems is an architectural dead end. It's like building a state-of-the-art 'smart penthouse' on top of a 100-year-old brick building. The foundation wasn't designed for the weight, the wiring can't handle the power demands, and you get a high-tech facade on a crumbling, inefficient core.
We decided to build the modern skyscraper from the ground up, designing the entire system around three core principles:
1. A Unified State Machine: We started with a single, transactional data model and a core set of APIs that can represent any business object or workflow. Everything from a customer record to an approval process is a primitive in this system.
2. Language as the Primary Interface: Natural language isn't just for Q&A; it's a first-class citizen for commands. A prompt like "Create an app to track sales leads with fields for status, deal size, and owner, then add a 3-step approval workflow for deals over $50k" directly executes against the core APIs, modifying the actual schema and logic in real-time. No consultants needed.
3. True Agentic Execution: Our AI agents are given credentials to this same core API layer. You can delegate multi-step, stateful tasks ("When a new lead is assigned, notify the rep on Slack, schedule a follow-up in my calendar for 3 days, and generate a draft outreach email using our template"). The agent executes this by making the same API calls a human developer would, but with the flexibility to handle variations.
For the nerds, here's the tech stack we're using to make this happen: The backend is built in Elixir; the BEAM VM's actor model and fault tolerance are perfect for managing thousands of concurrent agents and workflows. For performance-critical parts, we drop down to Rust via NIFs. Crucially, all custom logic — whether generated by an AI agent or a human — is compiled to WASM. This provides a secure, high-performance sandbox, giving us language flexibility and near-native speed for all automated tasks.
We're moving from a paradigm of "users hunting through menus" to "users delegating real work." It's an ambitious mission, and I'd love to hear what the HN community thinks of this philosophy and architectural approach.
Work in progress...
This is for people who feel powerless in light of all the recent political developments, and would like to do something positive to help.
My goal is to aggregate all the various ways you can actually do something to help, so you can find them without having to get on a million mailing lists.
Automating Clean-room plant propagation using robots
There are about 2-3+ Billion plants cloned in laboratory conditions per year which are all done by hand. I am in the process of trying to develop a MVP to automate this task while also getting customer conversations to get early adopters.
What I am struggling with is that I don't know if I should focus on developing the MVP which will cost 20k-40k & 4-6 months to develop or put in place a pilot program to get customers willing to buy the machine / pay up front before I start developing. Hardware startups are rough usually because their MVP takes so long to develop.
I am currently bootstrapping while pushing for more conversations, trying to do both at once. I could personally finance the venture, but it seems like a poor move to just take on all the risk personally. I am setting up conversations with a few VCs, but that is a month out.
I'm working on this full time at the moment. I have a couple people who I have talked to who could be co-founders but nothing has materialized yet. So I am just all over the place at this stage in the process.
I spoke to 4-5 potential customers, and 2-3 of them are 'interested' in what I have, but they only seem interested in the 'validation' stage, which comes up after the huge personal investment on my end.
Brief backstory: While visiting us overseas, my in-laws were in a very bad car accident. Everyone involved is alive and going to be okay. But what followed was a series of emotional, physical and logistical challenges that pushed my wife and her parents to their limits.
During this time I found myself (shamefully) hiding on my phone. I was obsessively refreshing for updates from insurance/hospital teams, sending empty messages, and mindlessly scrolling feeds. My screen time was averaging 12 hours a day. Time I could have spent being fully present with my wife and her parents.
I finally accepted I have a serious phone addiction. I tried Apple Screen Time and a few popular screen time management apps, but found the blocks were too easy to bypass, and some apps were as useful as they were distracting depending on the context (e.g. YouTube). I didn’t necessarily want to use my phone less: it’s an incredibly useful tool, and the distractions were sometimes helpful.
What I really needed was intentional stretches of time spent away from my phone. I built touchgrass.fm as a simple way to record and incentivize those stretches of time. It’s not quite finished, but it’s been helping me stay present for hospital visits, meals and important conversations.
Autonomous robotics for sustainable agriculture. Based in the south of the UK. Prototypes of an autonomous mechanical farm-scale weeding robot currently beginning real-world testing. Still a huge amount of work to do though.
Hardware and software developed pretty much from scratch, not using ROS (for not entirely crazy reasons...); everything written in Rust, which I find well suited to this application area.
The robot is built using off-the-shelf components and 3d-printed custom parts, so build cost is surprisingly low, and iterations are fast (well, for hardware dev).
On robot compute is a couple of Raspberry Pi 5s.
Currently using the RPi AI Kit for image recognition, ie Hailo 8[L] accelerators.
Not currently using any advanced robotics VLA-type AI models, but soon looking to experiment with some of it, initially in simulation.
Feel free to get in touch if you'd like to talk :) Contact details in my HN profile, and on our website.
I have seen a few of these, but only one (about a decade ago) that used legs not wheels
Wouldn't it be better if the robot walked rather than rolled?
You may be able to illuminate this for me...
- Migrating to Niri on my laptop and re-evaluating my literate config approach, switching from xkb configs to kanata and a few other QOL changes to make my tooling more composable and expressive
- Shoring up my blog / media sharing infrastructure (migrated to a landing page on an s3 bucket, with different prefixes for several different hugo deployments for different purposes, still need to get better about actually posting content)
- Preparing to migrate a bunch of my self-hosted services to a k8s cluster which can can be fully deployed locally for testing and defined in code. All this is managed through argo and testable with localstack and crossplane for some non-local resources
- Attempting (somewhat unsuccessfully) to set up a NixOS config for a bunch of services that just don't feel right to run in a containerized stack, which I want to live in EC2 and have as close to 100% uptime as possible (Uptime Kuma, soju/weechat relay/bitlbee, conduit, radicale, agate, whatever else I think of that is small and has a built-in NixOS service module; thinking about some kind of RSS-aggregating solution here as well)
- Experimenting with vibecoding by trying to get an LLM to do the legwork to build a TUI interface to ynab using rust (which I don't know how to write)
I'm hoping that by the end of this summer most of the tooling I use for most things will be way more concrete and seamless. I also want to get my workflows down and get on top of converting at least a few the ~100 draft blog posts I have laying around into something I can actually post. Ditto for my photography albums, which are not yet organized into coherent groupings or exported for web.
- Get it so that you can categorize transactions quickly in a keyboard-driven way
- Similarly have a quick, improved option for dealing with overspending / underfunding
- Add some additional reporting that I'd like to see (as well as the ability to drill down in a more fuzzy way than currently supported in ynab)
- Finally (and most importantly but also most ambitiously) develop a view with some simple tools that helps users figure out WTF is wrong when a reconciliation isn't working out. This is much harder than the other things I'm trying to do here
Luckily YNAB's API is very open and I think I can do all the things I'm looking to achieve here. If I'm successful, I plan to spin off a sister TUI project for making handling import edgecases easier in beancount, which I also use but for different reasons
Edit: but your idea of having CLI command options for printing reports on a regular basis / on opening the shell is also neat, I do plan to have some CLI options that don't require you to open the full TUI
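A rough sketch of the data the TUI would start from, pulling transactions from YNAB's public API (sketched in Python here even though the TUI itself is planned in Rust; the endpoint, the "type=uncategorized" filter, and the field names are from memory of the v1 docs and should be double-checked):

```python
# Hedged sketch: fetch uncategorized transactions from YNAB as the input to
# a keyboard-driven categorization flow. Amounts are stored as milliunits.
import requests

def uncategorized_transactions(token: str, budget_id: str = "last-used"):
    resp = requests.get(
        f"https://api.ynab.com/v1/budgets/{budget_id}/transactions",
        headers={"Authorization": f"Bearer {token}"},
        params={"type": "uncategorized"},
        timeout=30,
    )
    resp.raise_for_status()
    for txn in resp.json()["data"]["transactions"]:
        yield {
            "date": txn["date"],
            "payee": txn.get("payee_name"),
            "amount": txn["amount"] / 1000.0,  # milliunits -> currency units
        }
```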
I’m working on a name generation tool that uses 83 structured naming methods. Examples: React (Verb-based), Vue (Obsolete English), Facebook (Compound), Netflix (Portmanteau), Lyft (Creative Misspelling), Alexa (Personal First Name), etc.
I wasn’t happy with the slop generated by the overly general name generators or my own prompting/brainstorming. I went on a tangent and read the top (5) books on naming from Amazon. From there I was able to create very specific and detailed prompts which started producing consistently good names, the odd great one, and a small amount of crud.
Eventually this escalated from a large spreadsheet of detailed prompts to a side project.
Please give it a try, I’d be happy for any feedback on this early version. (I recommend the options tabs for some granular tweaking)
(The name was inspired by digital music samplers, where there is a lot of rapid experimentation and tweaking, similar to this app.)
We're building Redactsure.com
A novel technology whose goal is to separate data from interactions entirely. We are building a custom OS whose first goal is to detect, hide, and then use sensitive data throughout any web interface.
Building a new browser rendering layer with real time transformer inferencing is hard but it's been an amazing tech to work on. Long term we think this technology will change the way all remote work is done at a fundamental level.
https://github.com/saxenauts/persona
90 percent of AI companion use cases today can work well with just a vector DB to retrieve facts and chunks of memory, but a connected digital footprint needs a graph+vector hybrid.
Memory in the coming future won't just be about fact retrieval; it will need memetic backlinks, new streams of data, holistic analysis, an effectively schema-less key-value store, causal reasoning, and other things that define the "who and why" of a human and imitate neuroscience's current understanding of how our identities work. That then needs to be translated into language chunks for LLMs.
Benchmarking this against popular tools on LongMemEval; getting good results so far. I'd love to learn from you all: what's your take on identity and human representation for LLMs in the coming future?
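To make the graph+vector hybrid concrete, here's a toy sketch (illustrative only, not the persona implementation): retrieve candidates by embedding similarity, then pull in one-hop backlinked memories so the graph side adds context the vector score alone would miss.

    // Illustrative only: each memory node carries an embedding (vector side)
    // and backlinks to related nodes (graph side).
    interface MemoryNode {
      id: string;
      text: string;
      embedding: number[];
      backlinks: string[]; // ids of related memories
    }

    function cosine(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
    }

    // Hybrid retrieval: top-k by vector similarity, then one-hop graph expansion.
    function retrieve(store: Map<string, MemoryNode>, query: number[], k = 3): MemoryNode[] {
      const ranked = [...store.values()]
        .map((n) => ({ n, score: cosine(n.embedding, query) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, k)
        .map((r) => r.n);

      const seen = new Set(ranked.map((n) => n.id));
      for (const node of [...ranked]) {
        for (const id of node.backlinks) {
          const linked = store.get(id);
          if (linked && !seen.has(linked.id)) {
            seen.add(linked.id);
            ranked.push(linked); // pulled in by the graph, not by the vector score
          }
        }
      }
      return ranked;
    }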
To abstract over register-file differences between ISAs, I'm using SSA form with spilling to a separate "safe stack". This enforces code-pointer integrity for security's sake (not unlike WASM), but extended also to virtual method tables.
"Partial-ISA migration" allows a program to run on multiple cores with slightly different ISA extensions. "Build-migration" is migration to another build of the same program in the same address space: Instead of trying to debug an optimised program, you would migrate it to a "debug-build" to attach a debugger. Or you could run a profiling build, compile a new build using the result and then migrate the running program to the optimised build: something that previously only JIT-compilers have done AFAIK.
I'm out of the research stage and at the stage of writing the first iteration of the main passes of the compiler, but now and then I've had to back-track and reread a paper on a compiler algorithm or refine the spec. It has taken a few years, and I expect it to take a few years more.
The site itself isn't anything "special." I've had a personal website for about 25 years; in the past few years I finally moved from writing HTML by hand to using various CMSes. I tried a "no database" CMS that my hosting provider offered, then I wrote my own CMS, https://github.com/GWBasic/z3, to learn Node.js, but then had to go back because Heroku dropped the free tier.
Jekyll is interesting. As a Mac user, I'm surprised there isn't a push-button app, like MAMP, to just run it. Instead, I got exposed to some weirdness with Ruby versioning that, because I don't have any Ruby experience, was frustrating.
The default Jekyll template has warnings, but when I tried to fix them, I ended up jumping into a rabbit hole of sass versioning.
I also ended up jumping into a rabbit hole with setting up redirects from old urls on my blog to their new locations. I don't touch Apache / cpanel that often, so there was a bit of a learning curve for me.
One funny thing was that I set up two redirects, in cpanel, from the same url to two different urls. (It was a mistake!) I couldn't delete them, so I had to submit a service request with my host.
Two interesting things that I do not have time to do:
- Set up GitHub Actions to deploy to my original host (andrewrondeau.com)
- Set up redirects from blog.andrewrondeau.com -> andrewrondeau.com
A multi-server process supervisor. Existing init processes (systemd, runit, s6, etc) work great on a single server but when you need to manage/deploy many servers, the tooling gets really complicated (K8s). Phoenix extends the process supervision model from one server, to many. Run this thing once / keep one copy of this around / keep this running on all machines that match pattern X etc.
Turns out the (obvious in hindsight?) problem is automated but simple networking. Currently digging deep into WireGuard-based overlay networking before rolling out the next version of Phoenix.
Turn any command into an AI-discoverable MCP tool with a few lines of YAML:
    name: hello-world
    description: "Greets the world"
    command: "echo 'Hello, World'"
Any AI agent can search for "greeting" and use your tool.
I'm also building the first registry at https://enact.tools

The insight: Most security scanners find problems but don't fix them. Industry average time to fix critical vulnerabilities is 65+ days. We generate the actual fixes and create PRs automatically, including educational content on the nature of the vulnerability and the fix in the PR description.
Technical approach:
- AST-based pattern matching (moved from regex, dropped false positives from 40% to <5%; rough sketch below)
- Multi-model AI for fix generation (Claude, GPT-4, local models)
- ~170 patterns across 8 languages + framework-specific patterns; can grow this easily but need more customer validation first
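To illustrate the AST point with a generic sketch (TypeScript compiler API, not our engine): flag exec() calls whose command is built from an interpolated template literal, a pattern regex tends to either miss or over-match.

    import * as ts from "typescript";

    // Illustrative pattern: flag exec(...) calls whose first argument is a
    // template literal with ${...} substitutions (possible command injection).
    function findRiskyExecCalls(fileName: string, source: string): string[] {
      const sf = ts.createSourceFile(fileName, source, ts.ScriptTarget.Latest, true);
      const findings: string[] = [];

      function visit(node: ts.Node) {
        if (
          ts.isCallExpression(node) &&
          node.expression.getText(sf).endsWith("exec") &&
          node.arguments.length > 0 &&
          ts.isTemplateExpression(node.arguments[0])
        ) {
          const { line } = sf.getLineAndCharacterOfPosition(node.getStart(sf));
          findings.push(`${fileName}:${line + 1}: exec() with interpolated template`);
        }
        ts.forEachChild(node, visit);
      }

      visit(sf);
      return findings;
    }

    // Flags the first call, ignores the constant one.
    const sample = `
      const { exec } = require("child_process");
      exec(\`ls \${userInput}\`);
      exec("ls -la");
    `;
    console.log(findRiskyExecCalls("sample.ts", sample));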
Business model experiment: Success-based pricing - only charge when fixes get merged ($15/PR at the moment). No upfront costs. This forces us to generate production-quality fixes & hopefully reduces friction for onboarding.
Early observation: Slopsquatting (AI hallucinating package names that hackers pre-register) is becoming a real attack vector. It's pretty straightforward to nail and has a lot of telltales. Building detection & mitigation for that now.
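The core check is cheap. A minimal sketch assuming an npm-style ecosystem (the registry URL is real; the heuristic and example names are placeholders): take every dependency an AI-generated change introduces and flag any name the public registry doesn't know about.

    // For each package name an AI-generated change introduces, ask the npm
    // registry whether it exists. A 404 means the model invented the name,
    // which is exactly the kind of name a slopsquatter would race to register.
    async function packageExists(name: string): Promise<boolean> {
      const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
      return res.status === 200;
    }

    async function flagSuspectDependencies(names: string[]): Promise<string[]> {
      const suspects: string[] = [];
      for (const name of names) {
        if (!(await packageExists(name))) suspects.push(name);
      }
      return suspects;
    }

    // Hypothetical usage: names pulled from a generated package.json diff.
    flagSuspectDependencies(["express", "left-padd-utils"]).then((suspects) =>
      console.log("Possibly hallucinated packages:", suspects)
    );

Names that attackers have already registered need extra heuristics (download counts, publish dates), which is where the telltales come in.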
Stack: Elixir/Phoenix, TypeScript, AST parsers
English is weird.
Here’s the link to try it out: https://pbrgen.aixpoly.com/ (limited spots available)
Let us know what works, what doesn’t, or what you wish it did. All feedback is so helpful right now.
My credit rating has literally plummeted 200 points for reasons I don't even have the energy to investigate (I have a pretty good retirement fund and no debts/mortgage/car payments). Just no energy to do anything except hang around trying not to eat.
It currently supports subscribing, publishing, and unsubscribing in JMS, MQTT5, Redis, and Websockets, and send-only in SNS.
I'm not really sure who might use it, but it's been fun.
https://chromewebstore.google.com/detail/relevant/fdhnccpldk...
Two months ago I posted an update that I had begun work on my Chrome extension [1] for Relevant. Relevant is a crowdsourcing website where users can categorize the channels they watch into a defined hierarchy of categories ranging from broad topics like "Science" and "Gaming" to more specific ones like "Phone Reviews" or "Speedrunning".
Although I had a little bit of engagement on the website, I found myself looking for something that could bring the experience onto YouTube, so I began work on a Chrome extension. It turns out there's a lot more complexity in building a Chrome extension than I realised. It's basically like building a website for the popup window, a JavaScript server for the background service worker, and a message bus to connect the two.
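Concretely, the message-bus part looks roughly like this (a generic sketch of the standard chrome.runtime messaging API, not the extension's actual code):

    // popup.ts: find the active tab and ask the service worker for its categories.
    chrome.tabs.query({ active: true, currentWindow: true }, ([tab]) => {
      chrome.runtime.sendMessage(
        { type: "GET_CATEGORIES", url: tab?.url },
        (response: { categories: string[] }) => {
          console.log("Categories:", response.categories);
        }
      );
    });

    // background.ts (service worker): answer requests from the popup.
    chrome.runtime.onMessage.addListener((message, _sender, sendResponse) => {
      if (message.type === "GET_CATEGORIES") {
        // A real handler would look the URL up against the Relevant API; stubbed here.
        sendResponse({ categories: ["Science", "Speedrunning"] });
      }
      return true; // keep the channel open in case the response is async
    });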
After 2 months of working weekends, I finally released a version that lets users see the categories of the content on the page, discover more channels matching those categories, and contribute to the categorisation effort!
It's been a fun ride co-coding with Claude (Sonnet 3.5 > 3.7 > Code). It's already found a bunch of interesting bugs across a heap of my own sites, older employer sites, and friends' sites.
Started as a simple Django web app, extended to Celery+Redis, now also leveraging CF Workers and R2 storage.
It was born out of the observation that some sites I've worked on missed crucial things like domain expiration on asset domains, misconfigured CORS or SSL certificates, HTTP header and meta collisions, missing/wrong redirects for http/https/www/no-www, etc.
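As one small example of the redirect class of checks (a generic sketch, not the app's code, using Node 18+ fetch where redirect: "manual" exposes the 3xx response):

    // Verify that common URL variants redirect to the canonical https origin.
    // CANONICAL and the variants are placeholders for the site being checked.
    const CANONICAL = "https://example.com";
    const variants = [
      "http://example.com",
      "http://www.example.com",
      "https://www.example.com",
    ];

    async function checkRedirect(url: string): Promise<string> {
      // redirect "manual" returns the 3xx response instead of following it.
      const res = await fetch(url, { redirect: "manual" });
      const location = res.headers.get("location") ?? "";
      const ok = res.status >= 300 && res.status < 400 && location.startsWith(CANONICAL);
      return `${url} -> ${res.status} ${location || "(no Location header)"} ${ok ? "OK" : "CHECK"}`;
    }

    Promise.all(variants.map(checkRedirect)).then((lines) => lines.forEach((l) => console.log(l)));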
I recently shipped a first-draft UI demo that you can play around with for my self-hosted jobs tracker:
https://escape-rope.bhmt.dev