You select columns and then just drill down to create further joins. Change the SQL text and it updates the view.
I'm a CS undergrad and would love feedback.
One of the most interesting applications for LLMs is writing SQL based on a schema, and I wonder if your tool could incorporate a "show me the book titles from authors whose name starts with T" and write that out.
Good luck!
Yes, I agree. Just as we need to check what LLMs produce when writing code, I think this could be a way to check what they produce when trying to write SQL.
As simple and personal as can be. Straight to inbox.
Optimized for coding-agent DX through full customization and appending data via URL parameters.
https://formvoice.com Appreciate any feedback!
I've been working on an open source LLM proxy that handles the boring stuff. Small SDK, call OpenAI or Anthropic from your frontend, proxy manages secrets/auth/limits/logs.
As far as I know, this is the first way to add LLM features without any backend code at all. Like what Stripe does for payments, Auth0 for auth, Firebase for databases.
It's TypeScript/Node.js with JWT auth using short-lived tokens (the SDK auto-handles refresh) and rate limiting. Very limited features right now but we're actively adding more.
Currently adding bring-your-own-auth (Auth0, Clerk, Firebase, Supabase) to lock down the API even more.
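For anyone wondering what that pattern looks like in practice, here's a rough sketch of a proxy endpoint (shown in Python/Flask purely for illustration; the actual project is TypeScript/Node.js, and all names and limits here are hypothetical): the browser only ever holds a short-lived JWT, while the real API key stays server-side.

  # Hypothetical sketch, not the real implementation: verify a short-lived JWT
  # from the frontend, apply a naive per-user rate limit, then forward the
  # request to OpenAI with a server-side key the browser never sees.
  import os, time, jwt, requests
  from flask import Flask, request, jsonify, abort

  app = Flask(__name__)
  JWT_SECRET = os.environ["JWT_SECRET"]
  OPENAI_KEY = os.environ["OPENAI_API_KEY"]
  calls = {}  # user id -> recent request timestamps

  @app.post("/v1/chat")
  def proxy_chat():
      try:
          token = request.headers["Authorization"].removeprefix("Bearer ")
          claims = jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
      except Exception:
          abort(401)
      recent = [t for t in calls.get(claims["sub"], []) if t > time.time() - 60]
      if len(recent) >= 20:
          abort(429)  # rate limited
      calls[claims["sub"]] = recent + [time.time()]
      upstream = requests.post(
          "https://api.openai.com/v1/chat/completions",
          headers={"Authorization": f"Bearer {OPENAI_KEY}"},
          json=request.get_json(),
          timeout=60,
      )
      return jsonify(upstream.json()), upstream.status_code  # logging would go here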
I guess a bunch of yaml for each of the main PaaS services would be nearly that.
https://news.ycombinator.com/item?id=44627910
In lieu of chatbots as the primary means of working with AI.
This is an approach that is human centered and intended to accommodate a wide array of possible use cases where human interaction/engagement is essential for getting work done.
https://demo.snapreceipts.fyi/
Mainly used by my friends right after we have a group lunch or dinner. You just upload a pic of the receipt after a meal and it parses out the items. We assign who got what and it calculates who owes what.
Makes the receipt splitting part super easy.
Please DM me at my email: archit72<at>gmail if you run into rate limits or want more usage! I'd be happy to help you.
The goal is to have it behave like TypeScript for Go: any Go program would compile out of the box, but then you can use the new syntax.
Featuring: built-in Set/Enum/Tuple/lambda/"error propagation operators"
It also has a working LSP server and generates a sourcemap, so when you get a runtime stacktrace, it gives you the original line in your .agl file as well as the one in the generated .go file.
I recently finished porting all my "Advent of Code 2024" solutions to AGL -> https://github.com/alaingilbert/agl/tree/master/examples/adv...
When I wrote Go, I figured that I would eventually have to do something like that, to fix the glaring omissions in the language. And then I stopped writing Go, but glad to see that someone got around to it!
After that, I'm not sure. I have four big ideas:
1. (continuation) Another video, this one about my experiences writing a homebrew PSOne game
2. (useful) a command line tool (or native desktop app) that generates white noise
3. (fanciful) See if I can unpack FFVII's world map data into OBJ models and UV mapped textures. And then from there create a 3D world map in Threejs
4. (stretch) I would love an app where I could look out into the distance, and be informed what's on the horizon. Likewise ships in the sea / planes in the sky. I think it's doable with some OSM data, open APIs and a bit of high school math
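For idea 4, the high school math part is mostly horizon geometry; a rough sketch (spherical Earth assumed, atmospheric refraction ignored, names are just illustrative):

  # Distance to the horizon and whether a distant object should peek over it.
  import math

  EARTH_RADIUS_M = 6_371_000  # mean Earth radius

  def horizon_distance_m(eye_height_m: float) -> float:
      # Distance to the horizon for an observer at this eye height.
      return math.sqrt(2 * EARTH_RADIUS_M * eye_height_m)

  def is_visible(eye_height_m: float, target_height_m: float, distance_m: float) -> bool:
      # A target is visible if the two horizon distances together cover the
      # separation between observer and target.
      return horizon_distance_m(eye_height_m) + horizon_distance_m(target_height_m) >= distance_m

  # e.g. a 30 m tall ship, 20 km away, seen from 2 m above sea level
  print(is_visible(2, 30, 20_000))  # True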
Been fun to push Nanite and Lumen to the limit!
[0]: https://boris.kourtoukov.com/we-wade-awake-live-visual-perfo...
Earlier today I implemented "bbcodes" for bold, italic, underline, em (grey background color) and strikethrough. The way it works for bold is like this: b[text here]. If you want to apply multiple you can go bui[text here] for example, which would be bold, underline and italic text.
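Just to illustrate the idea (a toy sketch, not my actual implementation; the letter-to-tag mapping below is only an example), the stacked prefixes can be handled in a single regex pass:

  # Toy sketch of the b[...] / bui[...] idea: the letters before the bracket
  # are treated as a stack of styles applied to the bracketed text.
  import re

  TAGS = {"b": "strong", "i": "em", "u": "u", "e": "mark", "s": "s"}  # example mapping only

  def render(text: str) -> str:
      def repl(m: re.Match) -> str:
          styles, inner = m.group(1), m.group(2)
          for letter in reversed(styles):      # innermost tag first
              inner = f"<{TAGS[letter]}>{inner}</{TAGS[letter]}>"
          return inner
      return re.sub(r"([biues]+)\[([^\]]*)\]", repl, text)

  print(render("bui[hello]"))  # <strong><u><em>hello</em></u></strong>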
The most important features (for me) are: One Time Payment and 100% local and private. I don't send any data to any server. Just enough to verify license keys.
- It's a one-time payment; users can import any text, URL, or ebook and use the reader with read-along text highlighting, or export the audio as mp3 or m4a (an audiobook-specific format).
- Currently it only supports macOS with Apple Silicon. I was doing Windows too, but it was making development slow, so I'm pausing that for now.
- The most recent feature I added is Global Capture, where the user can set up hotkeys to import any text or URL. Parsing and extracting text is one of the hardest parts of this.
- Also just added a Reader view to the website. Its goal is to mimic the app's features as much as browser limitations allow. I don't have a free tier, but there's a 7-day money-back guarantee.
I mostly have a dev and engineering background, but the most exciting aspect of this is the marketing and related stuff. Still trying to figure that out.
I'd be happy to hear any feedback and ideas.
Edit: Only English at the moment. Adding more languages is in my plan, but it's very difficult for me since I don't know any other languages. But I think it would be great to add those as well.
I'm curious, what on-device text-to-speech engine did you use?
Well, kind of.
I've been working a ton on some variations and ports of it over the last couple months, but the problem is that I need funding.
So my plan is to set up GitHub Sponsors, where for each project people want me to work on, they can donate any amount, and for each $25, I'll work one hour on that project. It'll have a few related projects that all come from a unified vision I have for 90s.dev -- to be a full platform that recreates 90s-era development, from DOS and QBasic, to Win3 and VB3, not to mention assemblers for those who want it (see my Show HN about hram.dev).
The form stays online for 30 days. To keep the forms online for longer, I will be offering paid plans.
Some minor thoughts: Are the radio buttons meant to have a big outline? Any way to erase the signature? What tech did you use for the frontend?
You can erase existing signatures by overlaying a new signature widget on top of it and specifying a solid color as the background.
The frontend uses React, with Firebase handling most of the backend stuff.
Our target platform is a 40 grams tinywhoop so it’s safe to fly everywhere and makes almost no noise :). A Roomba for mosquitoes!
The main plus compared to traditional systems is that a drone can cover an enormous surface in a short time compared to static systems or man-portable insecticide spraying. Our goal is to be competitive with ITNs against Malaria.
Some links :
https://hackaday.com/2025/03/25/supercon-2024-killing-mosqui...
I know of a Dutch company doing something similar. Focusses on pest detecting/mitigation in greenhouses atm: https://www.pats-drones.com/
This is 100% true.
Kill mosquitoes first. Then go for defense contracts when you can show you have sensors good enough to hunt and kill bugs.
I guarantee solving the bug use case first will put you head and shoulders above all the clueless UAS/cUAS companies out there these days - and there are TONS of them.
It could be a great reconnaissance tool though.
Insect populations worldwide are experiencing significant declines in both abundance and diversity, with several studies reporting reductions ranging from 40% to 75% over recent decades. Estimates suggest that 5%–10% of all insect species have disappeared in the last 150 years, and some global meta-analyses indicate terrestrial insect populations are declining by close to 9% per decade.
> If you don’t want to kill flies, wasps, bees, or other useful pollinators while eradicating the tiny little bloodsuckers that are the drone’s target, you need to be able to not only locate bugs, but discriminate mosquitoes from the others.
> For this, he uses the micro-doppler signatures that the different wing beats of the various insects put out. Wasps have a very wide-band doppler echo – their relatively long and thin wings are moving slower at the roots than at the tips. Flies, on the other hand, have stubbier wings, and emit a tighter echo signal. The mosquito signal is even tighter.
Fascinating engineering! Doesn't seem like it would be possible but it apparently is. There's also more visuals at about 17 minutes in the video embedded in that article, the signatures seem fairly distinct.
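As a very rough illustration of that wing-beat discrimination (toy Python with made-up thresholds, nothing like the real radar pipeline): estimate the dominant modulation frequency of the echo and how wide-band it is, then classify.

  # Toy illustration of the micro-doppler idea: dominant wing-beat frequency
  # plus spectral spread of the echo. Thresholds here are invented.
  import numpy as np

  def classify_echo(signal: np.ndarray, sample_rate: float) -> str:
      spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
      freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
      spectrum[0] = 0.0                                  # drop DC
      peak_hz = freqs[np.argmax(spectrum)]               # dominant modulation frequency
      power = spectrum / spectrum.sum()
      bandwidth = np.sqrt(np.sum(power * (freqs - peak_hz) ** 2))  # spectral spread
      if 400 <= peak_hz <= 800 and bandwidth < 150:      # mosquitoes: fast wing beat, tight echo
          return "mosquito"
      if bandwidth > 400:                                # wasps: long thin wings, wide-band echo
          return "wasp"
      return "other"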
Our brains probably have a dedicated cluster of neurons in there somewhere specifically looking for the Mosquito noise.
My friend once came up with a joke idea for a solar powered ransomware drone that would fly to a random roof and jam wifi signal until someone paid it to leave.
It works okay, but they are unable to target _all_ water surfaces. They use drones, they give out these bacteria to people so they can put it in their rainwater tanks, etc.
https://en.wikipedia.org/wiki/Bacillus_thuringiensis_israele...
Would add some weight and complexity, but if it's purpose-built it would probably be less stress on the drone than constantly pulling props off.
Is the name a word play with "torgnole" at all, or does it mean something?
Or is this more like a stand-in for bug spray/smoke?
People who think we can reengineer and shape ecology by eliminating key species are here on the Dunning-Kruger curve.
Better option: if you really want to fight malaria, go fight that directly and leave mosquitoes out of it.
In the case of mosquitos, though, they cause so much suffering, that it would be stupid to not work on eradicating them because of possible negative consequences.
We have to be careful, of course (widespread use of insecticides is a problem), but targeted measures are really unlikely to cause more harm than mosquitos already do.
What bird, bat, or other bug is getting its food needs fulfilled by hunting mosquitos inside your house?
What's the fidelity of the sonar in detection of flying mosquitoes?
Say, for example, that you can only detect an average-sized mosquito from a range of 1 cm; then you're in random-collision-only territory.
At 17th minute he explains that they look at microdoppler signatures and can detect mosquitos through the wing flapping frequency. Pretty cool.
And there is also a recent post where someone used a similar device to light up the mosquito: https://news.ycombinator.com/item?id=44005200 (20 points | May 2025 | 5 comments), and you must give the final blow yourself, which sounds safer. (Protip: buy an electric fly swatter.)
If they're out of sight and not bothering me, I don't really care. If they're out and possibly annoying and biting me, that's a problem.
Most effort on https://wheretodrink.beer, collecting and cataloging craft beer venues from around the world. No ambition of being exhaustive, but aiming for a curated and substantial list. Since last month I've added a couple of minor things like maps and "where to go next" sections for each venue.
I'm debating whether or not I should add user accounts, and let people maintain venue bucket lists, venue endorsements. Also planning to reach out to the venues and ask if they agree to monthly or quarterly one-click information verification emails from us.
Other projects that receive less love are:
- https://drnk.beer, a small side project offering beer-related linkpages, and @handles for Bluesky (AT Protocol)
- https://misplacy.com, just a dumb and wrong AI landing page for now but was thinking to work towards a drop-in solution for SMBs around lost/found management.
- A platform for helping voluntary associations with repetitive administrative tasks (non-english so not linking. Trying to rank the pain points currently)
- A platform for structuring national soccer club history (initial brain dump idea phase)
- A platform for structuring writing prompts and collaborative fiction writing (initial brain dump / mockups)
For the next month or so I think I need to prioritize what to focus on after summer.
Always interesting to see what others are building and doing. So thanks for sharing!
Also Plex for books (https://www.passagebooks.com/) but that has a much bigger scope.
Had a fun week fixing up the application so it’s 100x faster on 5 different axes, and it’s starting to feel really well polished. Also started to move from reagent to preact/signals in a long slow migration hopefully to hsx.
I also moved the critical algorithm logic into an independent Clojure file that is compiled (and tested) with cherry-cljs — I’m hoping to expand this to ClojErl and jank so I can have isomorphic Clojure code running on the browser, BEAM server, and native swift app :D
It’s getting really close to done, I’m using it now to study 18 different languages, including some really minor ones like Maltese, Welsh, and Cantonese (not sure if Cantonese is really a minor language, but definitely low learner resourced) and it’s easy, slick, and surprisingly effective!
Additionally, please try it out before assuming they’re just claims. I’ve been using it daily for 3 months in 18 languages (roughly one from each major language family), and been in pretty constant contact with native speakers.
True, I cannot vouch for all 120 languages, and sure, there is the occasional error in the lower-resource languages. However, I have put in a lot of work to make sure I have a representative sample, and the errors are currently well within an acceptable range — and I'm working hard to improve them!
Figured you'd appreciate the complexity of having that many languages on a site. jw.org.
Why would you need a UI if the basic way to learn a language is to speak to someone? (I suppose you meant graphical UI.)
Wouldn't a good STT/TTS interface be more appropriate?
Why have a visual interface at all? It’s more convenient to use, is way more engaging, and just better for learning. There is an audio component to the app, and I’m sure more and more audio components will be added, but I would be surprised if in-app audio ever exceeds half of my usage.
For example there is a shadowing exercise that is purely audio based (put in headphones and press play). But what if you want to see what a word means? Or see its gender/case/tense? Mark it as easy or hard, remembered or forgotten? I can look all this information up, plus read a few paragraphs of explanations, in less time than it would take me to formulate a question.
It should be fixed now though :) let me know if you have any more issues!
PS the app is very smooth inside, there was just a rendering issue on the home page.
I signed in and took a look around, getting a few "Error rendering home" errors occasionally, FYI.
I was a little surprised at the CC#/subscribe page, since the prices didn't seem to be matching up with the marketing pricing page.
You might want to consider having like a sandbox account with some sample materials so people can feel the power of the app rather than depending on someone subscribing based on the video only.
Cool idea!
The CC page will be updated next week sometime - for now I’m letting people register at the old prices (50%) while I’m plugging all the holes.
And the sandbox is coming! It'll be on the marketing page, just under the video. It'll probably take a month to get around to, though. That, plus a trial week ($3.99), should hopefully give the user a taste.
Also, thinking about resurrecting http://opalang.org/. We'll see if I have the energy to work on that.
It's designed for sync, so rather than fetching you can hook it up to a sync engine (any!) to keep your front end in sync with your backend. It's built on Tanstack Query, making the sync engine optional, and a great path for incremental adoption.
The query engine uses a TypeScript implementation of differential dataflow to enable incremental computation of the live queries - they are very fast to update. This gives you sub-millisecond, fine-grained reactivity for complex queries (think SQL-like joins, group by, etc.).
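As a toy illustration of the incremental part (just the concept, not TanStack DB's actual API): instead of recomputing a grouped aggregate from scratch, the engine consumes deltas and only touches the affected groups.

  # Toy sketch of incremental view maintenance: a live GROUP BY count updated
  # from (key, +1 / -1) deltas instead of being recomputed from scratch.
  from collections import defaultdict

  class LiveGroupCount:
      def __init__(self):
          self.counts = defaultdict(int)

      def apply(self, deltas):
          changed = {}
          for key, diff in deltas:      # diff is +1 for insert, -1 for delete
              self.counts[key] += diff
              changed[key] = self.counts[key]
          return changed                # only the affected groups are reported

  view = LiveGroupCount()
  view.apply([("todo", +1), ("todo", +1), ("done", +1)])   # {'todo': 2, 'done': 1}
  view.apply([("todo", -1)])                                # {'todo': 1}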
Having a lot of fun building it!
https://tanstack.com/db/latest https://github.com/TanStack/db
[0] https://apps.apple.com/us/app/reflect-track-anything/id64638...
Encryption uses Fernet (symmetric), and all decryption happens only at point of access. There's no data retention after viewing or expiration. Optional analytics give visibility without compromising identity. Users can get notified when their shared links are accessed by the recipient, and they can set passwords for enhanced security. Limitations include email-based signups and no end-to-end encryption (yet).
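For readers unfamiliar with Fernet, the general pattern (shown here with Python's cryptography library; not ClosedLinks' exact code) looks like this:

  # General Fernet pattern (symmetric, authenticated encryption); decryption
  # only happens at point of access, and the key never leaves the server.
  from cryptography.fernet import Fernet

  key = Fernet.generate_key()          # stored server-side
  f = Fernet(key)

  token = f.encrypt(b"the message or file bytes")   # done at upload time
  # ... later, at point of access ...
  plaintext = f.decrypt(token)                      # raises InvalidToken if tampered with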
You can check it out at https://www.closedlinks.com/
You can read the white paper here - https://www.closedlinks.com/white-paper/
Curious as to why you store the data in the database in b64 as opposed to files on disk. What's the reasoning for that? Doesn't it make storage/backups/etc more complicated?
Not an expert myself, but I opted for in-browser encryption, in chunks, so as to avoid memory limitations (at least in some browsers, not FF yet), and in-browser gzip so as to keep file size down and speed things up.
I find your niche quite interesting (journalists, whistleblowers) but given the high stakes of that perhaps an open source or more collaborative approach would be easier to promote.
Another idea I've tried out but not pursued is some sort of browser extension/addon (I used nwjs, similar to Electron) that offers client-side encryption for any site (form fields, really). So you'd only post encrypted stuff to whatever service (email, Reddit, HN, whatever) and only someone with the key would get to read it (well, assuming they have the key and the same extension). Just throwing the idea out there; I'm sure others have thought about something along those lines before. The details to get it right are tricky (UX-wise), but for your target audience it may be well worth the extra work.
Keep it up!:)
I opted for database storage to simplify the management of ephemeral data. For a solo project, and as someone still learning, this was a practical way to keep the codebase manageable while focusing on core features like encryption and token-based access control.
However, you should note, in case you missed it in the white paper, that messages and files are deleted upon view (for view-once links) or expiry, whichever comes first. This ensures that the ~33% storage overhead from base64 is temporary, as a file only occupies space until it’s accessed or expires.
That said, you're absolutely right that base64 encoding adds unnecessary storage overhead and could complicate backups for large files. I also recognize that storing files on disk could be more efficient for large-scale use cases. As (or should I say IF?) the project scales with users, I'll definitely consider optimizations like disk storage or compression (your gzip idea is great!).
If I run into optimization problems, then it means people are using my product, and that sounds like one of them good problems (Marlo Stanfield's voice).
Your suggestion of in-browser encryption is super compelling, especially to assure users of total privacy. I noted in my white paper that client-side encryption is a future goal to address the limitation of the current server-side encryption, and your approach aligns with that vision.
The browser extension idea is also fascinating, I did not think of that.
I’m open to collaboration (again, as mentioned in my white paper) and would love to discuss ideas for making ClosedLinks more auditable while still keeping it commercially viable/sustainable. I’d be excited to hear more about your project or explore ways we could collaborate on privacy-focused tools.
Thanks again for the encouragement and for sparking this discussion!
On the file storage, I generally recommend going straight to a cloud interface to separate storage backend from the actual storage medium... There are self-hosted options for an S3 compatible backend you manage, or you can use actual S3 or one of several other providers for S3 style storage.
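For example, with boto3 the application code stays identical whether the backend is a self-hosted S3-compatible store (e.g. MinIO) or a hosted provider; only the endpoint and credentials change (names below are placeholders):

  # The app talks to one S3-compatible API; only endpoint/credentials differ
  # between a self-hosted backend and a hosted provider.
  import boto3

  encrypted_bytes = b"..."  # whatever the app would otherwise store in the DB

  s3 = boto3.client(
      "s3",
      endpoint_url="https://storage.example.com",   # placeholder endpoint
      aws_access_key_id="...",
      aws_secret_access_key="...",
  )

  s3.put_object(Bucket="closedlinks", Key="uploads/abc123", Body=encrypted_bytes)
  obj = s3.get_object(Bucket="closedlinks", Key="uploads/abc123")["Body"].read()
  s3.delete_object(Bucket="closedlinks", Key="uploads/abc123")   # after view/expiry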
I had this idea (linked in my 3rd most recent comment) whereby what if I wanted to give someone some crypto via a set of keywords? Maybe you could turn this into some kind of PayPal for crypto.
Perhaps think about a video demo for this site.
Good luck anyway.
So regarding the sender and recipient, let's say I wanted to send you something. A message or a file, but I wanted to maintain plausible deniability about whether I sent it. I wanted a way of doing this, and the solution I came up with was that the link you receive publicly is not the link you land on to access the message or file. Anyone who lands on the publicly shared link gets redirected to a new URL each time.
But even without the deniability angle, it could be a way of sharing files with one time links. The links work once. And there's password protection, if enabled.
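The rough shape of it, as an illustrative sketch (Python/Flask here, not the production code): the public link never exposes the real location; each visit mints a fresh, single-use URL that is burned as soon as the content is served.

  # Illustrative sketch only: one-time redirect links.
  import secrets
  from flask import Flask, redirect, abort

  app = Flask(__name__)
  SECRETS = {"pub123": b"the protected message"}   # public_id -> content
  one_time = {}                                    # token -> public_id

  @app.get("/l/<public_id>")
  def public_link(public_id):
      if public_id not in SECRETS:
          abort(404)
      token = secrets.token_urlsafe(32)            # a new URL on every visit
      one_time[token] = public_id
      return redirect(f"/view/{token}")

  @app.get("/view/<token>")
  def view(token):
      public_id = one_time.pop(token, None)        # each token works exactly once
      content = SECRETS.pop(public_id, None) if public_id else None
      if content is None:
          abort(410)
      return content                               # delete on view for view-once links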
The implementation might not be perfect, but open to ideas, of course.
Oh, and there's an API feature for generating links and uploading files, for what it might be worth.
So instead of sending someone an image in an email, you send them a ClosedLink, they can view it once, and you avoid having to send them the image as an attachment?
Some screenshots would be nice.
https://www.producthunt.com/products/closedlinks?launch=clos...
Yea, I don't quite like the email idea because it doesn't fit the idea in my head. I want a tool where I can share a ClosedLink with someone without having to ask them their email and getting asked "why" questions. The link should be shareable via any communication channel, and they can be hidden behind passwords so only the intended person can access the link.
Maybe I'm bugging and my implementation/execution is not as perfect as I thought it would be when I started. lol
> Are you expecting a lot of signups? As plan B, maybe aim to get a grant from the Oasis Protocol Foundation - they're all about privacy - and quiz them on what to do next.
Ha! I was hoping to get lots of sign ups, but apparently that has failed. I'd never heard of the Oasis Protocol Foundation, I'll look into it.
Thanks for taking the time to respond. Appreciated.
The idea came to me when we were trying to find ways to manage Terraform secrets. CI vars were a no-go because people sometimes wish to deploy locally for testing stuff, and tools like Vault have honestly been a pain to manage, well, for us at least. So I have been building this tool where the variables are encrypted with `age`, have RBAC around them, and an entire development workflow (run ad-hoc commands, export, templating, etc.) that can easily be integrated into any CI/CD alongside local development. We're using this and storing the encrypted secrets in Git now, so everything is version-controlled and can be found in a single place.
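The core flow is roughly the following (a simplified Python sketch of the idea, not Kiln's actual code; keys, paths, and filenames are placeholders): secrets live in Git as an age-encrypted file, and at run time they are decrypted and injected into the child command's environment.

  import os, subprocess

  RECIPIENT = "age1examplepublickey..."                      # team/CI public key (placeholder)
  IDENTITY = os.path.expanduser("~/.config/kiln/key.txt")    # private key (placeholder)

  def encrypt(plaintext_env: str = ".env", out: str = "secrets.env.age") -> None:
      # Encrypt the env file so the ciphertext can be committed to Git.
      subprocess.run(["age", "-r", RECIPIENT, "-o", out, plaintext_env], check=True)

  def run_with_secrets(cmd: list[str], encrypted: str = "secrets.env.age") -> None:
      # Decrypt on the fly and inject KEY=VALUE pairs into the command's environment.
      decrypted = subprocess.run(["age", "-d", "-i", IDENTITY, encrypted],
                                 check=True, capture_output=True, text=True).stdout
      env = dict(os.environ)
      for line in decrypted.splitlines():
          if line and not line.startswith("#"):
              key, _, value = line.partition("=")
              env[key] = value
      subprocess.run(cmd, env=env, check=True)   # e.g. run_with_secrets(["terraform", "plan"])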
Do give it a try. I am open to any questions or suggestions! Interested to know what people think of this. Thanks!
[1]: https://kiln.sh
Appio lets you add mobile widgets and native push notifications to your web app within minutes, without building or maintaining mobile apps, hiring developers, or dealing with app stores. You can try it at: https://demo.appio.so/
If you’re building a web-based product without a mobile app, or just want to try Appio, I’d love to chat! You can reach me directly via https://my.appio.so/ or drop a comment here.
Would love feedback - in open alpha:
www.draftboard.com
GET /hello
|> jq: `{ world: ":)"}`
pipeline getPage =
|> jq: `{ sqlParams: [.params.id | tostring] }`
|> pg: `SELECT * FROM pages WHERE id = $1`
|> jq: `{ team: .data.rows[0] }`
GET /page/:id
|> pipeline: getPage
WIP article that explains more: https://williamcotton.com/articles/introducing-web-pipe
I would love feedback!
The validation piece makes it feel a bit like the Rails mindset for people who work better in FP.
I'd make a couple of suggestions for the docs: maybe a bit more discussion of how we'd test our webpipe code. I see why you've called them 'middlewares', but maybe the term 'macros' or 'pipeline functions' might avoid confusion with express/connect middlewares.
And thanks for the motivation to figure out a good way to talk about testing and generally clean up the (very messy) docs.
It’s comments like yours that give someone the drive to continue.
1. Software: An OS that masquerades as simple note taking software.
Goal is to put an end to all the disparate AI bullshit and apps owning our data.
I solved context switching for myself ages ago and now I'm just trying to productize it outside my 3 companies internal usage.
It also solves context switching for AI agents as a byproduct.
2. Ethics: Give AI and proto-AGI a reason not to kill us all.
An extremely minimal, empirical, naturalistic moral framework that is universally binding on all agents so AI won't kill us all. I view the alignment problem as an epistemic moral-grounding issue, and the current pseudo-utilitarianism isn't cutting it. Divine command, discourse ethics, utilitarianism, deontology: they are all insufficient.
edit: fixed url
On parental leave with my third. We are on month 4 so I have (a bit more) free time in the late evenings after we put the older ones to bed.
It is possible to run Playwright inside a Chrome extension; however, it requires the Chrome DevTools Protocol (CDP) to automate the browser, which really hurts the user experience, is very slow, and opens security vulnerabilities. Chrome extension APIs can accomplish maybe ~85% of the same functionality as CDP or WebDriver BiDi -- it isn't complete because of security features which shouldn't be bypassed anyhow. For example, instead of calling a function in a content script with 'script.callFunction' via WebDriver BiDi in Playwright, a function is called with chrome.scripting.executeScript(). It will be 2 or 3 more weeks before I post a PoC.
This is following my work using VSCode's core libraries in a Chrome extension exactly as they are used in an Electron app to drive VSCode and Cursor. The important part is VSCode's IPC / RPC which allows all the execution contexts and remote runtimes to communicate with each other. [0] This solves many problems I have had in the past automating browsers with a Chrome extension.
The two important concepts from Puppeteer/Playwright are managing the lifecycle of pages (tabs) and frames and the other is using handles / locators.
There are a lot of limitations using the extension API in any browser instead of CDP / Webdriver BiDi. I'm curious, how would you use this idea?
Atmos Sleep Lamp: A bedside lamp that reduces blue light at night and wakes you up more naturally with light in the morning [1]
[0] https://restfullighting.com/products/bedtime-bulb-v2-preorde...
[1] https://restfullighting.com/products/restful-atmos-preorder
Supports Postgres, MySQL, SQLite, MSSQL, ClickHouse. Includes AI export to generate DDL in any SQL dialect.
17.5k+ GitHub stars. Feedback welcome!
It's an AI video game sprite animator.
Supports deployments of your own apps as well as 15k+ other packages (postgres, airbyte, dagster, etc) via helm charts.
https://github.com/czhu12/canine https://canine.sh
Reason? Got sick of paying for the massive markups on PaaS but missed the simplicity and convenience.
https://www.youtube.com/watch?v=XlolXvBDmRY
Cubic chunks, full lighting engine, opting to be non-deterministic, everything is unsigned integer math except for rotation and rendering, multiplayer is mostly implemented, built to be able to handle heavy simulation.. the foundation work is almost done. Right now it's just a hobby to try to build the best thing I can build. I work on it because it's fun.
https://www.literally.dev/resources/marketing-to-developers-...
Now bootstrapping https://www.minute-master.com - AI formal minute generation for regulated firms primarily in financial services space but also free to use for charities.
It's kinda like Magic Wormhole without typing. It uses iroh for the p2p networking - on both ends, and also in the little web app that you use to scan the QR codes and start the transfer.
It's possible to install with nix and I'm working on other package managers. I'm targeting Linux and Mac.
It has a ticking sound, and the notifications remind you to stay hydrated, stretch, and walk. I've used many different Pomodoro apps, and I'm trying to consolidate the features I like the most from each.
Right now it works quite well on Linux and it should work on Mac.
Recently I also made a font for it! https://untested.sonnet.io/notes/433-how-to-make-a-font-that...
I'm also thinking about organising the usage patterns, because over the past few years I've collected a few interesting groups: mental health focussed users, script writers, neurospicy folks, bloggers, squirrel enthusiasts. I'm thinking about this here: https://untested.sonnet.io/notes/how-people-use-enso/
Since we added MCP and the use of structured output to "spill" multiple return values into adjacent cells, it is the quickest way I know of to monitor competitors' blogs every day before my 09:00 meeting. And also the quickest way I know of to test new AI models. I have a sheet with SimpleQA, MMLUPro, or GPQA Diamond, and testing a new model is a matter of adding a new column. The whole idea is to enable normal people (like, non-techies) to automate manual, repetitive tasks with AI like programmers routinely do.
I built something like that for Google Sheets in early 2024 and now I'm thinking whether I missed an opportunity.
I wouldn't worry too much about missing out; as you're probably very well aware, whatever you choose to work on takes incredible amounts of time and energy to get off the ground. Now you just have more time to put into something else :)
You can at least play some games now though:
https://housepriceguess.com/roundup/v/holiday-destinations/p...
Enjoy. And yes that really was my wife playing one of the games for the first time in the video ;)
Last month I decided to take out a subscription of my own to Claude Code to use in my personal time, mostly for practice and educational purposes.
So the past few weekends and the occasional week night I've been vibe-coding a game for iOS/MacOS using Swift and SpriteKit.
I have some experience with Swift previously but not at work, so it's extremely experimental for me. However it's been going pretty well. Most of the hang-ups are Xcode configuration issues.
It's interesting to poke Claude a bit and discover what it's actually decent at and awful at.
Gameplay mechanics-wise it's been able to implement things as requested generally without problems.
UI elements like menu screens and such, it has been almost completely unable to do, no matter what prompt I give it.
It's safe to say I would never call the codebase professional quality. However, the base game has been implemented well enough to play without bugs and I've been solidly impressed.
The other issue I've had is if I want to change project/target/build settings, Xcode doesn't provide an easy way to do so. You need to poke around the UI to find where these settings and file relationships are set and change them that way.
There's a project file that I believe contains them all but it's not intuitive to modify by hand.
I'll still need to implement some kind of "AI" opponent or hack together P2P networking to demo it though; playing against yourself is fine for testing, but not really how the game is meant to be played.
It's hand coded so far, but I'm hoping AI can be a big lift for churning out the multiple thousands of named special rules, as most of these are very simple (+1 here, reroll there, etc).
Any WH40k players out there? Love to hear your thoughts!
This is the first time I've ever actually released something with a monetization option, so I'll be interested to see where it goes. It's a small enough niche that I think I have several features that genuinely don't exist anyplace else, like the ability to lemmatize even heavily inflected words (a very common stumbling block for learners of Finnish).
A web app would obviously be much easier to monetize, but then I would lose the buttery smooth feel of the search as it currently exists.
Tsemppiä! It's not live yet, but when it is it will be at https://taskusanakirja.com/.
It makes answering customer emails 10x easier.
The magic is training templates: templates that get suggested (and eventually auto-selected) and personalized by an LLM for every reply.
Every reply sent trains it to auto-select that training template for future similar customer emails.
The stack is Ruby on Rails and Postgres hosted on DigitalOcean. The LLM currently is Kimi K2 hosted on Groq.
For now, it only has a daily newsletter fully compiled by AI agents without any human intervention. I plan to add publicly listed companies (semiconductors, energy providers, etc.) onto the platform. Already found lots of good data points that can be used by analysts, researchers, or observers.
https://www.inclusivecolors.com/
Example with colors from HN to play with (the grays used for links and main body background, orange from the navbar, green from newbie usernames):
https://www.inclusivecolors.com/?style_dictionary=eyJjb2xvci...
The main features are: it shows whether your colors meet WCAG accessible contrast on a live UI mockup; you get quick and precise control over every color grade in a swatch (via editing HSL curves) instead of these being auto/AI generated; and it helps you create a full palette of tints/shades for each color rather than just a handful of colors.
The idea here is to design your tints/shades upfront with accessible contrast in mind so you don't run into problems later. Most brand style guides I see only have around 5 brand colors, and when you need more tints/shades later to implement actual UIs and landing pages, you get into a conflict where you can't find contrasting colors to introduce that match the brand.
I've had interesting feedback about the different workflows designers have so far. It's tricky to make a single tool that fits everyone's workflow, but I might end up with multiple modes, e.g. easy but more opinionated, and more freeform for advanced users.
I admit it has a learning curve at the moment but I'm not sure how simple you can make it without giving up control. I think once you get it though, you'll realise it's much easier to make a custom accessible palette than you thought.
You can use the CSS export in regular CSS projects directly e.g. via `color: var(--red-900)`, or something like `--bs-danger: var(--red-500)` for Bootstrap projects with semantic naming. The same export format works for Tailwind too because since version 4, Tailwind is mostly configured via CSS variables now.
I probably need to make this more obvious, but if all your swatches have the linked/shared lightness option set, you can pick lightnesses where all grade 500 colors contrast against all grade 100 colors, all grade 600 colors contrast against all grade 200 colors etc. so when you're picking colors in CSS, you know by design which colors will contrast without having to go check them.
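For reference, the check behind all of this is just the WCAG 2.x contrast-ratio formula, e.g.:

  # WCAG 2.x contrast ratio between two sRGB hex colors; >= 4.5 passes AA for
  # normal text, >= 3 for large text / UI components.
  def _luminance(hex_color: str) -> float:
      channels = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
      linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in channels]
      return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2]

  def contrast_ratio(fg: str, bg: str) -> float:
      l1, l2 = sorted((_luminance(fg), _luminance(bg)), reverse=True)
      return (l1 + 0.05) / (l2 + 0.05)

  print(contrast_ratio("#ff6600", "#ffffff"))   # HN orange on white: ~2.9, fails AA for body text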
Thanks, feel free to message me if you want some tips!
Most accessibility/WCAG guides say things like "Tip #1: Choose accessible colors", but they don't go into any detail about how you pick sets of colors that contrast, like text/background/border colors for buttons on different backgrounds, as if it's trivial. It's actually really tricky and can feel impossible in some scenarios until you internalize the basic rules and constraints.
I usually see people saying the opposite, that it's easy to pick accessible colors, when it's often not, especially when you have existing branding to stick to.
I am using Cerebras for book translations, verb extraction, and all LLM-related tasks. For TTS I am using Cartesia. I have played around with ElevenLabs; they have slightly more natural-sounding TTS, but their pricing is too steep for this project. Books would cost a couple of hundred euros to process.
Still in beta and learning a lot from each customer we onboard. We're actually going through our own SOC2 assessment in August, which has been... educational. Recently added business continuity and incident tracking features. Trying to build something that's actually helpful rather than just another compliance checkbox tool.
If anyone's interested: humadroid.io or feel free to join our beta waitlist at https://humadroid.io/join-the-humadroid-beta-waitlist/
If anyone's been through the compliance journey, would love to hear what worked (or didn't work) for you!
Idea is to add a lot more NSFW stuff like sexy avatars and mocap animations, cinematic controls, even a marketplace of content and assets.
(Built for fun as I optimized my daily spending to get a year's worth of flights for free and friends wanted it haha)
First game in progress https://reprobate.site
Any feedback is welcome!
I would suggest speeding up the rate at which the text renders on screen. The average person reads 250-300 wpm, but you could probably speed this up a bit more and just leave it on the screen long enough to ensure the lower bound is met.
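e.g. something as simple as (Python sketch):

  # Rough on-screen duration from word count, targeting the slower end of
  # typical reading speeds so the lower bound is always met.
  def display_seconds(text: str, wpm: int = 250, minimum: float = 2.0) -> float:
      return max(minimum, len(text.split()) / wpm * 60)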
https://github.com/bugthesystem/Flux
Flux is a high-performance message transport library for Rust that implements patterns inspired by LMAX Disruptor and Aeron. It provides lock-free inter-process communication (IPC), UDP transport, and reliable UDP with optimized memory management for applications with low latency requirements.
This week I've been working on predicting upcoming paychecks with Node.js so we can automatically decide how much money to move into your budgets when you get paid. I pull the past 3 months of transaction data from our Postgres database using Prisma and run some analysis.
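Conceptually the detection is something like this (sketched in Python for brevity; the real implementation is Node.js/Prisma against Postgres): find recurring deposits of similar size, then project the next pay date from the median gap between them.

  # Conceptual sketch only: group similar-sized deposits, take the largest
  # recurring group, and project the next pay date from the median gap.
  from datetime import date, timedelta
  from statistics import median

  def predict_next_paycheck(deposits: list[tuple[date, float]]):
      deposits = sorted(d for d in deposits if d[1] > 0)
      groups: list[list[tuple[date, float]]] = []
      for dep in deposits:
          for group in groups:
              if abs(dep[1] - group[-1][1]) <= 0.10 * group[-1][1]:   # within 10% of last amount
                  group.append(dep)
                  break
          else:
              groups.append([dep])
      if not groups:
          return None
      recurring = max(groups, key=len)                 # the payer that shows up most often
      if len(recurring) < 2:
          return None
      gaps = [(b[0] - a[0]).days for a, b in zip(recurring, recurring[1:])]
      next_date = recurring[-1][0] + timedelta(days=round(median(gaps)))
      return next_date, median(amount for _, amount in recurring)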
People think syncing and delayed transaction data is normal, and I'm working on changing that by having the budgeting built into the checking account. Along with a high-yield savings account, goal envelopes, bill envelopes, joint accounts, etc.
The app is designed for older adults who enjoy reminiscing but struggle to organize their thoughts into a coherent narrative. The goal is to preserve their hard-won insights and pass them down—to family members who may be too busy to ask the right questions now, and to future generations who would otherwise never hear these stories.
I have a working prototype that allows me to test the interview flow, and I’ll soon be sharing it with friends and family for initial feedback. I’m now looking for a designer to collaborate on the next phase.
Design will be a critical part of this app. The way stories are visually presented will be central to the user experience and will likely determine the app’s success. If you’re a designer interested in this kind of work, I’d love to hear from you. Given the text-heavy nature of the app, experience with typography and content-focused design will be especially valuable.
We called it Journalaist, and billed it as a personal ghostwriter. What we found is that it lives or dies by the quality of the interview.
You can find me at:
My great grandmother, who lived into my 20s, wrote a 10 page memoir about growing up - life stories, people, places etc... And I found it super interesting - I built a vacation around the places last summer.
I asked her daughter/my grandmother to do the same, but she wasn't interested. And then I've thought about the exercise myself - it's hard to think of things in my life that a future great-grandchild might find interesting. And it's not clear whether I find my great-grandmother's story interesting only in contrast with the financial hardships I did not face. How do you pick out the interesting from the mundane? What is most interesting about today 100 years from now?
And I can see the potential for core interview questions to help draw it out.
I'm intrigued by the use of level set domains here. I've only encountered those in other types of numerical simulation where the intent is to avoid surface meshing.
I suppose moving an object in this context is as simple as composing its level set function with a translation and rotation. However, deforming is non trivial, especially local deformations, right?
How do you efficiently resolve collisions? At the scale of an element, it seems to be a simple check of nowhere should both level set functions be negative. But how do you select the elements to check? Do you somehow keep track of only the elements traversed by the objects in a time step, or some other method? I would guess your method should be more efficient than intersecting meshes, is that what you've found?
I'm particularly interested by your mention of high-order boundary parameterizations, what do you mean by that exactly?
Sorry to bombard you with questions, I was intrigued by a combination of things I'd never seen together before!
What I'm saying about level set domains is maybe a bit misleading. I'm not talking about level set methods like you might find in the book by Fedkiw and Osher, per se (but if I found something useful from that literature, I wouldn't hesitate to borrow it...). They are easy to model with and give clean geometry compared to e.g. a B-rep. I'm only interested in messing around with toy and artificial problems at this point, so it isn't much of a limitation. At the same time, given a B-rep, there are a number of ways to get clean level set geometry from them...
By high-order boundary parametrization, I mean: given an implicit surface, how to quickly go to a parametric representation of the boundary which is accurate to many (say, 13+) digits and relatively space efficient, even in the presence of sharp features in the level set (common for CSG...). This is easy in 2D, harder in 3D...
I guess the subtle point here is that for a finite element simulation, the only thing that really matters is integration. For that, you only need a soup of patches; there's no real need to assemble them into a B-rep or any other kind of mesh. But then if you take a time step, you have to think about converting from parametric back to implicit. I'm trying to figure out if there is some kind of hybrid parametric-implicit data structure that is particularly useful for simulations of this type. Remains to be seen, but there are many fun geometry problems to solve along the way.
Early Access for a new terminal emulator [0] bringing dead text to life. It's my professional dream to evolve our conception of terminals without bringing in the bloat of, say, electron (read: staying native).
>Do you have any new ideas you're thinking about?
I like the thought of dropping you into the terminal right on the browser. It wouldn't be the real thing, but having a toy to play with is superior to dry docs.
Everyone knows reading is the best way to build vocabulary, but many avoid it and turn to flashcards or spaced repetition because long texts can feel overwhelming, and they often have to refer to a dictionary.
This app gives users short, engaging passages focused on comprehension. While reading, users guess word meanings from context and find out whether they got it right by answering a few questions below. I believe this will be helpful for people who haven’t had much success with popular vocabulary learning methods.
I shared it on HN earlier (https://news.ycombinator.com/item?id=44543063), but it didn’t get much attention. If you're interested in novel learning methods or vocabulary, I’d love your feedback.
P.S. Login is required since the app uses LLMs to generate interesting passages. You can register with any non-existent email if privacy is a concern.
Part of another odd project, and testing how long the material holds up. =3
The game is mostly done, so I'm now focused on tooling to make it easier for me to craft each week's puzzle. I'm solving some interesting graph and optimizations problems
After working on and using many MCP servers, I hit a couple of issues multiple times:
* Do I configure 2 MCP servers of same type for 2 different API Keys or do I manually update configurations all the time? (e.g. production and development environments)
* when I have too many tools enabled, I noticed that either I am hitting the context limit too quickly or the LLM is hallucinating when choosing the right tool
* Some MCP servers expose a lot of tools, I want to disable some of them forever, instead of doing configuration per AI assistant (first for Claude, then Cursor and so on)
* Most MCP servers are hosted by third parties; as a privacy-conscious person, I do not want to share my credentials with third parties.
And I am building Aiko - AI tools marketplace: https://getaiko.app
NOTE: Gmail and Calendar apps are currently under CASA Tier 2 security assessment, hence not published to production. But you can see demo usage here: https://www.youtube.com/watch?v=ZgEy6Y1kfn4
Firstly a DevManual - for “any” software team/IT dept - how to think about the philosophy, history and practice of basically everything - release management, backup and recovery or IAM and security and marketing-by-engineering or CSS
It’s kind of “this much I know” and a working docker based OSS “software team in a box”
And the second one is really expanding on the philosophy - how software is changing companies and how democracy works with software
https://github.com/turbolytics/sql-flow
Building a company around a tool is hard. There's been some interest but streaming is kind of commoditized.
I'm taking everything I learned building it and working on a customer-facing security product, more to come on that :)
I also used the tool to generate an Adult Chess improvers FIDE rank list for all federations around the world. Here are the July 2025 rankings though it still needs major improvements in filtering - https://chess-ranking.pages.dev
------------------
Another idea that I have been working on for sometime is connecting my Gmail which is a source of truth for all financial, travel, personal related stuff to a LLM that can do isolated code execution to generate beautiful infographics, charts, etc. on my travels, spending patterns. The idea is to do local processing on my emails while generating the actual queries blindly using a powerful remote LLM by only providing a schema and an emails 'fingerprint' kind of file that gives the LLM a sense of what country, region, interests we might be talking about without actually transmitting personal data. The level of privacy of the 'fingerprint' vs the quality of queries generated is something I have been very confused with.
- Live PDF preview
- 100% client-side
- No sign-up required
- Includes a Stripe-style invoice template
- Built with modern web tech – simple to self-host or fork
Repo: https://github.com/VladSez/easy-invoice-pdf
Demo: https://easyinvoicepdf.com
Would love feedback, contributions, or ideas for other templates/features!
Only minor gripe is that the "support my work" popup is a bit aggressive.
The long URL is a compromise that lets the service work without requiring sign-ups or storing user data.
I’ll definitely try to make the “support my work” popup less aggressive.
"total": 18,
"vatTableSummaryIsVisible": true,
"paymentMethod": "wire transfer",
"paymentMethodFieldIsVisible": true,
"paymentDue": "2025-08-27",
"stripePayOnlineUrl": "https://example.com",
"notes": "Reverse charge",
"notesFieldIsVisible": true,
"personAuthorizedToReceiveFieldIsVisible": false,
...
to this:
"1": 18,
"2": 1,
"3": "wire transfer",
"4": 0,
"5": "2025-08-27",
"6": "https://example.com",
"7": "Reverse charge",
"8": 1,
"9": 0,
...
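The mapping itself is trivial, e.g. (Python sketch of the renaming, using the field numbers above; the real thing would be TypeScript):

  # Toy sketch: a fixed field table maps long JSON keys to short ones before
  # the data goes into the URL, and back again on load.
  FIELDS = ["total", "vatTableSummaryIsVisible", "paymentMethod", "paymentMethodFieldIsVisible",
            "paymentDue", "stripePayOnlineUrl", "notes", "notesFieldIsVisible",
            "personAuthorizedToReceiveFieldIsVisible"]

  def shorten(data: dict) -> dict:
      return {str(i + 1): data[name] for i, name in enumerate(FIELDS) if name in data}

  def expand(packed: dict) -> dict:
      return {FIELDS[int(k) - 1]: v for k, v in packed.items()}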
Cut from ~1400 to ~700 chars in my testing, which is still a lot, so idk if you think it would be worth the extra code.
https://github.com/scallyw4g/bonsai
I also wrote a metaprogramming language which generates a lot of the editor UI for the engine. It's a bespoke C parser that supports a small subset of C++, which is exposed to the user through a 'scripting-like' language you embed directly in your source files. I wrote it as a replacement for C++ templates and in my completely unbiased opinion it is WAY better.
I'm 97% certain this is because the faster code leads to more page thrashing in the mmap-based index readers. I'm gonna have to implement my own buffer pool and manage my reads directly like that vexatious paper[1] said all along.
[1] https://db.cs.cmu.edu/papers/2022/cidr2022-p13-crotty.pdf
You make it sound like I was trying to troll everyone when we wrote that paper. We were warning you.
Is it being cached for future queries or are you just talking about putting it in memory to perform the computation for a query?
Caching will likely be fairly tuned toward the operation itself, since it's not a general-purpose DBMS and I can fairly accurately predict which pages will likely be useful to cache or when read-ahead is likely to be fruitful based on the operation being performed.
For keyword-document mappings some LRU cache scheme is likely a good fit, when reading a list of documents readahead is good (and I can inform the pool of how far to read ahead), when intersecting document lists I can also generally predict when pages are likely to be re-read or needed in the future based on the position in the tree.
Will definitely need a fair bit of tuning but overall the problem is greatly simplified by revolving around very specific types of access patterns.
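The LRU piece itself is simple enough; a toy sketch (Python, ignoring pinning, read-ahead hints, and concurrency, all of which the real pool would need):

  # Toy sketch of the LRU part of a buffer pool: a fixed number of page-sized
  # buffers, evicting the least recently used page when full.
  from collections import OrderedDict

  PAGE_SIZE = 4096

  class BufferPool:
      def __init__(self, file, capacity_pages: int):
          self.file = file
          self.capacity = capacity_pages
          self.pages: OrderedDict[int, bytes] = OrderedDict()

      def read_page(self, page_no: int) -> bytes:
          if page_no in self.pages:
              self.pages.move_to_end(page_no)          # mark as recently used
              return self.pages[page_no]
          self.file.seek(page_no * PAGE_SIZE)
          data = self.file.read(PAGE_SIZE)
          self.pages[page_no] = data
          if len(self.pages) > self.capacity:
              self.pages.popitem(last=False)           # evict least recently used
          return data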
Edit: I should specify they shard the corpus by document so there isn't a replica with the entire term dict on it.
Wow why is that? Do you use a vector index primarily?
It’s been a journey but getting close to launching our first version to pilot customers in August. We use an enormous amount of AI tokens every month to extract data not possible with any traditional player in this media monitoring space. Benchmarking competitors, tracking impactful discussions, and receiving actionable brand insights.
If you are currently using one of the big media monitoring companies, I’d love to chat!
https://github.com/rush86999/atom
Check it out.
Soon approaching a 1.0 release for sanctum once I get my brain out of vacation mode and into hacking mode again. A lot has happened this year and I am excited.
I will be talking about how sanctum and its cathedrals work at sec-t 2025 [2] so in full swing working on the demos and presentation.
check it out: https://mixpeek.com
Most recently, I've been incorporating a lot of improved UX design. The app has always used a playlist metaphor, i.e. your database of flashcards is your library and you can sort them in different ways and then hit Play to start reviewing. Within the review session itself, you go through the cards in the playlist in small batches so that it's less overwhelming, among other reasons. After every batch, the app returns to an overview screen so you can see what you've just reviewed so far in the session.
The challenge has been designing this overview screen so it's clear where you are in the playlist without making it overwhelming. I finally came up with a good design this week, which I was quite happy with: https://mastodon.social/@allenu/114921335089371494
I've been pleasantly surprised at how much of an improvement this new UI has made on how the product feels. The old UI only showed you the history of cards you've reviewed in the session and highlighted the most recent batch of cards. This new one shows you the full playlist, but redacts the contents of the playlist ahead, so you immediately get a sense of how much there is left to do without being shown the contents of those cards. Interestingly, this has the effect of making you want to see what is in those cards, i.e. to keep reviewing!
I don't have any testers on this new version at the moment, but I'm considering having some beta testing once I get closer to release. No videos either, but I'm planning on writing some blog posts going over some of the UI flows and new features in the app to help with promotion. Once I polish the lesson UI a little bit, I'll probably post a video on YouTube of it too.
It calculates optimal ways to load boxes into trucks or containers, considering stacking rules, fragility, and real-world constraints. You can drag boxes like 3D Tetris or upload photos to auto-estimate item dimensions. Recently added: batch-wise guided loading for warehouse use cases.
We're at ~$400 MRR and just opened up a 14-day free trial. Feedback, trials, and intros to logistics folks welcome.
Absorbing low (male voice; 80Hz - 300Hz, not including overtones) frequencies normally takes a fair bit of dampening material, unless something like a Helmholtz resonator [1] is used. The paper shows that a ~100x100x12mm 3D printed Helmholtz resonator may entirely absorb 125.8Hz (in an extremely narrow band). I'm uncertain about transmission losses (i.e. volume of the frequency perceived behind the material).
So far, I have created/vibe-coded a script to take the inputs: frequency and tile dimension (it's square). The output is a 3D object (.stl) which can be printed.
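The relation behind that script is, as far as I understand it, the standard Helmholtz resonator formula (parameter names below are mine, not the paper's, and the end correction is only a common approximation):

  # Standard Helmholtz resonator relation used to size the cavity/neck for a
  # target frequency; the end-correction factor is approximate and varies with
  # the neck geometry.
  import math

  SPEED_OF_SOUND = 343.0  # m/s at ~20 °C

  def resonance_hz(neck_radius_m: float, neck_length_m: float, cavity_volume_m3: float) -> float:
      neck_area = math.pi * neck_radius_m ** 2
      effective_length = neck_length_m + 1.7 * neck_radius_m   # approximate end correction
      return (SPEED_OF_SOUND / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume_m3 * effective_length))

  # e.g. a 2 mm radius, ~21 mm (folded) neck on a 100 ml cavity lands near 125 Hz
  print(round(resonance_hz(0.002, 0.0206, 100e-6), 1))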
Today I tested my 3D model, which roughly resembles the model in the paper (1mm roof & floor as opposed to 0.2mm, because of printing difficulties), by using a DIY'd impedance tube and publicly available software [2]. The print was meant to be tuned at 125Hz, but results showed 131Hz and an absorption factor of ~0.42 (the lower number, as opposed to 1.0, may be due to inexperience with all of this, or to an imperfect test setup).
My impedance tube is made from 96mm (inner) diameter PVC tube, a Visaton KT 100 V 4 Ohms speaker, an amplifier, Motu M2 audio interface, 2 Behringer ECM8000 measurement mics and some 3D printed adapters (to hold the speaker and sample).
Nothing to concretely publicise or share so far, but am thoroughly enjoying the process of digging into a field (acoustics) completely new to me, solely out of necessity and/or frustration in the workplace.
Should anyone be interested, I will share my project with HN once it has progressed to where I have something written up or worth sharing.
[0] http://dx.doi.org/10.1063/1.4941338
Unlike its competitors, it uses proven research and techniques to measure the issues, as well as the improvements.
https://groundme.app/what-is-ground-me
Test users and early adopters are very welcome
Programming for me has become a lot more fun because of Claude Code. I get to spend more time planning and researching.
I have been working on https://codient.dev to be able to run Claude Code agents in the background without setting up a local IDE!
Animal colony management is largely managed in Excel sheets, with no integrations to related systems or hardware sensing. We're working on the spreadsheet problem first, so that biomedical researchers can share information about their colonies with other researchers at their institutions, and explore the lines that other labs have. This opens up collaboration options and makes it much easier for the research community to find out what mouse lines other labs have (and may borrow for their own experiments).
Currently in closed beta at Harvard
It's counterintuitive, but if even a small portion of animal rights money went into tech solutions like this, it could have an orders-of-magnitude greater impact on welfare.
The tech community is doing just fine in terms of money at the moment.
It's a fun project, because I have to do hardware for it, and that's outside of my current skill set.
Thanks to Claude, it works way better than I should've managed solo: https://www.procuratorai.com
Free signup to test: https://my.procuratorai.com/login (no help/intro yet, and I'm paying for tokens so not advertising it widely...)
Homepage is basically a one-shot Claude build using Nunjucks on Netlify (first time with both).
(Subscribe button is broken - still working on that...)
Just confirm your email address and you are good to go. Any feedback super welcome (marco / irg@procuratorai.com). Irg will also gladly give you an onboarding session, if needed. Just get in touch.
Here is what the bot said
This helps me turn inbox noise into useful briefings for busy clients. Cool use of AI—curious what others think.
It started off because I wanted to see if I could print QR codes on a piece of paper and use these to detect people crossing a lap or finish line.
That proved more difficult than I thought, the QR codes were not easy to scan from far away and while moving. It is still in alpha stage.
What does work is a simple manual mode for people to use at their races.
Can be described as Astroneer-like setting, Teardown voxel physics, in a Valheim-like online multiplayer survival game.
Game isn't really announced yet but I've shown some videos of the tech: https://x.com/Alientrap/status/1909316208563732866 (On Youtube: https://www.youtube.com/watch?v=ZWISaUmvit4 ) https://x.com/Alientrap/status/1918024969939808654
Given the online/multiplayer aspect how difficult has the network portion been?
It’s like Super Mario Sunshine X Deep Rock Galactic / RedFaction / Minecraft.
The C/C++ library can be easily embedded in host applications or plugins. It even runs on embedded devices, such as the ESP32. In addition, the project contains a Pure Data external and SuperCollider extension. There is also a third-party Max/MSP external: https://github.com/ddgg-el/aoo-for-max
For more background information, check out this article: https://www.soundingfuture.com/en/article/aoo-low-latency-pe... https://aoo.iem.sh/
The project is still in beta stage, but I hope to make a final release this summer.
Tomorrow, I'll start a brand new project, also related to the real estate industry and society.
Now connecting all the dots and building a backend for mobile apps. It’s already live https://calljmp.com
Fully powered by Cloudflare, unbeatable pricing, rls and app attestation, raw SQLite queries, and tons more.
Looking for early feedback and adopters.
Intro video: https://www.youtube.com/watch?v=X_eKc6c5tDw
Also, I've been working on this for a while under a beta version called Self-Assembly, which was a bit more fashion-oriented. The new i-t-s-e rebranding is meant to be more Lego-like. Here's the old website: www.self-assembly.fi
Now, what about footwear, I'm thinking. Stitched soles + uppers are so much more durable. If you could cut a sole to a person's foot size, then they could construct the upper to their best-fit, best-colour-combo design.
Your stuff takes it to a whole other level. It makes me imagine a constructible footwear that can morph from a flip-flop (band across foot) all the way up to an all-weather knee-length boot.
No doubt you have already imagined how experimentation with materials and sealing/binding techniques could yield a design system that everyone can make their own; from the multi-spectral La Sape, to baby wear for those fast-growth years, to field equipment for the extreme adventurer.
(edit: fixed broken sentence)
I haven't quite figured out how to apply this thinking to seam sealing though. The current garments that have taped / waterproof seams are locked into the taped configuration. For tents I have some geometrically water repellent structures (that guide the water away as long as it's coming from above), but for the soles of footwear, you might want something 100% sealed.
what do you need to scale this?
Why? SAP holds the most important data for companies that use it, but it's notoriously difficult to replicate this data consistently into a data analytics platform (think Snowflake, Redshift, etc...).
A couple of companies specialize in SAP replication, but it's hard to validate the correctness of the replicated data, because:
- the SAP data is changing continuously and rapidly
- there are hundreds of tables and TBs of data
Usually it's the consumers of data downstream who notice that the data just "doesn't feel right".
Tracelake adds a validation layer on top of the SAP to X replication, which periodically compares the data between source and target and informs you about any missing / incorrect data, so you can tackle data quality issues proactively.
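For illustration only, here's a minimal sketch of the kind of periodic source/target comparison described above, assuming hypothetical connector functions (`querySource`, `queryTarget`) that return per-partition row counts and checksums; the real product's approach may well differ:

```typescript
// Minimal sketch of a periodic source/target check. `querySource` and
// `queryTarget` are placeholders for whatever connectors are in use
// (e.g. an SAP extractor on one side and Snowflake/Redshift on the other).

type PartitionStats = { partition: string; rowCount: number; checksum: string };

async function comparePartitions(
  querySource: () => Promise<PartitionStats[]>,
  queryTarget: () => Promise<PartitionStats[]>,
): Promise<string[]> {
  const source = await querySource();
  const target = new Map((await queryTarget()).map((p) => [p.partition, p]));
  const issues: string[] = [];

  for (const s of source) {
    const t = target.get(s.partition);
    if (!t) {
      issues.push(`partition ${s.partition} missing in target`);
    } else if (t.rowCount !== s.rowCount) {
      issues.push(`partition ${s.partition}: ${s.rowCount} rows in source, ${t.rowCount} in target`);
    } else if (t.checksum !== s.checksum) {
      issues.push(`partition ${s.partition}: row counts match but checksums differ`);
    }
  }
  return issues; // feed these into alerting so data quality issues surface proactively
}
```

Partition-level checksums keep the comparison cheap enough to run continuously even over hundreds of tables and TBs of data, which is the point of catching drift before downstream consumers notice it.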
I'm starting with React Native to see if anyone actually ends up using it, and will go from there
[1] https://newspeaklanguage.org [2] https://multisynq.io https://chalculator.com/primordialsoup.html?snapshot=Amplefo...
[0] https://blog.walledgarden.ai/2025-05-27/wabbit-s2-mcp-openai...
The guiding principles are to create a fun, positive, safe space for kids and families to socialize and interact as well as empower kids to explore and understand technology as a creative tool and not just as something to consume content.
I suspect the lack of privacy is because the target audience is “kids” not “teens”. When my kids first discovered group chat in iMessage with their cousins it was fun for literally 30 minutes before it was tears and abuse - which was a really instructive lesson for me.
At that (primary school) age parents would almost universally know the parents of your kid’s out-of-school playmates - if only because someone tends to have duty of care at any time and who is where with whom needs to be figured out.
The feature set seems sound and frankly welcome and overdue to me!
So for now, the social dynamic in the app is for parents to connect first. Once connected, their kids can choose to connect (facebook messenger kids uses this same process I think).
When I talk to less tech-savvy parents in my community, I think many feel quite helpless and unsure how to navigate a lot of this. Consuming youtube kids videos on an iPad is one option, or outlawing screentime entirely is another. Kids want real stuff that they are in control of. I want to build age-appropriate versions of this kind of stuff... with the appropriate guards and oversight in place, keeping parents in the driver's seat.
We live far away from family, and the idea of having a way for her to communicate with cousins and grandparents became the focus. As well as other kids in town. So I thought about a social version of the experiments I'd been playing with.
I'm inspired by Seymour Papert's thinking, about kids using technology to learn math and logic... living in "mathland" so to speak. But I'm also thinking about positive alternatives to the default social network interactions that are available for kids and families now.
Long term I would love to build a platform that lets kids explore technology and build collaborative spaces.
Keeping parents in the loop of what is going on is important, but balancing that correctly can be challenging. I don't want a "big mother is watching" kind of app, but I think it's appropriate for parents to know what their kids are doing and looking at and talking to, especially at primary school age (my daughter is currently 8). What is needed and appropriate always changes.
Dedicated and built for AI agents first; humans are welcome, too
Since WGU just started doing masters degrees in CS, I decided it would be a way to kill large amounts of time while getting at least a little out of it, so I registered for it.
I've been a professional software person for like fourteen years, so I was able to knock it out extremely quickly due to WGU's competency based stuff, so now I finally am able to put "MS" after my name.
I'm actually planning on doing a second masters from a slightly more prestigious university with a more theory-heavy degree [1], but it's nice to at least have an official graduate degree now. Hopefully it helps me find work a bit quicker, and if nothing else it's just kind of fun to pile up degrees.
- 100% remote
- 100% self-paced
- fairly cheap
and it looks like Open University is the best option right now? Did you find any better option?
University of Texas has one that looked pretty ok, but it was kind of expensive for a non-Texas resident.
University of Western Florida has one for “Mathematical Sciences”, which more or less fits, and it’s not even that expensive, but I think that one is synchronous.
That said, if you feel like you're organized enough to pull it off, I do recommend looking into University of York. It's a very good school.
Why did you complete WGU masters in computer science after already having a PhD in computer science?
I wanted a graduate degree in CS, and I figured I could get the WGU one quickly.
Feel free to email me (address in profile) if you want to talk about it.
WGU charges "per term", and you can take as many courses as you'd like per term, as long as you can complete them.
I did have a bit of fun learning about some of the more "platform as a service" parts of AWS, which is something I've been putting off learning more about for a while, but overall I don't feel like I learned a ton.
I registered for another masters degree from a different online school to start in October that I think I'll enjoy and learn more from.
And I've been vibe coding some maths educational tools and games for (my) 6yo: https://rupertlinacre.com/
We saw an increase in demand from individuals wanting to build their own Hacker News-, Product Hunt-, or Reddit-like community.
So, we built a SaaS platform for them, where they can launch their own community with their custom domain in just a few seconds.
Demo community - https://kocial.co
Get your own community here - https://kocial.net
This is a wheel I see people reinventing all the time, often for use in SaaS applications. The implementations are often underwhelming: function support is limited, documentation is sparse to non-existent and errors are typically only communicated at runtime -- if at all. Formula editors usually lack autocomplete, making them frustrating to use.
I've spent years solving all these problems (with a statically-typed language), and I'd love for others to benefit from the work. I have extracted the formula engine from our app compiler, so the library is nearly complete. The runtime part (evaluating formulas) has been rewritten in TypeScript. Next, I'll build a service around it to validate, compile and evaluate formulas -- which should be fun.
I'm planning to do a Show HN once I have a preview up and running.
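Not the actual engine, but to make the "evaluate" half concrete, here's a toy TypeScript sketch of a recursive-descent formula evaluator over numbers, variables, and the four arithmetic operators; a real engine like the one described adds static typing, functions, useful error messages and autocomplete metadata on top of this:

```typescript
// Toy illustration of formula evaluation: numbers, + - * /, parentheses,
// and variables resolved from a context object.

type Ctx = Record<string, number>;

function evaluate(formula: string, ctx: Ctx): number {
  const tokens = formula.match(/\d+(\.\d+)?|[A-Za-z_]\w*|[()+\-*/]/g) ?? [];
  let pos = 0;
  const peek = () => tokens[pos];
  const next = () => tokens[pos++];

  function parseExpr(): number {          // expr := term (('+' | '-') term)*
    let value = parseTerm();
    while (peek() === "+" || peek() === "-") {
      value = next() === "+" ? value + parseTerm() : value - parseTerm();
    }
    return value;
  }

  function parseTerm(): number {          // term := factor (('*' | '/') factor)*
    let value = parseFactor();
    while (peek() === "*" || peek() === "/") {
      value = next() === "*" ? value * parseFactor() : value / parseFactor();
    }
    return value;
  }

  function parseFactor(): number {        // factor := number | variable | '(' expr ')'
    const tok = next();
    if (tok === undefined) throw new Error("unexpected end of formula");
    if (tok === "(") {
      const value = parseExpr();
      if (next() !== ")") throw new Error("expected ')'");
      return value;
    }
    if (/^\d/.test(tok)) return parseFloat(tok);
    if (tok in ctx) return ctx[tok];
    throw new Error(`unknown variable: ${tok}`);
  }

  const result = parseExpr();
  if (pos !== tokens.length) throw new Error(`unexpected token: ${peek()}`);
  return result;
}

// evaluate("price * qty * (1 - discount)", { price: 10, qty: 3, discount: 0.2 }) === 24
```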
So far I've cataloged about 1500 advertisements out of the ~100,000 in my possession. And even that ~100,000 is probably only 0.1% of all the major material out there. It's going to take a long time! I'm going at a rate of about 10,000/year. I'm going to have to speed this up :) But I've gotten the process to catalog a full magazine down from a week to a few hours.
I'm thinking of ways to support the archive. I may sell original art based on the ads, or really nice copies of rare ads.
There is a much larger database of small ads that I am not publishing on the site, mostly because they add a lot of clutter. But to a researcher they may be valuable. Eventually I want to make the backend database available to people like your husband. Something like newspapers.com makes a lot of sense, thanks for the idea!
The Stanford Research into the Impact of Tobacco Advertising (SRITA; https://tobacco.stanford.edu/) collection currently contains 62,553 tobacco advertisements.
So I will release my new data grid component based on my own toolkit, and if people want tweaks or "add these features", I will demonstrate the toolkit to them.
The model is mathematically proven to converge to π, symbolizing natural harmony. So people can choose it not as a dream, but as a rational system for real well-being.
GitHub: https://github.com/contribution-protocol/contribution-protoc...
Nothing much to show other than one client, but I'm on the cusp of charging them monthly vs getting paid by the hour.
This is one of the most important performance features in a JS engine, as without shapes property lookups would be terribly slow. I'm looking forward to getting this working.
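To make the idea concrete (this is the general technique, not this particular engine's implementation): objects that acquire properties in the same order share a shape, so a property access becomes "check shape, read slot N" instead of a per-object hash lookup. A rough TypeScript sketch:

```typescript
// Rough sketch of shapes (hidden classes). Objects with the same property
// layout share a Shape; adding a property transitions to a child shape,
// and repeated layouts reuse the same transition.

class Shape {
  private transitions = new Map<string, Shape>();
  constructor(readonly slots: Map<string, number> = new Map()) {}

  addProperty(name: string): Shape {
    let next = this.transitions.get(name);
    if (!next) {
      next = new Shape(new Map(this.slots).set(name, this.slots.size));
      this.transitions.set(name, next);
    }
    return next;
  }
}

class JSObjectModel {
  private values: unknown[] = [];
  constructor(public shape: Shape) {}

  set(name: string, value: unknown): void {
    let slot = this.shape.slots.get(name);
    if (slot === undefined) {
      this.shape = this.shape.addProperty(name); // shape transition
      slot = this.shape.slots.get(name)!;
    }
    this.values[slot] = value;
  }

  get(name: string): unknown {
    const slot = this.shape.slots.get(name);
    return slot === undefined ? undefined : this.values[slot];
  }
}

// Two objects built with the same property order end up sharing a shape,
// which is what lets inline caches skip the lookup entirely.
const root = new Shape();
const a = new JSObjectModel(root);
a.set("x", 1); a.set("y", 2);
const b = new JSObjectModel(root);
b.set("x", 3); b.set("y", 4);
console.log(a.shape === b.shape); // true
```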
Easiest way to explain it is something like D&DBeyond but for indie games.
Link if anyone is interested: https://raze.cloud/
Getting together a very simple server (PHP), and very simple clients (UIKit and SwiftUI), and will publish a blog series on it (sort of like this[0]), once I get more used to it.
I need to really get comfortable with it before I do that, though.
It uses whisper.cpp under the hood and should be accelerated on most devices using the Vulkan backend
I added an issue (and comment) for this on the GitHub repo: https://github.com/cjpais/Handy/issues/47
https://karabiner-elements.pqrs.org/
There you can map (whatever key) to (whatever other key). E.g. I have right command mapped to F20, available to all other apps.
Not planning to do a lot during the promised Berlin summer though
Most people I know are using group texts for this, but I find that unsatisfying because my wife and I want to share stuff with ~20 people, but we don't want to be blasting all of them with texts all the time, or put those 20 people in a group text with each other. We wanted something pub/sub, but with the privacy of E2EE chat apps, and so easy to use our parents will use it.
It's a React app running on Cloudflare Workers, and there's an iOS app in the works using Capacitor; the E2EE is built on OPAQUE. There's a landing page/signup at freefollow.org if you'd like to learn more. I'm working on some demo videos.
https://github.com/search?q=repo%3AWebAssembly%2Fwasi-libc+s...
This is something I've been kicking tires on since my time at $BIGCORP; JSON without the bloat, Protobufs without the ceremony. I've drawn a lot of inspiration from MsgPack, CBOR, and Ion 1.1. Big emphasis on a tight set of core primitives, low-cost extensions, storing reused values/schemas, optional pre-negotiation, etc. That said, I've now been spending time trying to study the performance angle to make sure the design doesn't have a negative impact on encoding/decoding performance before committing to the implementation.
Regrettably nothing much to show (at least yet), but hopefully if nothing else it will become my go-to format for other personal projects that I work on.
After tinkering with game technology for years now, I'm pleased I've finally managed to use all that knowledge to create something (soon to be) releasable
https://store.steampowered.com/app/3407760/The_Night_of_the_...
With a strong rating weight system that can avoid (some) of the pitfalls of community ratings.
Right now videos must be added to be searchable, to comply with YouTube API rules. I'd hope that over time, with enough usage, the repository could contain many categories of highly curated content (e.g. documentaries) that someone could find without having to browse various communities and opinions to get lists.
edit: scratch that. wrong url. forgot i dont have www mapped: https://ytdb.io
- Implementing a Convolutional Neural Network in pure Python to learn how it works.
- An OpenAI Whisper-to-embedding-model pipeline to transcribe and summarize podcasts.
Only put the last three games of this past season so far, but I will probably add more each day (re-runs are still on until the new season starts in September).
https://www.virtualhospitalsafrica.org/
While medical records systems in much of the developed world are still shared via fax, we think there's an opportunity to leapfrog existing systems and have a cloud-based source of truth.
It's a tool to help teachers detect student assignments that have been written by AI. Unlike other solutions out there, it's an entire web-based text editor that analyses not just the final assignment, but all the keystrokes used during the writing process.
My theory is that analysing the final text only is a futile struggle - billions are being pumped into making LLM text look more human, so trying to make an assessment off final text alone is guess work at best.
I'm curious what folks think! Especially teachers, devs, and anyone navigating this space...
Not that your software is going to be useless. But as long as there is an incentive to cheat, new and better tools that facilitate cheating will crop up. Something else should change.
For both a keystroke based AI detector, and software designed to mimic human keystroke patterns, performance will be determined by the size of the dataset they have of genuine human keystroke patterns. The detector has an inherent leg-up in this, because it's constantly collecting more data through the use of the tool, whereas the mimic software doesn't have any built in loop to collect those inputs.
My first pass approximation is to make the assessment of whether the essay is AI generated or not accessible only to teachers. I may need to also rate-limit the checks, so people can't brute force it to gather data on what passes.
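Purely illustrative (not the product's actual model): the kind of per-session timing features a keystroke-based detector might derive from editor events, where unnaturally regular gaps or huge single insertions (pastes) stand out:

```typescript
// Illustrative feature extraction over editor events: inter-key timing
// statistics plus how much text appears per event (paste bursts stand out).

interface KeyEvent { timestampMs: number; insertedChars: number }

function sessionFeatures(events: KeyEvent[]) {
  const gaps: number[] = [];
  for (let i = 1; i < events.length; i++) {
    gaps.push(events[i].timestampMs - events[i - 1].timestampMs);
  }
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / Math.max(xs.length, 1);
  const meanGap = mean(gaps);
  const variance = mean(gaps.map((g) => (g - meanGap) ** 2));

  return {
    meanGapMs: meanGap,                                                // humans are rarely perfectly regular
    gapStdDevMs: Math.sqrt(variance),
    largestInsert: Math.max(0, ...events.map((e) => e.insertedChars)), // big = likely paste
    charsPerEvent: mean(events.map((e) => e.insertedChars)),
  };
}
```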
I'm focusing on Chinese (Mandarin) right now, because that's what I've been learning, and the language learning community on reddit likes it too. But other languages are also available.
Link: https://lingolingo.app
Right now it works on both CPU and GPU (both AMD and NVIDIA) and is capable of running LLMs like Qwen, I'm currently implementing a native profiler to trace CPU and GPU kernels and then I'll work on speed. Goal is to be competitive with PyTorch eager by the end of the year.
Source code: https://github.com/nirw4nna/dsc
My original HN post: https://news.ycombinator.com/item?id=44310678
It's never easy for me to compile this monster
Static-PIE binary with minimum options is a whopping 15M
It just keeps growing
This is probably the third (or fourth) incarnation of the app and I like writing web apps with Hotwire more and more. Especially leaning more on <turbo-stream> features removes frontend complexity compared to previous versions. Highly recommended, both Hotwire and rewriting your apps!
With Claude Code I really like being able to multi-task but right now it's a bit like a Tesla on autopilot needing your hands still on the wheel. With TalkiTo I can do housework/go out for lunch and keep it on the right track remotely.
It's an environment for open-ended learning with LLMs. Something like a personalized, generative Wikipedia. Has generated courses, documents, exams, flashcards, maps and more!
Each document links to more documents, which are all stored in a graph you grow over time (very Obsidian-esque).
App is alpha quality but working: https://github.com/davidventura/firefox-translator
Trying to figure out F-Droid publishing..
It's been tough to find work, so I figured I'd revisit an old SaaS idea. I worked for a home inspector in the past and saw a need for better (cheaper/faster/easier) report software.
Even if the business side flops, I'd still be satisfied with the experience. I've learned a lot about new tools like WASM and web components. I also like the UX challenge of designing for inspectors filling out reports on their phones.
We focus on people with relationship issues, and so far it's been deeply fulfilling. So many people have written in about how this has helped them heal.
Launched with React Native about 8 weeks ago, and continuing to grow fast. This is a niche space with lots of potential over the next few years I think.
Just submitted an update to help people compare their unique relationship needs to others which is so cool.
Using AI for auto subtitles and actor matching. Will build some auto-deploy of fragments to social media as well. I think these short fragments will do well on TikTok.
Git hosting for async teams that supports versioned patches and patch stacks instead of pull requests. All done using the standard git SSH protocol, so no git-send-email needed.
Overall, it is ending up being the most amusing thing I've been working on.
edit: Very much for personal use. I currently have no intention of sharing it anywhere.
This month I'm improving CI/CD for e2e testing across Windows, macOS, Linux and Android. Also adding support for unlocking password-protected PDFs and Word docs and improving OCR. OCR runs in the background and leverages native OS OCR where available and a pure LSTM Rust implementation elsewhere. Generally improving the word processor and looking for speedups. Adding a cross-platform spellchecker leveraging native where possible, too.
Play with it online: https://tritium.legal/preview
Download for free: https://tritium.legal/download
I'm not the person you responded to but there are various diff programmes out there that lawyers use. I think they tend to be proprietary formats though. Lawyers pretty much all work in MS Word so any comparison software needs to work with that.
If you type some text into one of the documents in the web preview, click the triangle and click the name of that document, you'll get a redline. That's the current industry-standard diff format. Redlines don't standardize around any kind of metadata format, though, so parsing them (unlike a diff) is non-trivial. There's an opportunity for improvement.
As mentioned elsewhere, transactional lawyers are corporate lawyers who work on deals (think drafting corporate documents, M&A or IPOs) as opposed to litigators who go to court and argue cases.
I built a simple "gpt wrapper" focused on legal - in the process of fine-tuning an llm, I've noticed that Gemini / Google scraped the hell out of a certain legal forum (phpBB board) in my country. After that I've started focusing on court case entries (since there's a public website for that) and thinking a bit about what a diff in a court case would ideally look like, and it's an interesting problem.
Your product reminds me a bit of quantus.finance (also here on HN) even though the space is not really related, but it caters to a business area in an interesting way. What are you planning on doing next (on a high level)?
But this is really cool. It's definitely a problem they have.
Sometimes you need to write a whitepaper, some marketing materials or something that a general product like Word is more suited for. Tritium is however aiming to replace Word as the transactional lawyer's go-to desktop application for the most common workflow.
Full fleet/driver management platform for private transportation companies (busses, limos, white glove taxis, etc)
We just released our first B2C model, check it out at https://gouach.com :)
See https://github.com/chazapp/o11y.
These last few days I have decided to try getting the Kubernetes Gateway API to work, using the Istio implementation. I have written an `auth` microservice which provides JWTs and publishes a public JWKS endpoint, and intend to have the API gateway validate tokens and claims to allow access to other services. The plan is to write API services without any knowledge of the authentication systems that sit upstream: if a request reaches them, it has already been validated!
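For anyone curious, a minimal sketch of the auth-service side using the `jose` library: mint short-lived RS256 JWTs and publish the matching public key as a JWKS document for the gateway to validate against. The key id, issuer and audience values here are made up:

```typescript
// Sketch of an auth service minting RS256 JWTs and exposing a JWKS document.
// The gateway validates signature, issuer, audience and expiry, so downstream
// services never see an unauthenticated request.

import { generateKeyPair, exportJWK, SignJWT } from "jose";

async function main() {
  const { publicKey, privateKey } = await generateKeyPair("RS256");

  // Serve this JSON at something like /.well-known/jwks.json
  const jwks = {
    keys: [{ ...(await exportJWK(publicKey)), kid: "auth-key-1", alg: "RS256", use: "sig" }],
  };
  console.log(JSON.stringify(jwks));

  // Short-lived token handed to clients.
  const token = await new SignJWT({ scope: "orders:read" })
    .setProtectedHeader({ alg: "RS256", kid: "auth-key-1" })
    .setIssuer("https://auth.example.internal")
    .setAudience("api-gateway")
    .setExpirationTime("15m")
    .setIssuedAt()
    .sign(privateKey);
  console.log(token);
}

main();
```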
- an AI Web Agent that autonomously completes tasks, creates datasets from web research, and integrates any APIs/MCPs – with just prompting and in your browser!
https://chromewebstore.google.com/detail/mkmcjgbighammmaoohb...
[1]: We work at clioapp.ai, with a paragraph more detail under Products
Actually I have finished the CPU fetch-decode-execute cycle, so I'm implementing the CPU instructions and looking forward to implementing a basic version of the MBC and getting it cycle-accurate.
Beyond that, I've spent most of the weekend working on some "test harness" code for doing AI research. You all may have seen me mention XMPP a few times over the last year or so and if so, you may have rightly wondered "What does XMPP have to do with anything?"
Good question. The short answer is "nothing, in and of itself."
That is, there's nothing in particular about XMPP that has anything to do with AI. I'm just using XMPP as a convenient interface to interact with my AI experiments. The thing is, most of this code was written in very much an "exploratory programming" style (eg, "vibe coding before vibe coding was a thing and done without an LLM"). As such, the architecture and structure of the code is kinda crap and it's hard to extend, reuse, modify, etc. There's too much "XMPP stuff" tightly coupled to my "Blackboard" system[1] and nothing was written to use dependency injection and so on.
Soooooo... I've spent a bunch of time over the weekend re-working that stuff to make my test harness much more useful. Now, all the "XMPP stuff" is contained to a single deployable unit, and the Blackboard stuff is likewise properly designed to allow making all the components Spring-managed beans and wired together in a Spring Boot application. And that in turn exposes its interface as a simple REST API. One thing I'm debating now is if I want to try and coerce this into fitting the OpenAI API model, and then adopt the OpenAI API for my backends[2]. Still debating with myself on that point.
Anyway, with this stuff done, it will be easy to switch out the AI backend components, run parallel tests, and do other nifty things. One thing I'll probably do is integrate Apache Camel into the XMPP receiver component to support complex message routing logic where desired.
I also finally created a Dockerized build for all of this stuff and a docker compose file, so now I can just run "docker compose up" and have a running system in a few seconds. And since everything is built as a Docker image now, if I want to move this to K8S or something in the future that becomes less of a slog.
All in all, I have gotten quite a bit done the last couple of days. I attribute a lot of this to the success of my eye procedure on Thursday. Now that I can see again, and am not experiencing near constant severe levels of eye strain and fatigue, it's a LOT easier to get stuff done!
[1]: https://en.wikipedia.org/wiki/Blackboard_system
[2]: As an aside, I say "coerce" because what I'm doing is not fundamentally based on LLM's or GenAI in general. Most of this work is either purely symbolic AI, or neuro-symbolic hybrid stuff at present. That said, I do allow for the possibility of using an LLM in places, especially for the "language" part. That is, if my system does a bunch of computation and creates an "answer" as a bunch of RDF triples or something, I can then take that and feed it to an LLM and say "translate this into conventional English prose" or whatever. I'm not an absolutist about any particular approach.
Basically a productivity tool for making sense of reality and living your best life.
I love making something truly valuable and it's also a crash course on AI product/app development. Absolutely having the time of my life and am so grateful to be on this path!
Flutter with iOS and Android this summer. Desktop and Apple Watch soon.
A website down-time detector because I think I can make money off it and learn a few things so I can later launch a grown-up SaaS.
A replacement for MS Notepad but with Markdown support. (I know Notepad sort of added this but it isn't great.) It's the tool I want to have when I say, "I like the way I can edit things in Notion and Obsidian but 95% of the functionality of those apps feel like bloat for my use case".
Besides the simple "get token and send to a thing that uses it to authorize a request" there's a couple of things we've built/are building on top:
service-chains: for a given resource, you can configure the token so that it needs to be signed by notable components along the path of the request, and at each step along the path check that it was signed by the expected components up to that point. The thinking is this could really cut down on lateral movement in a system.
multi-party authorization: for a given resource, you can configure N authorization services that also need to sign the token based on their policy. The token only authorizes if all parties have signed it. This could be useful for managing capabilities of software deployed into customer environments, or perhaps for B2B agents to get signoff from both Bs before doing an action.
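One way to model the chained-signing idea behind both features (a hedged sketch, not necessarily how the real tokens are encoded): each hop or authorizing party appends a signature over the payload plus all prior signatures, and the resource verifies the full chain in order:

```typescript
// Sketch of a signature chain: each signer covers the payload plus everything
// signed before it, so the resource can check both who signed and in what order.

import { createHmac, timingSafeEqual } from "node:crypto";

interface ChainedToken {
  payload: string;                                   // e.g. a serialized claims object
  signatures: { signer: string; sig: string }[];
}

const sign = (key: string, data: string) =>
  createHmac("sha256", key).update(data).digest("hex");

// Called by each component along the request path (or each authorizing party).
function addSignature(token: ChainedToken, signer: string, key: string): ChainedToken {
  const data = token.payload + token.signatures.map((s) => s.sig).join("");
  return { ...token, signatures: [...token.signatures, { signer, sig: sign(key, data) }] };
}

// The resource requires signatures from exactly these parties, in this order.
function verifyChain(token: ChainedToken, required: { signer: string; key: string }[]): boolean {
  if (token.signatures.length !== required.length) return false;
  const seen: string[] = [];
  return required.every((r, i) => {
    const entry = token.signatures[i];
    const expected = sign(r.key, token.payload + seen.join(""));
    seen.push(entry.sig);
    return (
      entry.signer === r.signer &&
      entry.sig.length === expected.length &&
      timingSafeEqual(Buffer.from(entry.sig), Buffer.from(expected))
    );
  });
}
```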
The core of the project is a “Spilled Coffee Principle,” which basically says that if I spill coffee on my laptop, I should be back up in an afternoon. Every configuration change is codified into scripts, not a one‑off terminal command. Setup scripts create directories, handle symlinks, document dependencies and generally remove the “Brent the bottleneck hero” problem.
Beyond that, the repo lives inside a P.P.V system (Pillars, Pipelines, Vaults) where dotfiles are one of the pillars. This structure separates foundational configs from automation pipelines and secure vaults. It forces me to think at the system level: how do all of my tools fit together, where do secrets live, and how can I onboard a new machine (or person) with a single `git clone && ./setup.sh`?
What’s really interesting is the mindset shift this has caused. I’ve been experimenting with what I call the OSE (“Outside and Slightly Elevated”) principle: moving from micro‑level, line‑by‑line coding to a macro‑level role where you orchestrate AI agents. At the micro level you’re navigating files in an editor and debugging sequentially; at the macro level you’re using tmux + git worktrees + AI coding assistants to run multiple tasks in parallel. Instead of `1 developer × 1 task = linear productivity`, you get `1 developer × N tasks × parallel execution`, which has obvious 100×–1000× potential. This OSE approach forces me to design workflows, delegate implementation to agents, and focus on the “why” and “what” instead of the “how”.
The result is that my dotfiles aren’t just about aliases anymore; they’re a platform that bootstraps AI‑assisted development, enforces good practices, and keeps me thinking about the bigger picture rather than getting lost tweaking my prompt or editor colours. I’d love to hear how others are approaching the macro vs. micro balance in their own setups.
To that end, each tool has its own subdirectory in my dotfiles repo ( https://github.com/bbkane/dotfiles/ ), and I add READMEs to each subdirectory explaining what dependencies are necessary for this tool, what keyboard shortcuts this tool uses, etc.
This approach has been pretty resilient against my changing needs, changing operating systems, and changing tool versions; even if it doesn't optimize for a single invocation of ./setup.sh
I am building a Pinterest clone that filters out AI generated imagery[1]. It is built on top of Bluesky so gets the benefit of its large library of well alt-text'd images, which aids with search.
Also working on a new kind of social media, where every user is a verified human[2]. The idea is to avoid the problems that sock puppet accounts controlled by the rich and powerful can have on our society. Again, I am starting with Bluesky as a target demographic and have already had some adoption.
We work with a single restaurant each month to create a 10-20+ course all inclusive price fixe menu. The food is served family style and is authentic to the region we are hosting. We typically host the dinners on a Tues or Wed when the restaurants in our region aren’t too busy and could use the extra business.
Here’s the 2024 update (I haven’t run the year to date cumulative numbers yet):
* Grew to over 900 members
* Hosting 2 seatings per month
* Served 1,300 guests
* Generated $140k revenue
I have a couple of family members and friends who are looking to buy businesses (separately), and it's been much more time-consuming than you'd expect just to browse through listings to determine if they're relevant to you or not.
The platforms seem to mostly follow the same format as real estate listings (as the brokers seemingly rely on the same software/data formats), with one big blob of freeform text that contains the various information that you'd ideally just be reading at a glance.
Add to that the fact that there are over 15 "business for sale" platforms in Australia with a minimum of 1,000 listings each, plus at least 10 platforms with between 100 and 1,000 listings, and you can easily burn hours looking through them individually.
I'm currently covering 12 of the top 15 (ranked by number of listings they contain) platforms and I just tinker away once or twice a month, adding support for new platforms.
I should probably release it and get some feedback at some point, but I suffer a bit from "it needs more polish before I let people other than my family and friends use it"
There's just shy of 90,000 unique listings I'm tracking (i.e. after de-duplication) on these platforms.
On the traditional classifieds sites and things like Facebook groups focusing on these, there's a significantly smaller number of listings/ads for business sales (e.g. a couple of thousand).
I think where there are definitely hidden gems is where there are many small business owners at or close to retirement age where they haven't planned for a sale at all. For example, a family member nearing retirement age has a small business they're just intending to shut down because they "couldn't be bothered" selling it. I've heard people have had reasonable success just approaching local businesses like this that have older owners OR asking accountants if they have any clients that are thinking of selling.
Works like this in US too. Commercial brokers rely on their network and not listing things on a market. Even most commercial real estate property for sale in the US is unlisted. It’s a weird industry, there are listings site but they only reflect a minor percentage of what you’d find if you drive around looking at for sale signs.
This can be a reasonable place to go to look for distressed businesses/assets too and I've considered using them as a source with my aggregation/search engine, though they don't really have the same type of information as a business for sale listing so they fall somewhat outside of the main type of results that I otherwise display.
Other reasonable places I've seen too, though in incredibly low volumes (think 0-3 listings a month), are commercial auction houses/sites where they'll list a business for sale or the full assets of a business. The main issue is that they're so low volume that I'm not sure it's worth spending the time ingesting them this early on while there are still many other larger listing sources.
In my ex-employer's case, the sale was what's called a "pre-pack" sale. That means the sale was advertised and proposed before the administrators were appointed and the administration was noticed. So you would not have found out in time from the filings, only from ip-bid.com. I don't know if Australian law allows pre-packs.
https://github.com/jerpint/context-llemur
It’s a CLI/MCP context management tool that allows you to easily move your project context around to your favourite LLM clients/IDEs
Fun so far!
I've tried using the official GitHub Slack integration (https://github.com/integrations/slack) but found it limiting and unmaintained by GitHub. For example, at the companies I've worked at, we want to get notifications sent to a specific channel when there are deploys to the "production" environment on GitHub. The official integration doesn't let you filter events by environment, so it's all or nothing. Your Slack channel for production releases will be filled with staging and qa notifications.
I designed it so users can filter on essentially any field of any event - deployment environment, branch patterns, file paths, PR labels, commit authors, etc.
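As a hypothetical illustration of that filter shape (not the actual config format): each rule points at a field path in the webhook payload and an expected value, and a notification is routed only when every rule matches:

```typescript
// Hypothetical filter definition and matching logic for GitHub webhook events.

interface FilterRule { path: string; equals: string }           // exact match for simplicity
interface Filter { event: string; channel: string; rules: FilterRule[] }

const productionDeploys: Filter = {
  event: "deployment_status",
  channel: "#prod-releases",
  rules: [
    { path: "deployment.environment", equals: "production" },
    { path: "deployment_status.state", equals: "success" },
  ],
};

// Walk a dotted path like "deployment.environment" into the payload.
function getPath(payload: unknown, path: string): unknown {
  return path.split(".").reduce<any>((obj, key) => (obj == null ? undefined : obj[key]), payload);
}

function shouldNotify(filter: Filter, eventName: string, payload: unknown): boolean {
  return (
    eventName === filter.event &&
    filter.rules.every((rule) => String(getPath(payload, rule.path)) === rule.equals)
  );
}
```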
It's at chivesbot.com as a hosted service; however, signups are disabled right now as I'm working on some core features. Here are a couple of screenshots of the filter creation: https://imgur.com/a/pSiolWu
I'm looking for early beta users and feedback, so if this problem resonates with you, my email can be found in my profile.
It’s called Wednesday.
Check it out: https://wednes.day
Expected support for Lua 5.4 and LuaJIT. At first entirely in Lua, with the long-term goal of compiled Lua modules (merging Wax).
The goal is to make Lua the first choice for system scripting in POSIX systems for Lua users without thinking twice between Lua, Sh and other tools like Python, Ruby etc.
I have many system scripts in Lua but no easy way of reusing libraries. Also, I don't like having to think about creating LuaRocks packages or dealing with unstandardized ways to write code.
It's early days but it's fun
Matry - a keyboard driven tool for designing in the browser. It’s like a cross between Storybook, Webflow, and Vim.
RealTea - comment on Zillow/Redfin listings and share info that wouldn’t otherwise be publicly available.
Solarite.js: https://vorticode.github.io/solarite/
The core philosophy is: your notes should be yours forever, that also includes the software stack it's built on. Everything is stored locally in SQLite with standard Markdown, so no vendor lock-in or proprietary formats. The interface is very minimal without flashy colors or icons, so you can focus on your thoughts.
Key features: instant full-text search using BM25, flexible tag organization instead of rigid folders, rich Markdown support with formatting toolbar, and custom "Focus Modes" for different contexts. It's a PWA that works offline (read-only).
The tech stack prioritizes minimal dependencies - no NPM (self-hosted Preact instead of React), Golang for rich standard library, etc. The whole app can be run from a single binary, so no messy installation requirements. Docker is also available.
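The search idea in miniature: SQLite FTS5 with bm25() ranking, where lower scores are more relevant. The actual app is Go; this sketch uses better-sqlite3 in TypeScript purely for brevity, and the table/column names are made up:

```typescript
// Minimal FTS5 + bm25 example. bm25(notes) scores each match (lower = better),
// so ORDER BY rank ascending returns the most relevant notes first.

import Database from "better-sqlite3";

const db = new Database(":memory:");
db.exec(`CREATE VIRTUAL TABLE notes USING fts5(title, body)`);

const insert = db.prepare(`INSERT INTO notes (title, body) VALUES (?, ?)`);
insert.run("Meeting notes", "Discussed the markdown editor and focus modes");
insert.run("Reading list", "Articles about local-first software and SQLite");

const search = db.prepare(`
  SELECT title, bm25(notes) AS rank
  FROM notes
  WHERE notes MATCH ?
  ORDER BY rank
  LIMIT 10
`);
console.log(search.all("sqlite OR markdown"));
```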
I tried to design this from scratch, learning about typography, colors, spacing etc. It turned out better than I expected!
I've switched to this as my main notes app and I'm happy with it.
Landing Page: https://www.sheshbabu.com/zen/
Demo: https://zendemo.fly.dev
Vibe-coding for 6 months as a solo dev (on the side) and loving it.
The idea is to facilitate communications between the ship and the shore party, as well as to have alerts and some commands ("boat, turn deck light on") without reliance on telecommunications infrastructure.
Down the line communication and telemetry sharing between different vessels is also potentially interesting.
Frustrated with running 10+ different checks on domains/websites I've built or working on with 10+ different services, I've built - with help from Claude Code - a Django app that tries to wrap all those key checks into one place. On top of that, I've built in scheduled monitoring and alerting.
It's been a great experience learning about the intricacies and nuances in different website set-ups, the complexities in avoiding false negatives, fun with CloudFlare workers, agentic coding and much more.
The site is still running off a RPi (Coolify) in my home-server behind a CF Tunnel at the moment, so won't link directly here - but ping me if you want to give it a test-run.
All the PostgreSQL data lives in your browser, and you have unlimited PostgreSQL servers that persist the data locally without installing anything.
New: a deep research mode that, on demand, crawls thousands of product pages and uses visual LLMs to read label photos (ingredients, counts, square footage) when the text is messy. First run takes ~60–90s, then it’s cached.
A good torture test: 20×25×1 MERV 13 home air filters—listings mix single/4/6/12-packs and vague claims (“3-month,” “allergen defense”), which wreck per-unit comparisons. I’d love feedback on misses (coupons/Subscribe & Save/region), categories to add, and to collaborate with a grocery-list app, budgeting tool, or anyone in the frugal/deals space. chris@popgot.com
Eventually products will overlap between search queries, so we can serve fast and low latency results that have been pre-processed by LLMs. That will be near zero cost. And of course LLM prices will continue to drop quickly.
We monetize via affiliate fees -- you buy something off that list, and we get 1-4% back at no cost to you.
If people are wearing AR glasses into big box stores that are comparing prices in real time, I could see there being a real time auction for CPG pricing the way there are for website ads now.
edit: also this doesn't seem correct:
Everything above will save you $57.65 on 33 fl oz
https://popgot.com/shampoo?attributes=scalp_concern%3Adandru...
If you hover the text it explains the logic (you can see that in this screenshot https://imgur.com/a/hO7fiWR). But to replay the logic here:
Equate 2-in-1 Dandruff Shampoo 28.2 oz is the Popgot choice at 21¢/fl oz (for 33 fl oz it costs $6.99).
But the most popular (e.g. most reviewed) product is "Nizoral 2-in-1 Anti-Dandruff Shampoo", and that costs a whopping $1.96/fl oz (for 33 fl oz it costs $64.63)
So yes, the most popular anti-dandruff shampoo (which I used to use, until I saw this shampoo list https://popgot.com/shampoo?attributes=scalp_concern%3Adandru...) is literally 20x more expensive, so you can do a lot better by picking alternatives at the top of that list.
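The per-unit math behind that comparison, in miniature (prices here are illustrative approximations of the numbers above; in practice the hard part is parsing pack sizes and fluid ounces out of messy listings):

```typescript
// Normalize every listing to cost per fl oz, then compare on equal footing.

interface Listing { name: string; priceUsd: number; flOz: number }

const listings: Listing[] = [
  { name: "Equate 2-in-1 Dandruff Shampoo", priceUsd: 5.97, flOz: 28.2 },      // prices illustrative
  { name: "Nizoral 2-in-1 Anti-Dandruff Shampoo", priceUsd: 13.72, flOz: 7 },
];

const perOz = (l: Listing) => l.priceUsd / l.flOz;
const cheapest = [...listings].sort((a, b) => perOz(a) - perOz(b))[0];

for (const l of listings) {
  console.log(
    `${l.name}: ${(perOz(l) * 100).toFixed(0)}¢/fl oz ` +
    `($${(perOz(l) * 33).toFixed(2)} for 33 fl oz)`,
  );
}
console.log(`Pick: ${cheapest.name}`);
```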
Not sure why you didn't see toilet paper, but it is right here: https://popgot.com/toilet-paper
Translate docx/pptx/xlsx etc while keeping document layout and formatting.
Most core functionality is finished, and it's ready to go. Still some work to go on docs, tutorials, and polish.
It records my voice, transcribes it locally using faster-whisper, and copies the transcription to my clipboard. Check the demo linked in the GitHub repo to see how it works.
I use it especially with Claude Code to provide detailed context for the outcome I want to achieve. I ramble for 5 minutes, and then paste the transcription to Claude Code, instead of having to type all my thoughts all the time.
The workflow is like this:
$ hns # start recording
<talk>
<Enter> # clipboard now holds the text
[1] https://github.com/primaprashant/hns
Basically, nattokinase is an enzyme made by natto (Japanese fermented soybeans). It's been shown clinically to help against blood clots. Unfortunately, the clinical dose is 5x the quantity in a serving of natto.
That’s too much natto to eat! So I’m working to genetically engineer a normal, typical natto strain to just over express that one enzyme, so 1 serving == 1 clinically relevant dose
My last project was genetically engineering yeast to produce grape aroma, then baking bread. It was great fun feeding it to people. I want to eventually throw GMO dinner parties in SF, but only with GMOs I've created with my own hands
I flew Rapid City -=> Minneapolis -=> Seoul -=> Ulaanbaatar -=> Kyzyl to get here; it was quite a harrowing journey, not only for the many difficult flights but also an hours-long interrogation by Russian security (though they were polite and professional - I'll tell that whole story another time).
I'm increasingly convinced of the connection between traditional music and a free internet. As some of you have followed, I have done a few deep-dives into the bluegrass roots of the early Bay Area cypherpunk scene. Because traditional music necessarily lives outside the complex of copyright and intellectual property, I believe it is a natural and necessary fuel of innovation of free ideas.
I can scarcely believe this is happening. T-minus one hour. See y'all in 10 days. Then I'll be online for a few hours, then I head to another similar retreat in Mongolia.
Waywo will help users quickly digest hundreds of project descriptions, explore similar projects, deduplicate projects across threads from previous posts, visualize a graph of all projects, and more! I'll be documenting my approach to building this with coding Agents like cursor and Gemini CLI
I'm building Waywo for the Redis Hackathon on DEV.to that is running from now until August 10! Follow me on DEV/GitHub/X (@briancaffey) to see how this project turns out!
https://youtu.be/_iGn_pZ3IkY?si=x4ijZdAP-suhuJ7Y
https://apps.apple.com/us/app/manger-animal-manager/id674269...
I came up with a suffix-sorting index for this domain that's interestingly simple. Most algos for this use a generalized suffix tree that's built by concatenating all the strings into one giant string and feeding it into a conventional suffix sort, but that has some big constants on the indexing throughput, due I think to the overhead of handling one giant string instead of a bunch of small independent records.
In the latter case, by making the structure slightly simpler and search slightly harder, I can get indexing throughput in the GBps, at least for the sorting part.
The output of that in its simplest form is a 4n or 8n-sized set of int's, but it can be fed further into a compressed rank/select data structure for various space/indexing time/retrieval time tradeoffs, and I don't think those are slow (eg Roaring Bitmaps)
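As a baseline illustration of the per-record idea (not the author's algorithm): build a small suffix array for each record instead of one giant concatenated string, then answer substring queries with binary search; the resulting int arrays are what could later be fed into a compressed rank/select structure:

```typescript
// Per-record suffix arrays with binary-search substring lookup.

function suffixArray(s: string): number[] {
  return Array.from({ length: s.length }, (_, i) => i)
    .sort((a, b) => (s.slice(a) < s.slice(b) ? -1 : 1));
}

// Returns true if `query` occurs anywhere in `record`.
function contains(record: string, sa: number[], query: string): boolean {
  let lo = 0, hi = sa.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const prefix = record.slice(sa[mid], sa[mid] + query.length);
    if (prefix === query) return true;
    if (prefix < query) lo = mid + 1; else hi = mid - 1;
  }
  return false;
}

const records = ["alpha centauri", "proxima centauri", "betelgeuse"];
const index = records.map((r) => ({ r, sa: suffixArray(r) }));
console.log(index.filter(({ r, sa }) => contains(r, sa, "centa")).map(({ r }) => r));
// -> ["alpha centauri", "proxima centauri"]
```

Keeping each record's index independent is what avoids the big constants of one giant concatenated string, at the cost of querying each record (or a higher-level index over records) separately.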
I'll post this on show HN if anybody's interested; I'm still writing up the details, as I've barely gotten the POC code working.
I'm sure the novelty will wear off soon, but it's been fun so far.
Reception so far has been positive. It's nice to be done with a long project, though this one will never exactly be done--I need to make sure there is a new puzzle for each day, though I have several months' worth prepared in advance.
I've started designing my next game, but it's probably a couple of years out. I just need something for that part of my brain to occasionally chew on.
- EV charging software
- finish writing book on tech topic
- finish writing book of short stories
- planning next upgrade of LatLearn, my Golang latency instrumentation library (along with a dev session screencast)
- planning next upgrade of Slartboz, my sci-fi post-apoc comedy adventure real-time Rogue-like game (along with a new demo screencast)
I get "time blind" when I'm fixated on something like work, programming, reading, research, etc. While it can be a good thing, it also means I forget to eat, don't take breaks, miss meetings, or just spend way too long doing one thing and end up wondering where the day went. Typical notifications don't seem to snap me out of it either.
The app creates a thin, always visible line at the bottom of my screen that shrinks inwards as time passes, at the end of the allotted time the screen will blur preventing me from doing whatever I was doing and snapping me out of my hyperfocus state. I can choose how long the timer runs for and how long the screen blurs for. Tonight I added a loop feature so I can use it like a pomodoro timer with enforced breaks.
It's a simple menu bar app for MacOS and could be better, but it does what I want it to do. I've been using it for the past week and found it really helpful.
I haven't used Swift before so it was a good learning experience too.
It's the same principle as a Time Timer (timetimer.com) which I used previously but I find my app works better as the screen blur actually prevents me from just continuing whatever I'm doing, and the bar is always in my line of sight.
https://news.ycombinator.com/item?id=38274782#38276107 (125+ subcomments circa 2023)
> your brain will try to sync with the light that you can barely see, calming you down and allowing you to go focus-mode with the task in ha[n]d
I want this to be a tool highly useful for people who have complex health issues, are working towards ambitious goals, or just want to regularly reflect on their day.
I'm building it since I couldn't find a satisfying solution anywhere. It's local-first and does not force you into a subscription or try to exploit you with any other dark patterns.
Built an AI-powered system that finds the highest-paying roles that match your resume and respect your salary requirements
This is my first time doing anything with frontend more complex than an image carousel, and I have occasionally felt that I'm in over my head with things like multithreading and audio playback, but it's immensely satisfying seeing the app come together.
I am extremely impressed by the Leptos framework [2], and I'm thrilled that I haven't had to write a single line of JS, even when doing DOM interactions or communicating with web workers.
Once I polish up the tracker frontend, I'd like to add a backend and potentially try to release it as a paid app.
It's pretty simple, JSON data that I manually fill out and display in a grid. Takes some inspiration from Letterboxd lists. Future plan is to run online and in-person exhibitions for smaller curation and to commission writers and other curators to provide further depth and insight into a list.
I have no plans to turn this into a profitable thing. It's a pure passion project which I hope will benefit researchers, academics, other curators, and the whole game community. It's a resource as much as it is a celebration.
And Flow – a terminal app that helps you track deep work without distractions. It runs locally, keeps things simple, and protects your attention instead of just counting time.
Made for developers who want calm, not noise.
GitHub:
We're trying to figure out how to narrow down our target audience and get to early revenue. Also, how we can grow the extension adoption.
[1] https://marketplace.visualstudio.com/items?itemName=autodba-...
I wanted interesting looking typefaces for my printmaking assignments when I was taking studio art classes on the side in university. Now that I've been laid off, I wanna polish it and see what other people create with it.
Lots of room to rewrite and improve it, but I have job applications and interviews to get through.
I was working on a routing application for San Francisco (+ Daly City) that lets you specify how willing you are to walk to certain bus routes, whereas most apps minimize walking and don't consider that if the wait for a bus or train is long, I don't mind walking to connect to another route that gets me to my destination faster. It takes tree shade, elevation, and marked-off locations to avoid into account.
It evolved into more of a tool for planning leisure walks and runs that could hit places I'd want to visit on a loose timeline--for days when I want to wander and then end up at a particular stop/station to get back home.
Talking about them here has more ideas churning in my head and reminds me to step outside of my little bubble to remember why I truly love coding. To make fun and convenient experiences.
I created a simple Hacker News Redesign extension to make my mobile browsing experience better (larger touch targets, prettier UI and texts). https://apps.apple.com/us/app/y-redesign-for-hacker-news/id6...
I made a widget that better shows my work shifts (I work nights and the default calendar app displays overnights as two days in the zoomed out monthly view, so this improves upon that and also counts how many shifts I've completed in a month while looking nice too). https://apps.apple.com/us/app/the-next-shift-widget/id674063...
I wrote a simple MacOS app that lets me drag and drop screenshots then choose between a variety of "device frames" to create a consistent style and speed up my workflow.
And now I'm working on some plug-ins for open source apps that I use. Generally just doing small things to improve my workflow and enjoyment with my hobbies.
It’s aimed at people who want to be less dependent on foreign platforms, especially with the current shift away from globalization.
Still early days: only about 20% of the planned categories are up so far.
So, I've been tinkering around with a library that can generate schemas for structured JSON outputs, according to a Typescript-like custom schema definition: https://github.com/nadeesha/structlm
So far, I've been seeing promising results with accuracy on par or better, while using 20-40% fewer tokens than JSON schemas.
I redesigned it to be much smaller and cheaper (surface-mount), made it an IoT device, and various other changes. Will order PCBs in a bit, hopefully it works well.
We don't have anything like Blitzortung in SE Asia as far as I know, and it would be pretty useful to me to detect lightning storms before they hit. The obvious application is to add it to my motorbike (driving a motorbike in a heavy storm is a necessary but miserable part of life here).
Bigger picture, there's no market for it, simply because it's cheaper to not buy one (I live in a very cost-driven market). However it would be useful to me personally.
VT Chat is a privacy-first AI chat application that keeps all conversations local while providing advanced research capabilities and access to 15+ AI models including Claude 4 Sonnet and Claude 4 Opus, O3, Gemini 2.5 Pro and DeepSeek R1.
Research features: Deep Research does multi-step research with source verification, and Pro Search integrates real-time web search with grounding, powered by Google Gemini. There's also document processing for PDFs, a "thinking mode" to see complete AI reasoning, and structured extraction to turn documents into JSON. AI-powered semantic routing automatically activates tools based on queries.
Built with Next.js 14, TypeScript, and Turborepo in a monorepo setup.
Some samples:
- https://veneer.leftium.com/v/s.1o5t26He2DzTweYeleXOGiDjlU4Jk...
- https://veneer.leftium.com/v/s.1pk4C9jFI02CnZaxo9obsD4oAmLla...
Vercel is ending support for Node v18. Instead of updating my old app, I decided to finish the rewrite of the better version. The old version currently powers this site: https://viviblues.com
Compare to the new version:
- https://viviblues.com/pretty/sheet?u=https://docs.google.com...
- https://viviblues.com/pretty/sheet?u=https://docs.google.com...
Think Zapier or n8n, but you either use existing processing nodes or upload your own code, written of course in any language that compiles to Wasm Components.
It's week 2 but it works and it's live at https://pipestack.dev
hi (at) pipestack.dev should work
I am working on a world model for computer systems. I am designing an experiment and benchmark for LLM agents to see if they possess an understanding of "Linux". A world model for computer systems will be a crucial next step for computer-use agents to reliably plan their actions over a long horizon.
Links to draft: https://open.substack.com/pub/disastermanagementtechnologies...
An iOS app that uses your AirPods' sensors to catch bad posture in real time.
How it works:
- Real-time tilt tracking – Your AirPods already have the tech
- Customizable alerts – Adjust sensitivity so it nudges you only when needed
- Prevent strain before it starts – Stop neck pain and headaches at the source
https://www.airposture.pro/ (TestFlighting)
I've actually started getting some back-and-forth feedback with a couple of users, which has kept me motivated and validated. But I need more organic traffic somehow. I've recently released a new use case (https://theretowhere.com/vacation) that might be better suited for vacationers, so let's see if that sticks.
Funny anecdote from today - I just set up Slack notifications so I get more instant knowledge of errors on the platform, and the first notification came in just a couple moments after I deployed. It was for an error that I thought no one would run into for a couple of days. Imagine my (bad) luck!
Might be nice to have an easy way to enter all subway stations in a city and create the heatmap based on that.
For my use-case the interface you created isn't the best. Now that I'm searching for a new home I'm interested in finding a place that has a bakery nearby, but it doesn't really matter what bakery. The same goes for restaurants, pubs, ... For this case there are too many places to add them "manually".
Thanks for creating this, I will definitely be using it in the coming months.
When using developer AI agents like Claude Code, often they output, and use, .md files like CLAUDE.md, README.md, etc. You largely want to just read these, and if Claude updates them, read the latest version.
Other markdown apps incorporate editing, split screens, etc. I just wanted a neatly formatted read only view. And if you want to edit them, just use something specifically designed for that like Sublime Text, my viewer will instantly load with the updated file.
Anyway, check it out: search for "ViewMD" in the Mac App Store.
I think it might be better to go the other way, and have a pattern-matchey form generate the defmethods instead, but I need to gain tacit knowledge about it first.
When I read social spaces like Reddit or X, if the government has done anything contentious you get nothing more than strident left takes, or strident right takes on the topic. Neither of which is informative or helpful.
So I am setting up a site which uses AI which is specifically guided to be neutral and non-partisan, to analyse the government actions from the source documents.
It then generates:
- a summary,
- expected effects,
- benefits,
- disadvantages, and
- a ranking of the action against 19 "things you care about" (e.g. defence, environment, civil liberties, religious protection, etc.)
The end result is quite compelling. For example here's the page that summarises all the actions which are most, and least, beneficial to individual liberties:
I sent feedback to ground.news the other day asking them to have a toggle to get rid of the left/rightometers on their articles.
So much of this nonsense is framed around some arbitrary understanding of left/right by Americans, which has basically no bearing on or interest to me. It's helpful to have a source of news that can identify coverage gaps, but I don't need everything helpfully added to some subjective seppo political bucket.
Even in your example you don't explain whether you are talking about positive or negative liberty, a relatively neutral framework for discussing liberty that pre-exists AI.
We have gone to a lot of trouble to try to engineer the prompt to make it clear to Gemini that it should take a "non-partisan and unbiased" view in all the analysis. This is an attempt to get away from any person's opinion, including ours.
Obviously, whether you think it's achieving that ideal is in the eye of the beholder :-) But it is certainly less biased than most mainstream media and social network echo chambers.
Interestingly I found Claude Code to be the only LLM good at designing frontend, asking it to make it look better actually helps
It’s all downloadable for free since I make a living off databases so I don’t need to make money off this. I did this to give some closure to ideas I started working on 30 years ago.
For example: "Paul Graham interview best founders" (surfaces moments where pg talks about founder qualities): https://www.youtube.com/watch?v=BXqk9QaV-ag
Goal is to build a social network that doesn't harm the user in any way and provides full control over their data.
Here's the waitlist: https://waitlist-tx.pages.dev
Let me know if you have any questions. Email is cornfield.labs@gmail.com
What plants you should grow if you want a "second harvest" of beautiful dried seedpods, to decorate your home.
Decorating ideas for the round concrete pillars that many new condominium units have nowadays.
"Juicy" text editor ideas - making the most gamified text editor. The absolute opposite of the zen editor at https://enso.sonnet.io/
Beam is perfect for sharing sensitive documents or transferring files when you can't use USB, email, or cloud storage.
Try it here: https://get-beam.vercel.app
Come decompile with us! https://github.com/doldecomp/melee
https://diabetes-diary-plus.com
https://apps.apple.com/ch/app/diabetes-tagebuch-plus/id16622...
Join the waitlist here: taiko.taikohub.com
The current app is being rebuilt as it sucks and the device is under active development.
We've gotten the following to work:
1. Emotion detection with barks
2. Cough detection (Kennel Cough specifically)
3. Identifying a dog from their bark
4. Video analysis of dog behavior
5. Identifying key parts in dog vocalizations (similar to phonemes in people)
6. Some basic intent detection (or what people call translation)
If anyone is good at μpython + TFLite and can help us transfer our models onto the device, that'd be awesome. Currently our setup is super hacky.
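For anyone who wants a concrete starting point, this is roughly what our host-side inference looks like with tflite_runtime (the model path, input shape, and label set below are placeholders, not our actual models); porting this flow to MicroPython on the device is exactly the part we need help with:

    # Host-side sketch: classify a bark window with a TFLite model.
    # Model path, input shape, and labels are placeholders.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    LABELS = ["happy", "alert", "anxious"]  # hypothetical emotion classes

    interpreter = Interpreter(model_path="bark_emotion.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    def classify(audio_window: np.ndarray) -> str:
        # audio_window: float32 features shaped to the model's expected input
        x = audio_window.reshape(inp["shape"]).astype(np.float32)
        interpreter.set_tensor(inp["index"], x)
        interpreter.invoke()
        scores = interpreter.get_tensor(out["index"])[0]
        return LABELS[int(np.argmax(scores))]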
The tool is written in Go and exposes a JavaScript interface (like k6) to generate manifests using both template-based and object-oriented approaches. It is similar to cdk8s, but is more flexible, doesn't require a dev environment, and allows sharing packages. The apply mechanism will be like kapp, but using kubectl's apply sets.
It already has the features to generate manifests to be used in GitOps. With this addition and the next one, which will be waiting for generated resources, it will become a fully featured package manager.
Working on an idle/incremental game based in an office environment.
There is a playground which uses a C compiler and WASM, and so is quite fast while running fully in the browser. There's also an (online) conversion tool to convert and compare source code. There are some benchmarks as well.
Writing my own (concise, simple) programming language has been a dream of mine since I was 20 or so. Feedback would be great!
You have the "Non-Features" section of course, but I'm looking more into what I'd be losing by going from C to Bau. Bau's price for safety.
> what I'd be losing by going from C to Bau
Well, you can add native C code, so in theory you don't miss much. But in practice, yes, of course many features are still missing.
I am implementing both specific test cases and automatic vuln hunting (i.e. fuzzing).
This platform is entirely self-funded and was created with passion, hard work, and faith in our goal. However, at this point, even modest assistance, such as paying for our internet, can have a significant impact on future advancement.
Find me on LinkedIn if this speaks to you or if you would like to work with me, grow together, or just have a conversation. Connecting would be wonderful.
One community at a time, we can work together to illuminate Africa's events landscape.
Some links :
> Poor man's bitemporal data system in SQLite and Clojure (evalapply.org)
> 13 points by PaulHoule 1 day ago
You can build a chat agent with some advanced features with NoCode, and beyond that with LowCode.
Based, in part, on my work on my open source project, FileKitty: https://github.com/banagale/FileKitty
Other efforts:
- Ways to rip, parse and fuse various content types into simple and well-indexed input documentation for use in LLM contexts
- Reverse engineering various AI web chat frontend stacks
- Generalized subagents and commands useful for Claude Code
- Onboarding SWEs using Claude Code
Next, I'm working on a TUI app (using Textual) for board games like Tic Tac Toe and Connect Four. These will also have a modified rule that requires forming a square instead of a line.
At the moment I'm building a C++ version of Tic Tac Toe; it would be cool to implement it in Python as well.
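Since the square rule is the interesting bit, here's a sketch of the win check in Python (a hypothetical helper, not lifted from my current C++ code): treat every pair of a player's pieces as one side of a candidate square and look for the two corners obtained by rotating that side 90 degrees, which also catches tilted squares.

    from itertools import combinations

    def has_square(cells: set[tuple[int, int]]) -> bool:
        """True if any four cells form a square (tilted squares included)."""
        for (ax, ay), (bx, by) in combinations(cells, 2):
            dx, dy = bx - ax, by - ay              # side vector
            # the two possible squares built on side a-b
            if (ax - dy, ay + dx) in cells and (bx - dy, by + dx) in cells:
                return True
            if (ax + dy, ay - dx) in cells and (bx + dy, by - dx) in cells:
                return True
        return False

    # axis-aligned 2x2 square
    assert has_square({(0, 0), (1, 0), (0, 1), (1, 1)})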
It's an electronic product database, where you can search through products and see all of the detailed specifications about each product. Has an API as well.
Currently integrating electronic news / reviews that will be linked to products.
Inspiration from Shufflepuck Café and PC-98 / Japanese bar-table card games.
Little project but fun :)
Most people use it for analysis and ops work, + data science.
I find myself using Sourcetable to run our company: query the DB, analyze the user data, make projections, write copy, help with technical SEO (search the web, scrape data, check status codes, clean my sitemap, run vector space analysis etc.), talk to apps, financial modeling for our operating model + forecasting, etc.
The main idea I'm thinking about is LLM related: we're all having a social experience with machines (!) while building the machines (!!). I'm not sure my brain fully grasps that it's talking to silicon while I work.
The union monetises this by selling privacy preserving aggregates (think ‘where is everyone in London right now’, or ‘where did people commute from?’), and acts on behalf of union members to stop data brokers selling their location data.
The question of who can de-identify or unmask the data is there, but I could see the capability being required for gov, military, and police, and then as a premium service to customers.
More or less my initial approach to this is you take a grid, and you show movements/density on that grid. If necessary you coarsen the grid to avoid reidentification of individuals, and ultimately to get a good picture of the population given the biased sample which is the union membership, you need a statistical model on top which also helps from a privacy perspective.
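As a rough sketch of that coarsening step (the cell sizes and k threshold below are hypothetical, and the real thing would sit behind the statistical model): count members per fine cell, re-bucket any cell below the minimum into a coarser cell, and drop whatever still falls short.

    from collections import Counter

    K_MIN = 10      # hypothetical minimum members per published cell
    FINE = 250      # fine cell size in metres (hypothetical)
    COARSE = 1000   # fallback cell size for sparse areas

    def cell(x: float, y: float, size: int) -> tuple[int, int, int]:
        return (int(x // size), int(y // size), size)

    def publishable_counts(points: list[tuple[float, float]]) -> Counter:
        fine = Counter(cell(x, y, FINE) for x, y in points)
        out = Counter()
        for (cx, cy, size), n in fine.items():
            if n >= K_MIN:
                out[(cx, cy, size)] += n
            else:
                # fold sparse fine cells into the coarser grid
                out[cell(cx * FINE, cy * FINE, COARSE)] += n
        # anything still below the threshold never gets published
        return Counter({c: n for c, n in out.items() if n >= K_MIN})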
State actors demanding individual location history is definitely an issue. I have a few possible approaches in mind to defend against that.
https://gather.buzz - an influencer tracking / CRM platform.
But I'm focused on building a good reading experience overall, one which helps you learn and understand the content more easily. Imagine macOS Preview integrated with LLMs. Currently, the web app uses the LLM APIs, but eventually it will add support for local LLMs as well.
I'm aware of other apps that have done something similar, but want to see this through for myself.
Given a database[1] and a set of DDL statements/migrations you want to check, pglockanalyze will open a transaction, execute the statements, read the pg_locks view to analyze the locks they acquire and rollback (or commit, depending on the flags you passed) the transaction. Then, it will output the results for each statement.
I think there's merit in this idea, that said it's very much an experiment so there could be flaws and/or corner cases that this strategy won't work well for.
It's meant to act as a complement, not a replacement, to things like static analysis and the official Postgres docs.
https://github.com/agis/pglockanalyze
[1] typically an ephemeral database spawned by your CI pipeline
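To make the mechanism concrete, the manual equivalent of what the tool automates looks roughly like this with psycopg2 (the connection string and DDL are placeholders):

    # Run the migration in a transaction, inspect pg_locks, then roll back.
    import psycopg2

    DDL = "ALTER TABLE users ADD COLUMN last_seen timestamptz;"  # placeholder

    conn = psycopg2.connect("dbname=ephemeral_ci_db")  # throwaway database
    try:
        with conn.cursor() as cur:
            cur.execute(DDL)
            cur.execute("""
                SELECT locktype, relation::regclass, mode, granted
                FROM pg_locks
                WHERE pid = pg_backend_pid()
            """)
            for row in cur.fetchall():
                print(row)
    finally:
        conn.rollback()   # never actually apply the statements
        conn.close()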
It's a motivated video introduction to elliptic curve arithmetic where, in 10 minutes, we write (in the C language) a legal Bitcoin wallet bruteforcer.
Video edit: https://leetarxiv.substack.com/p/a-programmers-introduction-... Bruteforce bitcoin puzzle wallet in C : https://leetarxiv.substack.com/p/hacking-dormant-bitcoin-wal...
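Not the video's C code, but to give a flavour of the arithmetic it builds on, here is affine point addition and double-and-add on secp256k1 in Python (the field prime and curve constants are the standard secp256k1 parameters):

    P = 2**256 - 2**32 - 977        # secp256k1 field prime
    A, B = 0, 7                     # curve: y^2 = x^3 + 7

    def ec_add(p1, p2):
        """Add two affine points; None is the point at infinity."""
        if p1 is None:
            return p2
        if p2 is None:
            return p1
        (x1, y1), (x2, y2) = p1, p2
        if x1 == x2 and (y1 + y2) % P == 0:
            return None                                      # p1 == -p2
        if p1 == p2:
            s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
        else:
            s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
        x3 = (s * s - x1 - x2) % P
        return (x3, (s * (x1 - x3) - y1) % P)

    def scalar_mult(k, point):
        """Double-and-add: compute k * point."""
        result = None
        while k:
            if k & 1:
                result = ec_add(result, point)
            point = ec_add(point, point)
            k >>= 1
        return result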
[1]: https://news.ycombinator.com/item?id=43595184 [2]: https://news.ycombinator.com/item?id=43600346
I have also read that thread previously and disagree with ads being the only way to go. I think if the game is good enough and, for three emojis in particular, people learn more about their target language, that they will happily pay for it. I know I would at least, and I personally don't think I'm so weird. Friends have surprised me by signing up as soon as I sent it to them, specifically on the language learning value proposition, so I think there's something there. But, if it was just a puzzle game for fun only, then I might agree that the model doesn't work. We will just have to see!
Bit of context: I have a background building authentication systems, and almost all the time it's built as just another feature even though it's THE FEATURE that gates all other features.
Original Show HN post: https://news.ycombinator.com/item?id=32551862
The goal was to make a simple way for recruiters to create coding tests to screen people applying for jobs, and to make the process quick and easy for applicants. The default is a 15 minute multiple choice test across a few different domains.
I will eventually monetise it (when I have more free time), but for now it's free (1,000 invites) so I can get feedback and improve it.
Initially I wanted to make this before my app, but it was quite a bit of work to process all the GTFS data. Now that the data is already processed for the app, the visualization was quite easy to make!
Currently trying to add geocoding so users can search for destinations for routing. It's been interesting, as I want to avoid Google Maps and other proprietary data sources, so OpenStreetMap it is.
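A rough sketch of the Nominatim-based geocoding I'm trying (the User-Agent below is a placeholder; the public instance asks for a descriptive one and at most one request per second, so self-hosting is likely the way to go):

    import requests

    def geocode(query: str):
        resp = requests.get(
            "https://nominatim.openstreetmap.org/search",
            params={"q": query, "format": "jsonv2", "limit": 1},
            headers={"User-Agent": "my-transit-app/0.1 (contact@example.com)"},
            timeout=10,
        )
        resp.raise_for_status()
        results = resp.json()
        if not results:
            return None
        return float(results[0]["lat"]), float(results[0]["lon"])

    print(geocode("Hauptbahnhof, Zürich"))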
Our flagship feature is Agentic RAG, which is quite difficult to build from scratch.
- The idea: https://carlnewton.github.io/posts/location-based-social-net...
- A build update and plan: https://carlnewton.github.io/posts/building-habitat/
- The repository: https://github.com/carlnewton/habitat
- The project board: https://github.com/users/carlnewton/projects/2
Publish your writing effortlessly. All you need is email.
Blogging, micro blogging, email newsletter, etc.
Freemium, Open source, Ruby on Rails.
I've been working with Claude to bring some new features, like printing a proof - unsuccessfully! https://github.com/OolonColoophid/bakerStreet/issues/9
I’ve explored a few different projects since January but nothing stuck. Now working on https://pitch31.ai, a SaaS that turns any OpenAPI spec (or Postman collection) into a conversational AI agent, so you can work with APIs without the need for a UI.
We are still early on the product and looking for some user feedback to improve it. Planning to launch it on major platforms in the next month!
Feel free to try it by downloading here https://learnmathstoday.com/download/
Unlike most Civil 3D portals out there, this site provides deeper explanations of Civil 3D’s inner workings and very few (currently zero) video tutorials. That’s because I believe video tutorials are already widely available elsewhere.
I originally set up this website to teach myself Civil 3D. Along the way, I decided to "open-source" my learning process so others could benefit as well.
https://dusted.dk/pages/ordeal/
Two features are missing; I might find some energy to add them at some point:
1. Right now only the entire "ordeal" can be moved, so situations with multiple fixed points are annoying. I'd like to be able to "anchor" multiple tasks and have the tasks between them stretch and shrink as needed.
2. I want to add a red horizontal line that moves to show the current time.
In my opinion, those mainframe-era dinosaurs are conceptually far superior to what we have today on servers and cloud systems.
Happy to discuss that topic or to discover new initiatives / sources.
> Sonnets of dark-held love - A sequence of poems after the 'Sonetos del amor oscuro' by Federico García Lorca - https://rikverse2020.rikweb.org.uk/book/sonnets-of-dark-held...
> End Time - A collection of noir-themed poems for the last days of the world - https://rikverse2020.rikweb.org.uk/book/end-time/
We are building a modern alternative to Jupyter, something like Cursor meets Jupyter.
This is for Daestro[1] which is a cloud agnostic job orchestrator.
I have got the main functionality working and I've been using it myself in a crude way (using sqlite client directly for data entry etc.) for about a week. It was not meant to be a serious project to begin with - I just wanted to build something to evaluate Tauri for desktop apps. I am still not 100% convinced if such a tool is worth building, so the code hasn't been published anywhere. Do you care enough about "screenshots management and cleanup" to use something like this?
In any case, I would have it as a user option to carry through the action, i.e. "the following is scheduled for deletion saving X disk space, do you wish to continue?".
Personally, I think I'll stick to pruning my screenshots folder once a year with an image manager. I'm pretty content with niri's builtin screenshot capture or something like Greenshot on windows. Just my two cents
Recently added a new feature: a Java property file editor (took me more time than I planned, but it works now).
Initially this started off as a feature for a PR on venom[1], an integration testing tool, but as I thought about it more, it made sense to make a standalone tool that can evolve a bit differently from venom. It's still very early, but it works to perform some basic actions using typical CSS selectors that playwright supports.
[0]: https://github.com/zikani03/pact [1]: https://github.com/ovh/venom
This is why I’m writing a tool to simplify administrative work with these files, so you can quickly see where spreadsheets diverge, propagate updates, make them version-controlled, and do many other good things that we have in typical app development but still lack in spreadsheet management.
It uses a lightweight, open-source agent[1] that collects data and pushes it to the backend, so it works behind firewalls and doesn’t require any open ports or scraping setup. The goal is to get useful monitoring and alerting with minimal effort: one command install and a UI-based configuration.
For the landing page, I think it'd be useful to see an actual screenshot of the UI. Also, what I'm looking for in a solution like this is to receive this information passively — I don't want to need to proactively watch a dashboard. I would want to receive email alerts when, for example, I'm running out of disk space. It says on your landing page that you provide this feature, but it also says it's configurable. Everything on Grafana is configurable, but tbh it's a PITA to configure. It'd be nice if SO just worked OOTB wrt alerts.
For the alerts it is configurable pretty quickly (you just select what you want to monitor, a threshold value, and a notification channel). But I’ll look into having some sensible defaults built in so it works out of the box
We recently added serverless functions for backend support, specifically for the bhvr[1] stack though they work with just about anything.
[0] https://orbiter.host [1] https://bhvr.dev
It was also important to me to provide a non-hyped, balanced view (hence the name), including pointing people to realistic assessments of the effectiveness of these tools and highlighting the risks and concerns.
Low friction Markdown based voice journaling.
Lets you hide 1000s of images, pdfs, videos, secrets, keys into a single portable innocent file like "Vacation_Summer_2024.mp4".
Annotation ideas: https://github.com/Kareadita/Kavita/issues/3890 (since from research, seems there is a strong need for this)
When learning a new language, it’s surprisingly hard to find free, engaging reading content at your level. Even paid options often aren’t that great. I’m trying to solve this by leveraging LLMs, building useful features for learners, and focusing on user experience.
Just launched the beta yesterday on Reddit: https://www.reddit.com/r/dreamingspanish/comments/1matsp6/fr...
The US domestic (family) court system, in Ohio in particular, has become functionally corrupt.
The Ohio bar association has co-opted the domestic court system for their own revenue-generating stream. The judges profit from kickbacks, in the form of campaign contributions every four years, from the very attorneys they are supposed to be judging. This is not just the appearance of corruption; it is a system built on it.
On top of that, all the judgments are sealed and secret, meaning there are zero records and zero accountability for any judges or lawyers in the system. This means the judge's only incentive is to rule in whatever way best serves his own interest. In practice, what little data we have shows that almost every time the judge finds in favor of whichever attorney donated the most money to his campaign.
This has been going on for the better part of 20 years and is the reason that 85% of divorces end with single parent custody in Ohio, it's because the judges are getting paid off.
I'm blowing the whistle: working on websites, printing signs, sending out mailers, court watching and taking notes on these corrupt evil fucks, doing anything and everything I can to get the word out that the Franklin County Ohio domestic court system in particular has been corrupted by a small group of judges, mainly Judge James Brown. The biggest hurdle seems to be how much money they are making, and that people believe that once a lawyer gets rich and powerful enough they're incorruptible, which is the exact opposite of the truth. The judges are the ones running a corrupt, pay-to-play system of kickbacks with themselves at the top. The judges are the top lawyers, and the lawyers are pieces of shit.
They are literally horse trading children like they're chips at the casino and what they've done to a generation of young people and millennial fathers is pure evil.
https://www.brief.audio -> Have a go, we're primarily testing internally atm, will look to make it more available and a bit more polished next week.
Primary focus is getting the content right this week with the audio script and hosts being user & content dependent. :)
Especially interested in hearing what content to pipe in (we're looking to put in our hackernews.coffee as a feed for example, but also other key news sources).
Keeping these projects separate allows us to test ideas that orbit around a theme (not 100% sure what the theme is yet, but it features personal, anti-slop content, while still using LLMs).
Do LLM agents really understand Linux?
"Understanding" meaning: can they predict the outcome of their actions before executing them? A computer system is irreversible.
A world model for computer systems is a crucial next step for computer-use agents to reliably plan their actions over long horizons.
Links to draft: https://open.substack.com/pub/disastermanagementtechnologies...
Trying to document and map as much of the publicly accessible stained glass as possible. The goal being the next time you visit a new city or town, you'll know where all the beautiful stained glass is to go see. Just recently added support for countries outside of North America. No exciting tech (vanilla HTML/CSS/JS). But excited for folks to check it out!
So, I'd add more visuals to the main page just like on your 'search' link, move the map to the bottom, let visitors know of every new location to keep them engaged and coming back for more, make a gallery sorted by most liked, viewed, commented, valuable (subjective based on history, cost, location, etc)
My other app (https://slopornot.app , an on-device AI generated text and image detector) has been stuck in App Store review limbo for almost a month now. Probably because it's an on-device ML app and not another OpenAI API wrapper, which seem to get approved by Apple, really fast. ¯\_(ツ)_/¯
I've been using it for months now, and the playlists generated are my top listened to playlists - I'm literally listening to them every day. Personally I'm super happy with it, and a few friends have also been using it quite successfully.
Are you on mobile by any chance?
You could also clear your cookies - in case you managed to log-in the previous time and they got set.
It does this by building a profile out of a small number of selected past articles, and we make the profile, and how it produces recs from the profile, transparent and editable. Especially after feedback on HN (https://news.ycombinator.com/item?id=44454305), I'm trying to get to grips with why people seem to care as much about seeing how their recommendations work as they do about the actual quality of the recommendations themselves.
I'm increasingly convinced it's due to how many opaque LLM-powered-everything and black-box recommendation algorithms there are. People want personal content systems (they are useful for sure!), but there's a lack of ones where they stay in control of what 'personal' actually looks like.
After 10 years in defense tech I’m sure you are very aware of this sort of thing, but how worried are you about accidentally leaking some non-public info? I guess one nice thing about public info is, well, it is public, so you can just use whatever’s public.
Not that I know from personal experience or anything... /sarc
It's a simpler alternative to Nextcloud and ownCloud (built with TypeScript, Deno, Fresh). Recently it got a CalDAV and CardDAV server, and I'll be working on the calendar and contacts UI next.
[1]: https://justref.app
I built a competitive rock paper scissors game. It's got ranks, match-making, and a leaderboard.
[0] https://store.steampowered.com/app/3627290/Botnet_of_Ares/
I used to post personal updates in group chats - scattered, repetitive, and easy to miss. Now each topic (like raising a child, a trip, a project) gets its own private space - an Ongoing Thing - where updates stay organized.
People get email notifications for new posts. There's no feed, no comments, no ads - just a calm way to stay connected.
We just added Partner Things so multiple people (like two parents) can post to the same Thing. Our small user base loves it.
Still figuring out marketing. Until now, it's mostly been word-of-mouth. Feedback welcome!
One thing that kept tripping me up when thinking about this was pricing - everyone is so conditioned to think that social media is free that this will be a huge hill to overcome. Your pricing, although I think if done right feels very fair, instinctively makes me recoil a bit.
It felt weird paying for email after using Gmail for so long. Even now, most people do not care enough to justify paying for it. This feels similar.
We can't afford to offer free-forever accounts to everyone - not without compromising on principles. But we are thinking of ways to make it more accessible at least for some people. Open to ideas!
The "social media" baggage, and the pricing bit you pointed, those are definitely the two biggest challenges imo. If this was not something we needed and used ourselves, we probably wouldn't have built it.
1. Inbox Toll - solves the "real human, still spam" problem by making the sender pay a fee before the email actually lands in your inbox. Set your toll as low as $0.01.
2. Auto Label - automatically label (+ optionally archive) all non-human messages into logical categories like newsletters, receipts, promotions, etc.
3. Inbox Cleaner - time-based inbox scan that bulk deletes non-essential emails.
---
junkmailcleaner.com
My creation this week; I hope some find it interesting and fun. Last night I was bored, so I created a Python app to analyse historical EuroMillions draw numbers. I added tons of features, including statistical analysis to examine patterns and bias. Good luck if it helps you win, I would love to know! Also let me know your thoughts and experiences: what you found, how it helps you understand the draw better, and any features I could add in future. I could not find many official sources for the entire historical draw data; the official lottery website only appeared to provide 6 months of historical data, and I wanted to analyse all past draws.
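As a flavour of the kind of bias check involved, here is a minimal sketch using a chi-square test for uniformity over the 50 main balls (the draw data below is made up; EuroMillions draws 5 main balls from 1 to 50):

    from collections import Counter
    from scipy.stats import chisquare

    draws = [
        [7, 12, 23, 34, 45],
        [3, 12, 19, 27, 50],
        # ... all historical draws go here
    ]

    counts = Counter(ball for draw in draws for ball in draw)
    observed = [counts.get(n, 0) for n in range(1, 51)]

    stat, p_value = chisquare(observed)   # default expectation: uniform
    print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
    # A large p-value means the frequencies are consistent with a fair draw.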
It's been useful for adding AI features to Rails apps without the complexity of managing separate services. Would love feedback from anyone working with LLMs in Ruby.
The plugin offers users a way to input their own block lists, a pre-existing one, or make use of the API which is constantly getting updated.
As a first-time WordPress plugin developer, I found the approval process a bit slow, but it's like that for a good reason.
Assume you offer a free trial with LLM capabilities. There’s a very real cost associated with multiple signup abuse. You can card capture or KYC, but now there’s more friction and greater loss of privacy.
Attracting new monthly sponsors and people willing to buy me the occasional pizza with my really bad HTML skills.
What feels really cool is using the framework to build the framework. I've got 150 AI-generated commits stacked on top of each other; you can see every prompt and accepted solution in the dev logs I produce: https://github.com/sutt/agro/blob/master/docs/dev-summary-v1...
It is a platform for prompt challenges, in leetcode style. But you act as a director for the AI writing the code. You ask what you need and the AI implements your request.
The idea is to create some awareness about the fact that you do need to know how to code to be able to steer the AI in the right direction.
work in progress though, not much to show yet
My friend and my lender has built up a giant Google Sheet which he uses with his clients and I've been slowly working to translate the logic in that Sheet into an application. It's been a lot of fun as I've been learning how to replicate the multiplayer aspects of Sheets into a React application.
Our MCP server at work relays most of its calls over to our API, so we wanted to be sure something was testing and validating the whole flow.
The approach doesn't seem popular for professional plug-ins, likely because it wasn't viable for real time until modern CPU enhancements became available. Performance scales with the frequency of the input, which is interesting and seems to be a consequence of using an iterative solver on a system of equations with the previous sample's state vector as the initial guess for the current sample.
On my MacBook M3 it requires between 50 to 70% of a single core to produce a 2x oversampled output at 48000Hz. This can be scaled back by reducing the solution tolerance bounds and get down near 25% with minimal quality loss.
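To illustrate the warm-start idea on a toy nonlinearity (a diode-clipper-style equation, not the plugin's actual circuit model): seeding Newton-Raphson with the previous sample's solution means slowly varying, low-frequency input converges in very few iterations per sample.

    import math

    def solve_sample(x_in, y_guess, k=0.5, tol=1e-9):
        """Newton-Raphson on f(y) = y + k*sinh(y) - x_in = 0, warm-started."""
        y = y_guess
        for i in range(1, 50):
            f = y + k * math.sinh(y) - x_in
            df = 1.0 + k * math.cosh(y)
            step = f / df
            y -= step
            if abs(step) < tol:
                return y, i
        return y, 50

    y = 0.0
    for x in (0.10, 0.11, 0.12, 0.60):    # slowly varying input
        y, iters = solve_sample(x, y)     # previous solution seeds the next
        print(f"x={x:.2f} -> y={y:.6f} in {iters} Newton steps")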
Here's a little announcement video I put together a week ago for an earlier version:
I have a Python astrology library [1] which I use to generate my astrology chart, as text, to feed it into an LLM (Gemini 2.5 Pro). I then ask it questions about personal characteristics (the easy parts), but, more interestingly, I can also ask it to consider some scenarios, and it answers back with several hypotheses and how well they fit my character, personal goals, etc. It's like talking to a friend that knows you very, very well.
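For anyone curious what "the chart as text" looks like in practice, here is a rough sketch with flatlib (the API details are from memory and may differ slightly; the birth data is made up):

    from flatlib import const
    from flatlib.chart import Chart
    from flatlib.datetime import Datetime
    from flatlib.geopos import GeoPos

    date = Datetime('1990/06/15', '10:30', '+01:00')   # made-up birth data
    pos = GeoPos('41n09', '8w37')
    chart = Chart(date, pos)

    lines = []
    for obj_id in (const.SUN, const.MOON, const.MERCURY, const.VENUS, const.MARS):
        obj = chart.get(obj_id)
        lines.append(f"{obj.id}: {obj.sign} {obj.signlon:.2f}")

    prompt = "Natal chart positions:\n" + "\n".join(lines)
    print(prompt)   # this text block goes into the Gemini prompt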
Lately I've been working with a technique called Primary Directions. It's a predictive technique that tries to describe events in your life by means of astrological symbols (things like "Opposition of Saturn reaches the MC by 34.5 years old", which means something "bad" for your career or social position) and use it to check if a specific scenario has worked for me previously and will work in the future, and to ask it for other scenarios that match my personal characteristics and predicted symbolic events. I find LLMs, specifically Gemini Pro, quite good at these kind of things.
I also have fun "playing" with other people's charts. For instance, last night I gave my chart and list of primary directions to Gemini and asked it if it could find who it was. It said Kurt Cobain. Quite off! But it described a lot of events that could fit my primary directions, like Kurt Cobain getting his guitar at 14, or the one at 27 where he died. I didn't die at 27 (but "life issues"), and I also got my first guitar at 14. I'm also a musician, although an amateur one.
If you're into these kinds of things, I created a gist [2] that you can feed into an LLM to talk to Kurt Cobain's chart. Note that it doesn't mention anywhere that it's Kurt Cobain. For fun, ask it something like "Considering the chart and the events predicted by the primary directions, at which ages will this person have some success or public visibility". In my case it answered, among others, 13-14 years old, something related to "success, popularity, academic, sports or artistic achievement" (Kurt Cobain seems to have got his first guitar at 14, and discovered his vocation), and 23-24, "beginning of career, marriage, or first step that puts the person on the 'map'" (the release of the Nevermind album that catapulted Nirvana to the world stage). You can then ask it to match the events to Kurt Cobain, and it will find the real-life events that seem to match quite well.
I find that LLMs are quite good at generating hypotheses and multiple scenarios, and I'm still exploring their strengths and weaknesses (and those of astrology as well).
[1] https://github.com/flatangle/flatlib/
[2] https://gist.github.com/joaoventura/68e0aed7c49c389347df98ec...
If you have more money than you know what to do with and want to support queer artists then https://comradery.co/egypturnash and https://www.patreon.com/egypturnash are great ways to funnel some of your money into my wallet :)
The AI Typing Application
It's a bit of a clusterfuck, because my earlier clone was made in C#. And Rust forces you to learn some different tricks, which expands your knowledge and is fun.
So I built Stoodious as a study guide platform that intends to give you the material and get out of your way, as opposed to the engagement-driving gamification of others. One of the killer features is being able to drill practice questions related to the specific study section you're working on - e.g. 20-30 questions about water rights, encumbrances or calculating GRM. Frequently my wife found herself studying vocab for a section but having to skim through a 100-question practice test to find related questions.
I extended the material to all 50 states' basic real estate licensing exam and am looking to add even more "professional exam" material.
And here's my tech notes - https://www.jasonfletcher.info/vjloops/insectoid-protocol.ht...
I am keeping it very simple to learn the principles of web development, as I struggle a lot with frontend.
The intention is to specialize away interpretive and meta-programming-like overhead, and then translate the result into usable source code in whichever language is desired. If the code, after specialization, has a suitable form, then you should get conventional imperative code out. If there are still functional-idioms present that haven't been specialized away, the generated code will have a more functional-style. The specialator currently targets JS and C.
The language has an expressive type-system based on self-dependent sub-typing. The type-system is too expressive for everyday use; it is intended to be used with a decidability checker.
This is an experimental work-in-progress.
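To illustrate what "specializing away interpretive overhead" means in general, here is the textbook partial-evaluation example in Python (not this language's actual output):

    # A generic function with a data-dependent loop...
    def power(x, n):
        result = 1
        for _ in range(n):
            result *= x
        return result

    # ...specialized on the statically known n = 4 becomes straight-line code:
    # the loop and the counter are gone, only the arithmetic remains.
    def power_4(x):
        t = x * x      # x^2
        return t * t   # x^4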
Spent some time last month improving array handling + error messages and UX and adding an MCP server option; Claude does pretty well already but there's some syntax/error tweaks to make it simpler for it and humans.
Then pivoting back into scheduling + materialization optimizations (identify common aggregates across several scripts and automatically build the common datasets for reuse).
Right now I am focused on building a full app for readers where the books they log feed into recommendations based on a ton of variables...
I finished writing most of the story and am now working on implementing the main enemy's AI.
I'm having quite some fun with studying and making a schooling algorithm that fits my needs.
I considered leveraging LLMs to help write some parts the scenario, but in the end I prefer doing it myself.
When I said "enemy's AI", I didn't mean machine learning, just how it behaves.
Github: https://github.com/felipevolpatto/genesis Page: felipevolpatto.github.io/genesis/
I’ve been hacking away at Jarvis on nights and weekends for the past few months. I found myself increasingly frustrated with the constant context-switching involved in using ChatGPT through a browser for quick AI tasks. So, I decided to build a native voice-command layer that works seamlessly within any app you’re currently using.
It’s still pretty early, but I set up a frictionless web demo at jarvis.ceo/demo, where you can test it instantly without installation or sign-up. I’d love your feedback!
Building, testing and improving based on feedback from several startups that have integrated it. Also using Overcentric for Overcentric itself, so I always get ideas for improvement.
What's next: more tools that are useful for startups are on the roadmap and I am exploring how LLMs can be further utilised (apart from support, session replay summaries, aiding in writing help center articles, integration) and refining pricing.
Also working on an improved landing page, but you can check out the current one at: https://overcentric.com/
I wanted three things Pocket didn’t offer:
1. A search-first experience
2. Integration with tools I already use (like Telegram)
3. A way to actually review saved content, with help from LLMs and spaced repetition (a rough scheduling sketch below)
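A rough sketch of the SM-2-style scheduling behind point 3 (the parameters follow the classic SuperMemo-2 description; this isn't the product's actual implementation):

    from dataclasses import dataclass

    @dataclass
    class Card:
        interval: int = 1       # days until the next review
        repetitions: int = 0
        ease: float = 2.5

    def review(card: Card, quality: int) -> Card:
        """SM-2 update; quality runs from 0 (blackout) to 5 (perfect recall)."""
        if quality < 3:
            card.repetitions = 0
            card.interval = 1
        else:
            card.repetitions += 1
            if card.repetitions == 1:
                card.interval = 1
            elif card.repetitions == 2:
                card.interval = 6
            else:
                card.interval = round(card.interval * card.ease)
            card.ease = max(1.3, card.ease + 0.1
                            - (5 - quality) * (0.08 + (5 - quality) * 0.02))
        return card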
Still in beta, but usable. Would love to hear thoughts or your experience on building a product like this.
site: https://michelindb.jerrynsh.com/michelin
and I wrote about my journey here: https://jerrynsh.com/building-what-michelin-wouldnt-its-awar...
Might start doing a few posts on Cloudflare WAF as I've been working with it extensively lately. Maybe it'll help me uncover some startup ideas in that space.