https://www.inclusivecolors.com/
- You can precisely tweak every shade/tint so you can incorporate your own brand colors. No AI or auto generation!
- It helps you build palettes that have simple-to-follow color contrast guarantees by design, e.g. all grade 600 colors have 4.5:1 WCAG contrast (for body text) against all grade 50 colors, such as red-600 vs gray-50, or green-600 vs gray-50.
- There's export options for plain CSS, Tailwind, Figma, and Adobe.
- It uses HSLuv for the color picker, which makes it easier to explore accessible color combinations because only the lightness slider impacts the WCAG contrast. A lot of design tools still use HSL, where the WCAG contrast jumps around whenever you change any slider, which makes finding contrasting colors much harder. (See the sketch after this list.)
- Check out the included example open source palettes and what their hue, saturation and lightness curves look like to get some hints on designing your own palettes.
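To make the HSLuv point concrete, here's a minimal sketch (not part of the tool; it assumes the `hsluv` package from PyPI, and the shade values are made up) showing that two colors with the same HSLuv lightness end up with essentially the same WCAG contrast against a light background:

    # Minimal sketch, not from inclusivecolors itself. Assumes `pip install hsluv`.
    import hsluv

    def relative_luminance(rgb):
        """WCAG 2.x relative luminance from sRGB channels in [0, 1]."""
        def linearize(c):
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(rgb1, rgb2):
        lighter, darker = sorted((relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    gray_50 = hsluv.hsluv_to_rgb([0, 0, 96])       # very light neutral
    red_600 = hsluv.hsluv_to_rgb([12, 80, 45])     # hypothetical "600" shade
    green_600 = hsluv.hsluv_to_rgb([130, 80, 45])  # same lightness, different hue

    # The two ratios come out nearly identical: WCAG contrast depends only on
    # luminance, and HSLuv lightness pins luminance regardless of hue/saturation.
    print(contrast_ratio(red_600, gray_50))
    print(contrast_ratio(green_600, gray_50))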
It's probably more for advanced users right now but I'm hoping to simplify it and add more handholding later.
Really open to any feedback, feature requests, and discussing challenges people have with creating accessible designs. :)
There's so much more to do with tools like this, and I'm really glad to see it.
My partner shares our journey on X (@hustle_fred), while I’ve been focused on building the product (yep, the techie here :). We’re excited to have onboarded 43 users in our first month, and we're looking forward to getting feedback from the HN community!
There are some Amish people who rebuild Dewalt, Milwaukee, etc. battery packs. I'd like a repairable/sustainable platform where I can actually check the health of the battery packs and replace worn-out cells as needed.
To give you an idea of the market, original batteries are about $149, and their knockoffs are around $100.
Battery-powered hand tools are heavier, clumsier, generally of lower quality, less powerful, and less long-lived than AC-powered tools.
To be honest, there's a little Amish in me: I have hand-powered tools as backup for all my AC tools.
I've been wondering for a while if the display on ebikes could also be a more open and durable part of it.
Building a new layer of hyper-personalization over the web. Instead of generating more content, it helps you reformat and interact with what already exists, turning any page, paper, or YouTube video into a summary, mind-map, podcast, infographic or chat.
The broader idea is to make the web adaptive to how each person thinks and learns.
We have a fun group working on it on Discord (find the discord invite in the How To)
- No sign-up, works entirely in-browser
- Live PDF preview + instant download
- EU VAT support
- Shareable invoice links
- Multi-language (10+) & multi-currency
- Multiple templates (incl. Stripe-style)
- Mobile-friendly
GitHub: https://github.com/VladSez/easy-invoice-pdf
Would love feedback, contributions, or ideas for other templates/features.
https://github.com/VladSez/easy-invoice-pdf/blob/main/LICENS...
I'm putting a bunch of security tools / data feeds together as a service. The goal is to help teams and individuals run scans/analysis/security project management for "freemium" (certain number of scans/projects for free each month, haven't locked in on how it'll pan out fully $$ wise).
I want to help lower the technical hurdles to running and maintaining security tools for teams and individuals. There are a ton of great open source tools out there, but most people either don't know about them or don't have the time to do a technical deep dive into each. So I'm adding utilities and tools to the platform by the day.
Likewise, there's an expert platform built into the system for getting help with your security problems. (Currently an expert team consisting of [me].) Longer term, I'm working on some AI plugins to help alert on CVEs custom to you, generate automated scans, and some other fun stuff.
https://meldsecurity.com/ycombinator (if you're interested in free credits)
AI sprite animator for 2D video games.
It is a tool that lets you create whiteboard explainers.
You can prompt it with an idea or upload a document and it will create a video with illustrations and voiceover. All the design and animations are done using AI APIs; you don't need any design skills.
Here is a video explainer of the popular "Attention is all you need" paper.
https://www.youtube.com/watch?v=7x_jIK3kqfA
Would love to hear some feedback
The animations / drawings themselves are solid too. I think there's more to play with wrt the dimensions and space of the background. It would be nice to see it zoom in and out for example.
how does it work with long papers? will it ever work with small books?
will try it out tomorrow again
yes it should work.
> i can’t upload the document
Could you please drop an email to rahul at magnetron dot ai with the document. I will set things up for you
My first career was in sales. And most of the time these interactions began with grabbing a sheet of paper and writing to one another. I think small LLMs can help here.
Currently making use of APIs, but I think small models on phones will be good enough soon. Just completed my MVP.
Last month's “what are you working on” thread prompted me to upload this game to itch, and one month later I've got a small community, lots of feedback, and iterations. It brought a whole new life to a project that was on the verge of being abandoned.
So, I’m really grateful for this thread. https://explodi.itch.io/microlandia
Write a dev blog in Word format using Tritium, jot down bugs or needs, post blog, improve and repeat.
Next up is adding more models and comparing which one gives better results.
Drones are real bastards - there are a lot of startups working on anti-drone systems and interceptors, but most of them are using synthetic data. The data I'm collecting is designed to augment the synthetic data, so anti-drone systems can get closer to field testing.
Some are small tech jokes, while others were born from curiosity to see how LLMs would behave in specific scenarios and interactions.
I also tried to use this collection of experiments as a way to land a new job, but I'm starting to realize it might not be serious enough :)
Happy to hear what you think!
https://github.com/skanga/Conductor
Conductor is an LLM-agnostic framework for building sophisticated AI applications using a subagent architecture. It provides a robust and flexible platform for orchestrating multiple specialized AI agents to accomplish complex tasks, with features like LLM-based planning, memory persistence, and dynamic tool use.
This project is inspired by the concepts outlined in "The Rise of Subagents" by Phil Schmid at https://www.philschmid.de/the-rise-of-subagents and it aims to provide a practical implementation of this powerful architectural pattern.
Working on faceted search for logs and CLI client now and trying to share my progress on X.
Last year, PlasticList found plastic chemicals in 86% of tested foods—including 100% of baby foods they tested. Around the same time, the EU lowered its “safe” BPA limit by 20,000×, while the FDA still allows levels roughly 100× higher than Europe’s new standard.
That seemed solvable.
Laboratory.love lets you crowdfund independent lab testing of the specific products you actually buy. Think Consumer Reports × Kickstarter, but focused on detecting endocrine disruptors in your yogurt, your kid’s snacks, or whatever you’re curious about.
Find a product (or suggest one), contribute to its testing fund, and get full lab results when testing completes. If a product doesn’t reach its goal within 365 days, you’re automatically refunded. All results are published publicly.
We use the same ISO 17025-accredited methodology as PlasticList.org, testing three separate production lots per product and detecting down to parts-per-billion. The entire protocol is open.
Since last month’s “What are you working on?” post:
- 4 more products have been fully funded (now 10 total!)
- That’s 30 individual samples (we do triplicate testing on different batches) and 60 total chemical panels (two separate tests for each sample, BPA/BPS/BPF and phthalates)
- 6 results published, 4 in progress
The goal is simple: make supply chains transparent enough that cleaner ones win. When consumers have real data, markets shift.
Browse funded tests, propose your own, or just follow along: https://laboratory.love
It's interesting that a bunch of the funded products have been funded by a single person.
Do you know if it's the producers themselves? Worried rich people?
I've yet to have any product funded by a manufacturer. I'm open to this, but I would only publish data for products that were acquired through normal consumer supply chains anonymously.
2. If you find regulation-violating (or otherwise serious) levels of undesirable chemicals, do you... (a) report it to FDA; (b) initiate a class-action lawsuit; (c) short the brand's stock and then news blitz; or (d) make a Web page with the test results for people to do with it what they will?
3. Is 3 tests enough? On the several product test results I clicked, there's often wide variation among the 3 samples. Or would the visualization/rating tell me that all 3 numbers are unacceptably bad, whether it's 635.8 or 6728.6?
4. If I know that plastic contamination is a widespread problem, can I secretly fund testing of my competitors' products, to generate bad press for them?
5. Could this project be shut down by a lawsuit? Could the labs be?
For example, there are two individuals who own the same $100k machine for testing the performance of loudspeakers.
https://www.audiosciencereview.com/forum/index.php
https://www.erinsaudiocorner.com/
Both of them do measurements and YouTube videos. Neither one has a particularly good index of their completed reviews, let alone tools to compare the data.
I wish I could subscribe to support a domain like “loudspeaker spin tests” and then have my donation paid out to these reviewers based on them publishing new high-quality reviews with good data that is published to a common store.
a tool to help California homeowners lower their property taxes. This works for people who bought in the past years' low-interest environment and are overpaying in taxes because of that.
Feel free to email me, if you have questions: phl.berner@gmail.com
https://apu.software/truegain/
Then it’s on to the next project.
To provide trading insights for users.
I started this out of frustration that there was no good tool I could use to share photos from my travels and of my kids with friends and family. I wanted a beautiful web gallery that works on all devices, where I can add rich descriptions, and that I could share with a simple link.
Turned out more people wanted this (got 200+ GitHub stars for the V1), so I recently released the V2 and I'm working on it with another dev. Down the road we plan a SaaS offer for people who don't want to fiddle with the CLI and self-host the gallery.
The insight: your architecture diagram shouldn't be a stale PNG in Confluence. It should be your war room during incidents.
Going to be available as both web app and native desktop.
For example, 1 PCR reaction (a common reaction used to amplify DNA) costs about $1 each, and we're doing tons every day. Since it is $1, nobody really tries to do anything about it - even if you do 20 PCRs in one day, eh it's not that expensive vs everything else you're doing in lab. But that calculus changes once you start scaling up with robots, and that's where I want to be.
Approximately $30 of culture media can produce >10,000,000 reactions worth of PCR enzyme, but you need the right strain and the right equipment. So, I'm producing the strain and I have the equipment! I'm working on automating the QC (usually very expensive if done by hand) and lyophilizing for super simple logistics.
My idea is that every day you can just put a tube on your robot and it can do however many PCR reactions you need that day, and the next day, you just throw it out! Bringing the price from $1 each to $0.01, plus greatly simplifying logistics!
Of course, you can't really make that much money off of this... but will still be fun and impactful :)
Some things that would be cool
- Along your lines: In general, cheap automated setups for PCR and gels
- Cheap/automatic quantifiable gels. E.g. without needing a kV supply capillary, expensive QPCR machines etc.
- Cheaper enzymes in general
- More options for -80 freezers
- Cheaper/more automated DNA quantification. I got a v1 Qubit which gets the job done, but new ones are very expensive, and reagent costs add up.
- Cheaper shaking incubator options. You can get cheap shakers and incubators, but not cheap combined ones... which you need for pretty much everything. Placing one in the other can work, but is sub-optimal due to size and power-cord considerations.
- More centrifuges that can do 10kG... this is the minimum for many protocols.
- Ability to buy pure ethanol without outrageous prices or hazardous shipping fees.
- Not sure if this is feasible but... reasonable cost machines to synthesize oligos?
Haunted house trope, but it's a chatbot. Not done yet, but it's going well. The only real blocker is that I ran into the parental controls on the commercial models right away when trying to make gory images, so I had to spin up my own generators. (Compositing by hand definitely taking forever.)
- 30k requests/month for free
- simple, stable, and fast API
- MCP Server for AI-related workloads
It’s an iOS app to help track events and stats about my day as simple dots. How many cups of coffee? Did I take my supplements? How did I sleep? Did I have a migraine? Think of it like a digital bullet journal.
Then visualizing all those dots together helps me see patterns and correlations. It’s helped me cut down my occurrence of migraines significantly. I’m still just in the public beta phase but looking forward to a full release fairly soon.
Would love to hear more feedback on how to improve the app!
It's already working, and slightly faster than the CPU version, but that's far from an acceptable result. The occupancy (which is a term I first learned this week) is currently at a disappointing 50%, so there's a clear target for optimisation.
Once I'm satisfied with how the code runs on my modest GPU at home, the plan is to use some online GPU renting service to make it go brrrrrrrrrr and see how many new elements I can find in the series.
One of the best I’ve seen in this thread!
Good luck with your mission!
man, myself needs work
https://github.com/jakeroggenbuck/kronicler
This is why I wrote kronicler to record performance metrics while being fast and simple to implement. I built my own columnar database in Rust to capture and analyze these logs.
To capture logs, `import kronicler` and add `@kronicler.capture` as a decorator to functions in Python. It will then start saving performance metrics to the custom database on disk.
You can then view these performance metrics by adding a route to your server called `/logs` where you return `DB.logs()`. You can paste your hosted URL into the settings of usekronicler.com (the online dashboard) and view your data with a couple charts. View the readme or the website for more details for how to do this.
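Roughly what that setup looks like, based on the description above (a minimal sketch assuming a Flask server; the exact way the database handle is exposed is an assumption, not necessarily kronicler's real API):

    # Minimal sketch of the capture + /logs workflow described above.
    # Assumes Flask; the `kronicler.DB` handle is an assumption for illustration.
    import kronicler
    from flask import Flask, jsonify

    app = Flask(__name__)

    @kronicler.capture            # records performance metrics for this function
    def expensive_work(n):
        return sum(i * i for i in range(n))

    @app.route("/compute")
    def compute():
        return jsonify(result=expensive_work(100_000))

    @app.route("/logs")           # point usekronicler.com at this hosted URL
    def logs():
        return jsonify(kronicler.DB.logs())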
I'm still working on features like concurrency and other overall improvements. I would love some feedback to help shape this product into something useful for you all.
Thanks! - Jake
Still working on growing the audience.
Last month was an improvement. This month I can't concentrate for long and I get distracted very easily, but I seem to be able to do more with what I have. A small sense of ambition that I might be able to do bigger things, and might not need to drop out of tech and get a simple job, is returning.
I am trying to use this inhibited, fractured state to clarify thoughts about useless technology and distractions, and about what really matters, because (without wishing to sound haughty) I used to be unusually good at a lot of tech stuff, and now I am not. It is sobering but it is also an insight into what it might be like to be on the outside of technology bullshit, looking in.
I'm calling it a "Micro Functions as a Service" platform.
What it really is, is hosted Lua scripts that run in response to incoming HTTP requests to static URLs.
It's basically my version of the old https://webscript.io/ (that site is mostly the same as it was as long as you ignore the added SEO spam on the homepage). I used to subscribe to webscript and I'd been constantly missing it since it went away years ago, so I made my own.
I mostly just made this for myself, but since I'd put so much effort into it, I figure I'm going to try to put it out there and see if anyone wants to pay me to use it. Turns out there's a _lot_ of work that goes into abuse prevention when you're running code from literally anyone on the internet, so it's not ready to actually take signups yet. But there is a demo on the homepage.
- A front-end library that generates 10kb single-html-file artifacts using a Reagent-like API and a ClojureScript-like language. https://github.com/chr15m/eucalypt
- Beat Maker, an online drum machine. I'm adding sample uploads now with a content accessible storage API on the server. https://dopeloop.ai/beat-maker
- Tinkering with Nostr as a decentralized backend for simple web apps.
This month doubling down on a small house cleaning business that I acquired https://shinygoclean.com
Instead of code, it seems like SOPs have become the new love language!
Code obeys logic. People obey trust. That’s the real debugging. Still learning!
https://github.com/scallyw4g/bonsai
I also wrote a metaprogramming language which generates a lot of the editor UI for the engine. It's a bespoke C parser that supports a small subset of C++, which is exposed to the user through a 'scripting-like' language you embed directly in your source files. I wrote it as a replacement for C++ templates and in my completely unbiased opinion it is WAY better.
It has some rough edges, but I use it a ton and get a lot of value out of it.
I just took Qwen-Image and Google’s image AIs for a spin and I keep a side by side comparison of many of them.
https://generative-ai.review/2025/09/september-2025-image-ge...
and I evaluated all the major 3D Asset creators:
https://generative-ai.review/2025/08/3d-assets-made-by-genai...
AppGoblin is a free place to do app research: understanding which companies apps use to monetize, tracking where data is sent, and seeing what kinds of ads are shown.
I want to write VoIP plugins using a modern toolchain and benefit from the wider crate ecosystem.
It’s got the base instruction set implemented and working, plus a CRT shader, resizable display, and swappable color palettes.
I’m working on sound and a visual debugger for it.
I have some work to do on the Haskell TigerBeetle client and the Haskell postgresql logical replication client library I wrote too.
(But also just launched https://ChessHoldEm.net this weekend)
right now, it’s a better way to showcase your really specific industry skills and portfolio of 3D assets (i.e., “LinkedIn for VR/XR”) with hiring layered on
starting to add onto the current perf analysis tools and think more about how to get to a “lovable for VR/XR”
And an agentic news digest service which scrapes a few sources (like HackerNews) for technical news and creates a daily digest, which you can instruct and skew with words.
I am building a tool that gives automated qualitative feedback on websites. This is the early and embarrassing MVP: https://vibetest-seven.vercel.app/product
You provide your URL and an LLM browses your site and writes up feedback. Currently working on increasing the quality of the feedback. Trying to start with a narrower set of tests that give what I think is good feedback, then increase from there.
If a tool like this analyzed your website, what would you actually want it to tell you? What feedback would be most useful?
Nice to call it feature complete and move on!
It's an AI webapp builder with a twist: I proxy all OpenAI API calls your webapp makes and charge 2x the token rate; so when you publish your webapp onto a subdomain, the users who use your webapp will be charged 2x on their token usage. Then you, the webapp creator, get 80% of what's left over after I pay OpenAI (and I get 20%).
It's also a fun project because I'm making code changes a different way than most people are: I'm having the LLM write AST modification code; my site immediately runs the code spit out by the LLM in order to make the changes you requested in a ticket. I blogged about how this works here: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...
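The general idea of programmatic AST edits looks something like this (illustrative only, using Python's stdlib `ast` module; the actual product transforms the webapp's own code, not this toy example):

    # Illustrative sketch of AST modification: parse, transform, unparse.
    # Not the author's implementation; just the general technique.
    import ast

    source = "def greet(name):\n    return 'Hello, ' + name\n"
    tree = ast.parse(source)

    class RenameFunction(ast.NodeTransformer):
        """Rename one function and leave everything else intact."""
        def visit_FunctionDef(self, node):
            if node.name == "greet":
                node.name = "greet_user"
            return node

    new_tree = ast.fix_missing_locations(RenameFunction().visit(tree))
    print(ast.unparse(new_tree))  # prints the modified source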
I'm a robotics engineer by training, this is my first public launch of a web app.
Try it: https://app.veila.ai (free tier, no email required)
- What it is:
- Anonymous AI chat via a privacy proxy (provider sees our server, not your IP or account info)
- End‑to‑end encrypted history, keys derived from password and never leave your device
- Pay‑as‑you‑go; switch models mid‑chat (OpenAI now; Claude, Gemini and others planned)
- Practical UX: sort chats into folders, Markdown, copyable code blocks, mobile‑friendly
- Notes/limits:
- Not self‑hosted: prompts go to third‑party APIs
- If you include identifying info, upstream sees it
- Prompts can take a while sometimes, because reasoning is set to "medium" for now. Plan to make this adjustable in the future.
- Looking for feedback:
- What do you need to trust this? Open source? Independent audit?
- Gaps in the threat model I'm missing
- Which UI features and AI models you'd want next
- Any UX rough edges (esp. mobile)
- Learn more:
- Compare Veila to ChatGPT, Claude, Gemini, etc. (best viewed on desktop): https://veila.ai/docs/compare.html
- Discord: https://discord.gg/RcrbZ25ytb
- More background: https://veila.ai/about.html
Homepage: https://veila.ai
Happy to answer any questions.
In this space, it is more about trust and what you have done in the past than anything else. Audits and whatnot are nice, but I need to be able to trust that your decisions will be sound. Think how Steam's Gabe gained his reputation. Not exactly an easy feat these days.
FWIW, favorited for testing.
I'd love to hear your feedback if you get around to testing Veila, e.g. at hey@veila.ai.
I just released the changelog 5 minutes ago (https://intrasti.com/changelog). I went with a directory-based approach using the international date format YYYY-MM-DD, so in the source code it's ./changelog/docs/YYYY/MM/DD.md. Seems to do the trick and it's ready for pagination, which I haven't implemented yet.
It’s been a fun, practical way to continuously evaluate the latest models two ways - via coding assistance & swapping between models to power the conversational AI voice partner. I’ve been trying to add one big new feature each time the model generation updates.
The next thing I want to add is a self improving feedback loop where it uses user ratings of the calls & evaluations to refine the prompts that generate them.
Plus it has a few real customers which is sweet!
Besides the LLM experimentation, this project has allowed me to dive into interesting new tech stacks. I'm working in Hono on Bun, writing server-side components in JSX and then updating the UI via htmx. I'm really happy with how it's coming together so far!
[0] https://github.com/stryan/materia and/or https://primamateria.systems/
Beyond that, just regular random stuff that comes up here and there, but, for once, my hdd with sidelined projects is slowly being worked through.
The goal was to make the learning material very malleable, so all content can be viewed through different "lenses" (e.g. made simpler, more thorough, from first principles, etc.). A bit like Wikipedia, it also allows for infinite depth/rabbit-holing. Each document links to other documents, which link to other documents (...).
I'm also currently in the middle of adding interactive visualizations which actually work better than expected! Some demos:
We're pretty jazzed.
It makes tricky functions like torch.gather and torch.scatter more intuitive by showing element-level relationships between inputs and outputs.
For any function, you can click elements in the result to see where they came from, or elements in the inputs to see exactly how they contribute to the result. I found that visually tracing tensor operations clarifies indexing, slicing, and broadcasting in ways that reading the docs can't.
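For example, the element-level relationship it visualizes for torch.gather along dim=1 is just out[i][j] = x[i][index[i][j]] (a minimal sketch, not WhyTorch's code):

    # The element-level mapping for torch.gather along dim=1:
    #   out[i][j] = x[i][index[i][j]]
    import torch

    x = torch.tensor([[1, 2, 3],
                      [4, 5, 6]])
    index = torch.tensor([[2, 0],
                          [1, 2]])
    out = torch.gather(x, dim=1, index=index)
    print(out)  # tensor([[3, 1],
                #         [5, 6]])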
You can also jump straight to WhyTorch from the PyTorch docs pages by modifying the base URL directly.
I launched a week or two back and now have the top post of all time on r/pytorch, which has been pretty fun.
It's for doing realtime "human cartography", to make maps of who we are together in complex large-scale discourse (even messy protest).
https://patcon.github.io/polislike-human-cartography-prototy...
Newer video demo: https://youtu.be/C-2KfZcwVl0
It's for exploring human perspective data -- agree, disagree, pass reactions to dozens or hundreds of belief statements -- so we can read it as if it were Google Maps.
My operating assumption is that if a critical mass of us can understand culture and value clashes as mere shapes of discourse, and we can all see it together, then we can navigate them more dispassionately and with clear heads. Kinda like reading a map or watching the weather report -- islands that rise from oceans, or plate tectonics that move like currents over months, and terraform the human landscape -- maybe if we can see these things together, we'll act less out of fear of fun-house caricatures. (E.g., "Hey, dad, it seems like the peninsula you're on is becoming a land bridge toward the alt right corner. I feel a little bummed about that. How do you feel about it?")
(It builds on data and the mathematical primitives of a great tool called Pol.is, which I've worked with for almost a decade.)
Experimental prototype of animating between projections: https://main--68c53b7909ee2fb48f1979dd.chromatic.com/iframe.... (advanced)
We were featured on our local NPR syndicate which is neat: https://laist.com/news/los-angeles-activities/new-grassroots...
Since this is Hacker News, I'll add that I'm building the website and archiving system using Haskell and htmx, but what is currently live is a temp static HTML site. https://github.com/solomon-b/kpbj.fm
This might be a naive question which you've probably been asked plenty of times before, so I'm sorry if I'm being tedious here.
Is it really worth the effort and expense to have a real radio station these days? Wouldn't an online stream be just as effective if it was promoted well locally?
A few years ago a friend who was very much involved in a local community group which I was also somewhat interested in asked me if I wanted to help build a low power FM station. He asked me because I know something about radio since I was into ham radio etc.
I was skeptical that it was worth the effort. The nerdy part of me would have enjoyed doing it, but I couldn't help thinking that an online stream would probably reach as many people without the hassle and expense of a transmitter, antenna, etc.
I know it's a toss up. Every car has an FM radio. Not everyone is going to have a phone plugged in to Android Auto or Apple Car Play and have a good data plan and have a solid connection.
I also pointed out that the technical effort is probably the small part compared to producing interesting content.
I was motivated to build this because I found that many great personal finance and budget apps didn't offer integrations with the banks I used, which is understandable given the complexity and costs involved. So I wanted to tackle this problem and help build the missing open banking layer for personal finance apps, with very low costs (a few dollars a month) and a very simple API, or built-in integrations.
Still working on making this sustainable, but it's been quite a learning experience so far, and I'm quite excited to see it already making a difference for so many people :)
On-site surveys for eCommerce and SaaS. It's been an amazing ride leveling up back and forth between product, design, and marketing. Marketing is way more involved than most people on this site realize...
This is a free license plate tracking game for families on road trips. Currently adding more OAuth providers, and some time zone features.
It runs fully on-device, including email classification and event extraction
Building desktop environment in the cloud with built in cloud storage, AI, processing, app ecosystem and much more!
A simple document translator that preserves your file's formatting and layout.
Merchants who want to sell on Etsy or Shopify either have to pay a listing fee or pay per month just to keep an online store on the web. Our goal is to provide a perpetually free marketplace that is powered solely off donations. The only fees merchants pay are the Stripe fees, and it's possible that at some volume of usage we will be able to negotiate those down.
You can sell digital goods as well as physical goods. Right now in the "manual onboarding" phase for our first batch of sellers.
For digital goods, purchasers get a download link for files (hosted on R3).
For physical goods, once a purchase comes through, the seller gets an SMS notification and a shipping label gets created. The buyer gets notified of the tracking number and on status changes.
We use Stripe Connect to manage KYC (know your customer) identities so we don't store any of your sensitive details other than your name and email. Since we are in the process of incorporating as a 501(c)(3) nonprofit, we are only serving sellers based in the United States.
The mission of the company is to provide entrepreneurial training to people via our online platform, as well as educational materials to that aim.
I want to be able to script prices, product descriptions, things like that. And see them show up in a request on sale.
I believe the old internet is still alive and well. Just harder to find now.
People won't read and skim all of those CTAs; instead, try to give them an "aha, interesting" moment ASAP.
You can read more about it and watch a demo: https://blog.with.audio/posts/web-reader-tts
I built this to get some traffic to my main project's website using a free tool people might like. The main project: https://desktop.with.audio -> a one-time-payment text-to-speech app with text highlighting, MP3 export, and other features on macOS (ARM only) and Windows.
We’re working directly with partner housing unions and charities in Britain and Ireland to build the first central database of rogue landlords and estate agents. Users can search an address and see if it’s marked as rogue/dangerous by the local union, as well as whether you can expect to see your deposit returned, maintenance, communication - etc.
After renting for close to a decade, it’s the same old problems with no accountability. We wanted to change this, and empower tenants to share their experiences freely and easily with one another.
We’re launching in November, and I’m very excited to announce our partner organisations! We know this relies on a network effect to work, and we’re hoping to run it as a social venture. I welcome any feedback.
Take a picture of an event flyer or paste in some text. The event gets added to your calendar.
-----
COCKTAIL-DKG - A distributed key generation protocol for FROST, based on ChillDKG (but generalized to more elliptic curve groups) -- https://github.com/C2SP/C2SP/pull/164 | https://github.com/C2SP/C2SP/issues/159
-----
A tool for threshold signing software releases that I eventually want to integrate with SigStore, etc. to help folks distribute their code-signing. https://github.com/soatok/freeon
-----
Want E2EE for Mastodon (and other ActivityPub-based software), so you can have encrypted Fediverse DMs? I've been working on the public key transparency aspect of this too.
Spec: https://github.com/fedi-e2ee/public-key-directory-specificat...
Implementation: Coming soon. The empty repository is https://github.com/fedi-e2ee/pkd-server-go but I'll be pushing code in the near future.
You can read more about this project here: https://soatok.blog/category/technology/open-source/fedivers...
It's been a great project to understand how design depends on a consistent narrative and purpose. At first I put together elements I thought looked good but nothing seemed to "work" and it's only when I took a step back and considered what the purpose and philosophy of the design was that it started to feel cohesive and intentional.
I'll never be a designer but I often do side projects outside my wheelhouse so I can build empathy for my teammates and better speak their language.
(It's a frontend to make searching eBay actually pleasant)
I started using it as a tool call in security scanning (think something like claude-code for security scanning).
Give it a read if you're interested:
https://codepathfinder.dev/blog/codeql-oss-alternative/
https://codepathfinder.dev/blog/introducing-secureflow-cli-t...
Happy to discuss!
So I started working on Librario, an ISBN database that fetches information from several other services, such as Hardcover.app, Google Books, and ISBNDB, merges that information, and returns something more complete than using them alone. It also saves that information in the database for future lookups.
You can see an example response here[1]. Pricing information for books is missing right now because I need to finish the extractor for those, genres need some work[2], and having a 5-month-old baby makes development a tad slow, but the service is almost ready for a preview.
The algorithm to decide what to merge is the hardest part, in my opinion, and very basic right now. It's based on a priority and score system for now, where different extractors have different priorities, and different fields have different scores. Eventually, I wanna try doing something with machine learning instead.
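A rough illustration of what a priority-based merge like that can look like (hypothetical field names, extractors, and weights; this is not Librario's actual code, and it drops the per-field scoring for simplicity):

    # Hypothetical sketch: for each field, keep the value from the
    # highest-priority extractor that actually has one.
    EXTRACTOR_PRIORITY = {"isbndb": 3, "hardcover": 2, "google_books": 1}

    def merge_records(records):
        """records: {extractor_name: {field: value}} -> single merged record."""
        merged, winner = {}, {}
        for extractor, fields in records.items():
            priority = EXTRACTOR_PRIORITY.get(extractor, 0)
            for field, value in fields.items():
                if value in (None, "", []):
                    continue  # ignore empty data from any source
                if field not in merged or priority > winner[field]:
                    merged[field] = value
                    winner[field] = priority
        return merged

    print(merge_records({
        "google_books": {"title": "Example", "page_count": 320},
        "isbndb": {"title": "Example Book", "authors": ["A. Author"]},
    }))
    # {'title': 'Example Book', 'page_count': 320, 'authors': ['A. Author']}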
I'd also like to add book summaries to the data somehow, but I haven't figured out a way to do this legally yet. For books in the public domain I could feed the entire book to an LLM and ask them to write a spoiler-free summary of the book, but for other books, that'd land me in legal trouble.
Oh, and related books, and things of the sort. But I'd like to do that based on the information stored in the database itself instead of external sources, so it's something for the future.
Last time I posted about Shelvica some people showed interest in Librario instead, so I decided to make it something I can sell instead of just a service I use in Shelvica[3], hence why I'm focusing more on it these past two weeks.
[1]: https://paste.sr.ht/~jamesponddotco/de80132b8f167f4503c31187...
[2]: In the example you'll see genres such as "English" and "Fiction In English", which is mostly noise. Also things like "Humor", "Humorous", and "Humorous Fiction" for the same book.
[3]: Which is nice, cause that way there are two possible sources of income for the project.
Funny thing is, the advisor started to tell me to sell last week, and so I did. Then last Friday happened. Interesting.
It's a browser extension right now, and the platform integrates with SSO providers and AI APIs to help discover shadow AI, enforce policies, and create audit trails. Think observability for AI adoption, but also Grammarly, since we help coach end users to better behavior/outcomes.
Early days but the problem is real, have a few design partners in the F500 already
Basically, think of it as "Pokemon the anime, but for real". We allow you to use your voice to talk to, command, and train your monster. You and your monster are in this sandbox-y, dynamic environment where your actions have side effects.
You can train to fight or just to mess around.
Behind the scenes, we are converting the player's voice into code in real time to give life to these monsters.
If you're interested, reach out!
I have been trying to study Chinese on my own for a while now and found it very frustrating to spend half the time just looking for simple content to read and listen to. Apps and websites exist, but they usually only have very little content or they ramp up the difficulty too quickly.
Now that LLMs and TTS are quite good, I wanted to try them out for language learning. The goal is to create a vast number of short AI-generated stories to bridge the gap between knowing a few characters and reading real content in Chinese.
Curious to see if it is possible to automatically create stories which are comfortable to read for beginners, or if they sound too much like AI-slop.
I'm trying to use this to create stories that would be somewhat unreasonable to write otherwise. Branching stories (i.e., CYOA), multiperspective stories, some multimedia. I'm still trying to figure out the narrative structures that might work well.
LLMs can overproduce and write in different directions than is reasonable for a regular author. Though even then I'm finding branching hard to handle.
The big challenges are rhythm, pacing, following an arc. Those have been hard for LLMs all along.
The solution? Have the cartridge keep track of CPU parity (there's no simple way to do this with just the CPU), then check that, skip one cycle if needed... and very carefully cycle time the rest of the routine, making sure that your reads land on safe cycles, and your writes land in places that won't throw off the alignment.
But it works! It's quite reliable on every console revision I've thrown it at so far. Suuuper happy with that.
https://github.com/olooney/jellyjoin
It hits a sweet spot by being easier to use than record linkage[3][4] while still giving really good matches, so I think there's something there that might gain traction.
[1]: https://platform.openai.com/docs/guides/embeddings
[2]: https://en.wikipedia.org/wiki/Hungarian_algorithm
Thinking about: A new take on LinkedIn/web-of-trust, bootstrapped by in-person interactions with devices. It seems that the problem of proving who is actually human and getting a sense of how your community values you might be getting more important, and now devices have some new tools to bring that within reach.
The goal is to provide a fully typed nodeJS framework that allows you to write a typescript function once and then decide whether to wire it up to http, websocket, queues, scheduled tasks, mcp server, cli and other interactions.
You can switch between serverless and server deployments without any refactoring / completely agnostic to whatever platform you're running it on.
It also provides services, permissions, auth, eventhub, advanced tree shaking, middleware, schema generation and validation and more
The way it works is by scanning your project via the TypeScript compiler and generating a bootstrap file that imports everything you need (hence tree shaking), and allows you to filter down your backend to only the endpoints needed (great for plucking out individual entry points for serverless). It also generates typed fetch, RPC, websocket, and queue client files. Types are pretty much most of what pikku is about.
Think honoJS and nestJS sort of combined together and also decided to support most server standards / not just http.
Website needs love; currently working on a release to add CLI support and full tree shaking.
It clearly supports different runtimes than node with different capabilities and limitations.
It seems more of a runtime-agnostic web server.
https://github.com/RoyalIcing/Orb
It’s implemented in Elixir and uses its powerful macro system. This is paired with a philosophy of static & bump allocation, so I’m trying to find a happy medium of simplicity with a powerful-enough paradigm yet generate simple, compact code.
https://jsassembler.fly.dev/
https://csharpassembler.fly.dev/
https://goassembler.fly.dev/
https://rustassembler.fly.dev/
https://nodeassembler.fly.dev/
https://phpassembler.fly.dev/
The purpose is to find out if I can build declarative software in multiple languages (Rust, Go, Node.js, PHP, and JavaScript) while knowing only one language (C#), without understanding the implementation deeply.
Another purpose is to validate AI models and their efficiency: development using AI is hard but highly productive, and having declarative rules to recreate the implementation may be used to validate models.
Currently I am convinced it is possible to build, but I'm now working on creating a solid foundation with tests of the two assembler engines, structure dumps, and logging outputs, so that the AI can use them to fix issues iteratively.
Need to add more declarative rules and implement a full-stack web assembler to see if the AI will hit the technical debt that slows/stops progress. Only time will tell.
It's an API that allows zero-knowledge proofs to be generated in a streaming fashion, meaning ZKPs that use way less RAM than normal.
The goal is to let people create ZKPs of any size on any device. ZKPs are very cool but have struggled to gain adoption due to the memory requirements. You usually need to pay for specialized hardware or massive server costs. Hoping to help fix the problem for devs
It's mostly where I want it to be now, but still need to automate the ingest of USPTO data. I'd really like it to show a country flag on the search results page next to each item, but inferring the brand name just from the item title would probably need some kind of natural language processing; if there's even a brand in the title.
No support for their mobile layout. Do many people buy from their phone?
The goal is to catch vulnerabilities early in the SDLC by running an agentic loop that autonomously hunts for security issues in codebases. Currently available as a CLI tool and VS Code extension. I've been actively using it to scan WordPress and Odoo plugins and found several privilege escalation vulnerabilities. I have documented them in a blog post here: https://codepathfinder.dev/blog/introducing-secureflow-cli-t...
It can process a set of 3-hour audio files in ~20 mins.
I recorded a demo video of how it works here: https://www.youtube.com/watch?v=v0KZGyJARts&t=300s
[1] https://github.com/naveedn/audio-transcriber
I alluded to building this tool on a previous HN thread: https://news.ycombinator.com/item?id=45338694
https://github.com/westonwalker/primelit
Drawing a lot of inspiration from interval.com. It was an amazing product but was a hosted SaaS. I'm exploring taking the idea to the .NET ecosystem and also making it a NuGet package that can be installed and served through any ASP.NET project.
What I'm building at the moment is a server monitoring solution for STUN, TURN, MQTT, and NTP servers. I wanted the software for this to be portable, so I wrote a simple work queue myself. Python doesn't have a built-in linked list, which is the data structure I'm using for the queues. Linked lists allow O(1) deletes, which you can't really get with most Python data structures -- important for work items when you're moving work between queues (see the sketch below).
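A minimal sketch of that idea (not the project's actual code): a doubly linked list where holding a reference to a node makes removing or moving a work item O(1).

    # Minimal sketch, not the project's code: O(1) removal once you hold the node.
    class Node:
        __slots__ = ("item", "prev", "next")
        def __init__(self, item):
            self.item, self.prev, self.next = item, None, None

    class WorkQueue:
        def __init__(self):
            self.head = self.tail = None

        def push(self, item):
            node = Node(item)
            if self.tail is None:
                self.head = self.tail = node
            else:
                node.prev, self.tail.next = self.tail, node
                self.tail = node
            return node  # keep this handle to remove/move the item later in O(1)

        def remove(self, node):
            if node.prev:
                node.prev.next = node.next
            else:
                self.head = node.next
            if node.next:
                node.next.prev = node.prev
            else:
                self.tail = node.prev
            node.prev = node.next = None
            return node.item

    # Moving a work item between queues is then two O(1) operations:
    pending, in_progress = WorkQueue(), WorkQueue()
    handle = pending.push("check STUN server 203.0.113.7")
    in_progress.push(pending.remove(handle))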
For the actual workers I keep things very simple. I make something like 100 independent Python processes, each with an event loop. This uses up a crapload of memory, but the advantage is that you get parallel execution without any complexity. It would be extremely complex to do that with code alone, and asyncio's event loop doesn't play well with parallelism, so you really only want one per process.
Result: simple, portable Python code that can easily manage monitoring hundreds of servers (sorry, didn't mean for that to sound like ChatGPT, lmao, incidental). The DB for this is memory-based to avoid locking issues. I did use SQLite at first, but even with optimizations there were locking issues. Now I only use SQLite for import/export (checksums).
Not anything special by HN standards but work is here: https://github.com/robertsdotpm/p2pd_server_monitor
I'm at the stage now where I'm adding all the servers to monitor to it. So fun times.
It is a modified version of Shopify CEO Tobi's "try" implementation[0]. It extends his implementation with sandboxing capabilities and is designed with functional core, imperative shell in mind.
I had success using it to manage multiple coding agents at once.
The idea is to enable a comment section on any webpage, right as you’re browsing. Viewing a Zillow listing? See what people are excited about with the property. Wonder what people think about a tourist attraction? It’ll be right there. Want to leave your referral or promo code on a checkout page for others? Post it.
Not sure what the business model will look like just yet. Just the kind of thing I wish existed compared to needing to venture out to a third party (traditional social media / forums etc) to see others’ thoughts on something I’m viewing online. I welcome any feedback!
The main idea is to bring as many of the agentic tools and features into a single cohesive platform as much as possible so that we can unlock more useful AI use-cases.
An agent that plugs into Slack and helps companies identify and remediate infrastructure cost-related issues.
Imagine your basic Excel spreadsheet -> generating document files, but add:
- Other sources like SQL queries
- User form (e.g. "Generate documents for Client Category [?]")
- Chaining sources in order, like SQL queries with parameters based on the user form
- Split at multiple points (5 records in a csv, 4 records in a sql result = 20 generated documents)
- Full Jinja2 templating with field substitution but also if/for blocks that works nicely with .docx files
- PDF output
- Output file names using the same templating: "/BusinessDrive/{{ client_id }}/Invoice - {{ invoice_id}}.pdf"
All saved in reproducible workflows (for example if you need to process a .csv file you receive each morning)
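A stripped-down sketch of the core render-per-record-combination loop (hypothetical data and template; the real tool adds .docx/PDF output, form inputs, and chained SQL sources):

    # Stripped-down sketch: one template rendered per record combination,
    # with the output path itself templated. Hypothetical files and fields.
    import csv, itertools
    from pathlib import Path
    from jinja2 import Template

    body_tmpl = Template("Invoice {{ invoice_id }} for {{ client_name }}: {{ amount }} EUR")
    path_tmpl = Template("out/{{ client_id }}/Invoice - {{ invoice_id }}.txt")

    with open("clients.csv", newline="") as f:
        clients = list(csv.DictReader(f))        # e.g. 5 records from a csv
    invoices = [{"invoice_id": "2024-001", "amount": 120},
                {"invoice_id": "2024-002", "amount": 80}]  # e.g. rows from a SQL query

    # 5 clients x 2 invoices -> 10 generated documents
    for client, invoice in itertools.product(clients, invoices):
        record = {**client, **invoice}
        out_path = Path(path_tmpl.render(**record))
        out_path.parent.mkdir(parents=True, exist_ok=True)
        out_path.write_text(body_tmpl.render(**record))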
(It was supposed to be completed months ago but got stuck in other issues)
Here's the waitlist and proposal: https://waitlist-tx.pages.dev
Fitness Tools https://aretecodex.pages.dev/tools/
Fitness Guides https://aretecodex.pages.dev/
A lot of people often ask questions like:
- How do I lose body fat and build muscle?
- How can I track progress over time?
- How much exercise do I actually need?
- What should my calorie and macro targets be?
One of the most frequently asked questions in fitness forums is about cutting, bulking, or recomposition. This tool helps you navigate those decisions: https://aretecodex.pages.dev/tools/bulk-cut-recomposition-we...
We’ve also got a Meal Planner that generates meal ideas based on your calorie intake and macro split: https://aretecodex.pages.dev/tools/meal-plan-planner
Additionally, I created a TDEE Calculator designed specifically to prevent overshooting TDEE in overweight individuals: https://aretecodex.pages.dev/tools/tdee-calculator
For a deeper dive into the concept of TDEE overshoot in overweight individuals, check out this detailed post: https://www.reddit.com/r/AskFitnessIndia/comments/1mdppx5/in...
It's called lazyslurm - https://github.com/hill/lazyslurm
Would love feedback! <3
Since the last month, I have created a complete schematic with Circuitscript, exported the netlist to pcbnew and designed the PCB. The boards have been produced and currently waiting for them to be delivered to verify that it works. Quite excited since this will be the first design ever produced with Circuitscript as the schematic capture tool!
The motivation for creating Circuitscript is to describe schematics in terms of code rather than graphical UIs after using different CAD packages extensively (Allegro, Altium, KiCAD) in the past. I wanted to spend more time thinking about the schematic design itself rather than fiddling around with GUIs.
The main language goals are to be easy to write and reason about, to display generated graphical schematics according to how the designer wishes (because this is also part of the design process), and to encourage code reuse.
Please check it out and I look forward to your feedback, especially from electronics designers/hobbyists. Thanks!
Just added health inspection data from countries that have it in open datasets (UK and Denmark). If anyone knows of others, I'd appreciate hints.
Thinking of focusing on another idea for the rest of the year; I have a rough idea for a map-based UI to structure history by geofences or lat/lng points for small local museums.
I discovered that "least common ancestor" boils down to the intersection of 'root-path' sets, where you select the last item in the set as the 'first/least common ancestor'.
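In Python, that idea looks roughly like this (a small sketch with a hypothetical parent map; keeping the root path ordered makes the "last common item" mean the deepest shared ancestor):

    # Sketch of the "intersection of root paths" idea: the LCA is the deepest
    # node shared by both root-to-node paths. Hypothetical tree as a parent map.
    def root_path(parents, node):
        """Ordered path from the root down to `node`."""
        path = []
        while node is not None:
            path.append(node)
            node = parents.get(node)
        return list(reversed(path))

    def least_common_ancestor(parents, a, b):
        path_a, on_path_b = root_path(parents, a), set(root_path(parents, b))
        lca = None
        for n in path_a:        # walk down from the root...
            if n in on_path_b:
                lca = n         # ...remembering the last node both paths share
        return lca

    parents = {"B": "A", "C": "A", "D": "B", "E": "B", "F": "C"}  # A is the root
    print(least_common_ancestor(parents, "D", "E"))  # -> B
    print(least_common_ancestor(parents, "D", "F"))  # -> A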