I'm working on Brain Hurricane (brainhurricane.ai). It's the kind of structured tool I wish I'd had earlier in my career. I was tired of unstructured brainstorming sessions that recycled the same ideas, and of passively waiting for a "great idea" that never arrives.
My goal was to create a systematic process. It uses AI to help you generate ideas with proven methods like SCAMPER and Six Thinking Hats, then immediately analyze them with frameworks like SWOT, PESTEL, and the Business Model Canvas. It's about moving from a fuzzy concept to a validated idea with more confidence and clarity.
On a personal level, this project was my way of diving headfirst into modern AI development. I'm building it with Next.js, TypeScript, Python, and Linux, which has been a fun and humbling experience coming from a more traditional enterprise stack.
It's still early, but the core features are live. I'd genuinely appreciate any feedback from the HN community, especially from those who have struggled to turn abstract ideas into something concrete.
Here's the clickable link for anyone interested: https://brainhurricane.ai
It has a generous free tier, a simple API, and an easy-to-use client dashboard. I'm doing my best to build a service that I would love to use myself as a software engineer.
The architecture is deliberately minimal: a ZeroMQ-based broker coordinating worker nodes through a rather spartan protocol that extends MajorDomo. Messages carry UUIDs for correlation, sender/receiver routing, type codes for context-dependent semantics, and optional (but heavily used) payloads. Pipeline definitions live in YAML files (as do worker and client configs), describing multi-step workflows with conditional routing, parallel execution, and wait conditions based on worker responses. Python is the language of the logic layer.
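For illustration, here's roughly what such an envelope could look like as an immutable Python structure; the field names are my guesses for this write-up, not the actual wire format:

    # Hypothetical sketch of the message envelope described above;
    # field names are illustrative guesses, not the real protocol.
    import uuid
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass(frozen=True)
    class Message:
        correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))
        sender: str = ""                 # routing identity of the origin node
        receiver: str = ""               # routing identity of the target node
        type_code: int = 0               # semantics depend on pipeline context
        payload: Optional[bytes] = None  # optional, but very much used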
I am trying to follow the "functional core, imperative shell" philosophy, where each message is essentially an immutable, auditable block in a temporal chain of state transformations. This should enable audit trails, event sourcing, and potentially lossless crash recovery. A built-in blockchain-like verification is something I'm currently researching and could add to the whole pipeline processing.
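A minimal sketch of what that verification could look like, assuming each message records the hash of its predecessor (an assumption for illustration, not a settled design):

    # Toy hash chain over messages: sha256 of (previous hash + canonical
    # JSON body). Purely illustrative.
    import hashlib
    import json

    def chain_hash(prev_hash: str, message: dict) -> str:
        body = json.dumps(message, sort_keys=True).encode()
        return hashlib.sha256(prev_hash.encode() + body).hexdigest()

    def verify(chain: list[dict]) -> bool:
        """Replay the chain and check every stored hash still matches."""
        prev = "genesis"
        for entry in chain:
            if entry["hash"] != chain_hash(prev, entry["message"]):
                return False
            prev = entry["hash"]
        return True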
The hook system provides composable extensibility for all the main user-facing "submodules" through mixin classes, so you only add complexity for features you actually need. The main pillars of functionality (the broker, the worker, and the client, as well as some others) are designed as self-contained monolithic classes (often breaking the DRY principle...). Their additional functionality is composed rather than inherited, via mixins that add behaviour while minimizing the amount of added "state capital" (the accent is on behaviour rather than state management). The user-definable @hook("process_message"), @hook("async_init"), @hook("cleanup"), etc. cross-cut into the lifecycle of each submodule and allow for simple functionality extension.
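A rough sketch of how such a decorator-plus-mixin scheme can be wired; the mechanics here are a simplified guess, only the hook name comes from above:

    # Illustrative hook registry; behaviour is contributed by a mixin,
    # not inherited state. Not the project's actual implementation.
    HOOKS: dict[str, list] = {}

    def hook(event: str):
        def register(fn):
            HOOKS.setdefault(event, []).append(fn)
            return fn
        return register

    class LoggingMixin:
        @hook("process_message")
        def log_message(self, msg):
            print(f"saw message: {msg!r}")

    def run_hooks(event: str, instance, *args):
        for fn in HOOKS.get(event, []):
            fn(instance, *args)

    run_hooks("process_message", LoggingMixin(), {"type_code": 7})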
I'm also implementing a very simple distributed virtual file system with unixoid command patterns (ls, cd, cp, mv, etc.) supporting multiple backends for storage and transfer; i.e., you can simply have your data worker store files it subscribes to in a local folder and have it use its SSH, HTTPS, or FTPS backend to serve them on demand. Data transfers employ per-file-operation ephemeral credentials; the broker only orchestrates the metadata message flow between the sender and receiver of the file(s), while the transfer itself happens between the nodes. The broker is the ultimate and only source of truth when it comes to keeping tabs on file tables; the rest sync, in part or in toto, the actual, physical files themselves. The VFS also features rather rudimentary permission control.
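The ephemeral-credential idea in miniature (names and TTL invented for illustration): the broker mints a short-lived, single-purpose token per file operation, and the receiving node checks it before accepting the transfer.

    # Illustrative only; not the actual credential scheme.
    import secrets
    import time

    TOKEN_TTL = 60  # seconds; one token per file operation (assumed value)

    def mint_token(path: str) -> dict:
        """Broker side: issue a short-lived credential for one file."""
        return {"path": path,
                "token": secrets.token_urlsafe(32),
                "expires": time.time() + TOKEN_TTL}

    def token_valid(cred: dict, path: str) -> bool:
        """Receiving node: accept only a fresh token for the right path."""
        return cred["path"] == path and time.time() < cred["expires"]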
So where's the ML part, you might ask? The framework treats ML models as workers that consume messages and produce outputs, making it trivial to chain preprocessing, inference, postprocessing, fine-tuning, and validation steps into declarative YAML pipelines with human checkpoints at critical decision points. Each pipeline can be client-controlled to run continuously, step by step, or be interrupted at any point in its lifecycle. So each step, or rather each message, is client-verifiable: clients can modify a message and propagate the pipeline with the corrected content, and pipelines can define "on_correction", "on_rejection", and "on_abort" steps for each step along the way, where the endpoints are all "services" that workers need to register. Workers provide services like "whisper_cpp_infer", "bert_foo_finetune_lora", "clean_whitespaces", "openeye_gpt5_validate_local_model_summary", etc.; the broker makes sure the messages flow to the right workers, the workers make sure the message content is correctly processed, and the client (can) make(s) sure the workers did a good job.
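In Python terms, a worker advertising one of those services boils down to something like this (the registration API shown is invented for illustration; only the service name "clean_whitespaces" comes from the list above):

    # Sketch of a worker-side service table.
    def clean_whitespaces(text: str) -> str:
        """Example service: normalise whitespace in a message payload."""
        return " ".join(text.split())

    SERVICES = {"clean_whitespaces": clean_whitespaces}

    def handle(service: str, payload: str) -> str:
        # The broker routes the message here because this worker
        # advertised the service; the worker only transforms content.
        return SERVICES[service](payload)

    print(handle("clean_whitespaces", "  too   many   spaces "))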
Sorry for the wall of text and disclaimer: I'm not a dev, I'm an MD who does a little programming as a hobby (thanks to gen-AI it's easier than ever to build software).
I am trying to use WASM/web workers to execute actions for Git-related workflows (think GitHub Actions, but much lighter). Currently working on OTel-related stuff and a small engine to run distributed tasks on Cloudflare Workers.
This week I’m thinking about whether it makes sense to provide a location history ‘vault’, designed to let users expose their location history to LLMs as context.
No clout-chasing ragebait news or doomscrolling. See updates from your friends and that's it.
site link: https://intimost.com/login/
demo creds:
test@example.com
Demo123!
More context: (show HN) https://news.ycombinator.com/item?id=45721134
This includes a correlation matrix with rolling correlation charts, a minimap, hierarchical clustering, time series detrending, and more. I've improved its design and performance and I'm developing new features to better contextualize the visible subsection relative to the entire dataset.
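For anyone unfamiliar, rolling correlation just means correlating two series over a sliding window; in pandas terms it's something like this (a sketch of the concept, not this app's code, which is frontend):

    # Rolling correlation between two synthetic series, 60-sample window.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    a = pd.Series(rng.standard_normal(500)).cumsum()
    b = a + rng.standard_normal(500) * 5   # noisy companion series

    rolling_corr = a.rolling(window=60).corr(b)
    print(rolling_corr.tail())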
I've also rewritten the entire project in Svelte 5 (there's still a lot of cleanup to do).
Building a tool to check your site's layout and copy across multiple devices. It uses GPT-5 vision to find inconsistencies in headings/images.
https://www.inclusivecolors.com/
It's aimed more at designers right now who have some familiarity with designing color palettes and want to customize everything, but I want to add more hand-holding later. Open to feedback!
It’s still looking pretty rough around the edges.
If this isn't something people want then it should be shut down.
Allows you to listen to live online radio streams.
I wanted something with a minimal and fast UI and none of the other web apps I could find really fit my needs so I built this.
During work I like to listen to online radio, so it seemed like a no-brainer to make this for myself, and if others enjoy it too, even better.
This has been a productive weekend so far. I've recently solved an issue with cron jobs that was driving me mad for ages, and finally feel like I'm close to a first tagged release. I have just popped linting into the GitHub CI.
- The idea: https://carlnewton.github.io/posts/location-based-social-net...
- A build update and plan: https://carlnewton.github.io/posts/building-habitat/
- The repository: https://github.com/carlnewton/habitat
- The project board: https://github.com/users/carlnewton/projects/2
Built entirely with SwiftUI + RealityKit, it’s been an incredible journey into visionOS and spatial computing.
Here’s the TestFlight link: https://testflight.apple.com/join/tWS4CERT
A recipe collection from Eastern spiritual traditions.
If you follow certain traditions, there may be a particular way to eat and cook.
This is the start of collecting them in one place.
Happy to get any feedback :)
In October I finished the PDF parser. It was a big challenge to extract PDF content with correct paragraph breaks locally on the user's computer. I'm gonna write about this soon.
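To give a flavour of the kind of heuristic involved (a simplified toy, not the actual parser): treat a vertical gap noticeably larger than the typical line spacing as a paragraph break.

    # Toy paragraph-break heuristic. Simplified illustration only.
    def split_paragraphs(lines: list[tuple[float, str]], factor: float = 1.5):
        """lines: (top_y, text) pairs in reading order; y grows downward."""
        gaps = [b[0] - a[0] for a, b in zip(lines, lines[1:])]
        typical = sorted(gaps)[len(gaps) // 2] if gaps else 1.0  # median gap
        paragraphs, current, prev_y = [], [], None
        for y, text in lines:
            if prev_y is not None and y - prev_y > factor * typical:
                paragraphs.append(" ".join(current))
                current = []
            current.append(text)
            prev_y = y
        if current:
            paragraphs.append(" ".join(current))
        return paragraphs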
Now I'm working on a web extension that talks to the app running locally on your system, so you can use WithAudio in your browser with very good performance, 100% local and private.
I would be incredibly grateful for any feedback – I'm looking to genuinely improve the experience. Specifically, I'm wondering whether it is easy to use and what it lacks.
I have a huge backlog to cover for this, but so far it has been great fun and I have learnt an incredible range of things!
Current focus: anti-ban strategies for higher-throughput / lower-cost scraping. Trying to identify constraints to calculate feasibility, both technical and financial. This may be slightly controversial here, since many are averse to bots and scraping. I’ve actually increased per-request costs because I suspect scraping will become more restricted and less tolerated over time; the supply-side signals point that way.
Ideas I'm thinking about: since I'm steering away from the high-concurrency / low-cost scraping option, I'm now considering increasing data granularity, expanding retailer coverage, and adding an MCP server to help users query and analyse the e-commerce data they're extracting with the APIs.
Background: I’ve been building this solo from India for about four years. It began as freelancing, then became an API product around a year ago. Today, I have ~90 customers, including a few reputable startups in California. For me the hardest parts are social, not technical or financial: staying connected to US working culture can feel inverted from here. I’ve applied to YC a few times and might again.