
Launch HN: Plexe (YC X25) – Build production-grade ML models from prompts

https://www.plexe.ai/
43•vaibhavdubey97•3h ago
Hey HN! We're Vaibhav and Marcello, founders of Plexe (https://www.plexe.ai). We create production-ready ML models from natural language descriptions. Tell Plexe what ML problem you want to solve, point it at your data, and it handles the entire pipeline from feature engineering to deployment.

Here’s a walkthrough: https://www.youtube.com/watch?v=TbOfx6UPuX4.

ML teams waste too much time on generic heavy lifting. Every project follows the same pattern: 20% understanding objectives, 60% wrangling data and engineering features, 20% experimenting with models. Most of this is formulaic but burns months of engineering time. Throwing LLMs at it isn't the answer; that just trades engineering time for compute costs and worse accuracy. Plexe automates this repetitive 80% so your team can move faster on the work that actually adds value.

You describe your problem in plain English ("fraud detection model for transactions" or "product embedding model for search"), connect your data (Postgres, Snowflake, S3, direct upload, etc.), and then Plexe:

- Analyzes your data and engineers features automatically
- Runs experiments across multiple architectures (logistic regression to neural nets)
- Generates evaluation reports with error analysis, robustness testing, and prioritized, actionable recommendations
- Deploys the best model with monitoring and automatic retraining
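
(To make the automated part concrete, here is a minimal hand-rolled sketch of the experiment loop being replaced, using scikit-learn and hypothetical column names; it is not Plexe's generated code.)

    # Hand-rolled sketch of the boilerplate being automated (hypothetical columns,
    # not Plexe-generated code). Assumes numeric features after basic encoding.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import classification_report

    df = pd.read_csv("transactions.csv")              # hypothetical dataset
    X, y = df.drop(columns=["is_fraud"]), df["is_fraud"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)

    # Try a few architectures (logistic regression to neural nets), keep the best.
    candidates = {
        "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        "neural_net": make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64, 32))),
    }
    for name, model in candidates.items():
        model.fit(X_train, y_train)
        print(name)
        print(classification_report(y_test, model.predict(X_test)))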

We did a Show HN for our open-source library five months ago (https://news.ycombinator.com/item?id=43906346). Since then, we've launched our commercial platform with interactive refinement, production-grade model evaluations, retraining pipeline, data connectors, analytics dashboards, and deployment for online and batch inference.

We use a multi-agent architecture where specialized agents handle different pipeline stages. Each agent focuses on its domain: data analysis, feature engineering, model selection, deployment, and so on. The platform tracks all experiments and generates exportable Python code.
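
(A conceptual sketch of what "specialized agents per stage" can look like; the stage names come from the description above, but the code is illustrative only, not Plexe's actual implementation.)

    # Conceptual sketch of a staged multi-agent pipeline -- illustrative only,
    # not Plexe's actual implementation.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class PipelineState:
        """Artifacts accumulated as each agent completes its stage."""
        artifacts: dict = field(default_factory=dict)

    def data_analysis_agent(state: PipelineState) -> None:
        state.artifacts["profile"] = {"rows": 100_000, "missing_ratio": 0.02}  # placeholder

    def feature_engineering_agent(state: PipelineState) -> None:
        state.artifacts["features"] = ["amount_log", "merchant_freq"]          # placeholder

    def model_selection_agent(state: PipelineState) -> None:
        state.artifacts["best_model"] = "logistic_regression"                  # placeholder

    def deployment_agent(state: PipelineState) -> None:
        state.artifacts["endpoint"] = "/infer"                                 # placeholder

    # Each specialized agent owns one stage and runs in order, handing off state.
    STAGES: List[Callable[[PipelineState], None]] = [
        data_analysis_agent,
        feature_engineering_agent,
        model_selection_agent,
        deployment_agent,
    ]

    state = PipelineState()
    for agent in STAGES:
        agent(state)
    print(state.artifacts)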

Our open-source core (https://github.com/plexe-ai/plexe, Apache 2.0) remains free for local development. For the paid product, pricing is usage-based, with a minimum top-up of $10. Enterprises can self-host the entire platform. You can sign up at https://console.plexe.ai. Use promo code `LAUNCHDAY20` to get $20 to try out the platform.

We’d love to hear your thoughts on the problem and feedback on the platform!

Comments

johnsillings•2h ago
Very cool – I like how opinionated the product approach is vs. a bunch of disconnected tools for specialists to use (which seems more common in this space).
marcellodb•2h ago
Thanks, we're pretty opinionated on "this should make sense to non-ML practitioners" being a defining aspect of the product vision. Behind the scenes, we've had quite a few conversations specifically about how to avoid features feeling "disconnected", which is always challenging at an early stage when you're getting pulled in several directions by users with different use cases. Happy to hear it came across that way to you.
oxml•1h ago
Great product!
vaibhavdubey97•1h ago
Thank you! :)
tnt128•1h ago
In the demo, you didn't show the process of cleaning and labeling data. Does your product handle that somehow, or do you still expect the user to provide it after connecting the data source?
vaibhavdubey97•55m ago
We have a data enricher feature (still in beta) which uses LLMs to generate labels for your data. For cleaning and feature engineering, we use agents that handle it automatically once you've connected your data and defined your ML problem.

P.S. Thanks for the feedback on the video! We'll update it to show the cleaning and labelling process :)
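
(Roughly, LLM-assisted labeling looks like the sketch below; `complete` is a stand-in for whatever LLM client you use, and the columns and labels are hypothetical, not the enricher's actual implementation.)

    # Rough sketch of LLM-assisted labeling. `complete` is a stand-in for a real
    # LLM client call; columns and labels are hypothetical.
    import pandas as pd

    ALLOWED_LABELS = ["fraud", "legit"]

    def complete(prompt: str) -> str:
        # Replace with a real LLM call (OpenAI, Anthropic, a local model, ...).
        return "legit"

    def label_row(row: pd.Series) -> str:
        prompt = (
            f"Classify this transaction as one of {ALLOWED_LABELS}.\n"
            f"Transaction: {row.to_dict()}\n"
            "Answer with the label only."
        )
        answer = complete(prompt).strip().lower()
        return answer if answer in ALLOWED_LABELS else "unknown"

    df = pd.read_csv("unlabeled_transactions.csv")   # hypothetical dataset
    df["label"] = df.apply(label_row, axis=1)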

marcellodb•54m ago
Great question, this is super important. The agents in the platform can do some degree of cleaning on your data when building a model (for example, imputing missing values). However, major improvements to data quality are generally not possible without an understanding of the data domain (i.e. business context), so you'll get better results if you "help" the platform by providing data in a reasonably clean state, answering the agent's follow-up questions in the chat, and so on. That extra context helps the agent understand your data, which in turn makes it more capable of dealing with things like missing values, misnamed columns, etc.

This also highlights the important role of the user as a (potentially non-technical) domain expert. Hope that makes sense!
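
(To make "some degree of cleaning" concrete: a minimal pandas sketch, with hypothetical columns, of the context-free fixes an agent can apply on its own; anything that needs business meaning is where the user's answers come in.)

    # Minimal sketch of context-free cleaning (hypothetical column names).
    import pandas as pd

    df = pd.read_csv("transactions.csv")

    # Mechanical fixes an agent can make without domain knowledge:
    df["amount"] = df["amount"].fillna(df["amount"].median())   # impute numeric gaps
    df["country"] = df["country"].fillna("unknown")             # impute categorical gaps
    df = df.drop_duplicates()

    # Whether a negative amount is a refund or bad data is business context --
    # that's the kind of question the agent asks the user in the chat.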

brightstar18•1h ago
Product seems cool. But can you help me understand if what you are doing is different from the following:

1. You put in a prompt
2. Plexe glorifies that prompt into a bigger prompt with more specific instructions (augmented by schema definitions, intent, and whatnot)
3. That gets plugged into the provided model/LLM
4. .predict() gives me the output (heavily guardrailed by the glorified prompt in step 2)
marcellodb•1h ago
Great question, and yes, it's quite different: Plexe generates code for a pipeline that processes your dataset (analysis, feature engineering, etc) and trains a custom ML model for your use case. When you call `.predict()`, it is that trained custom model that provides the response, not an LLM. The model is also hosted for you, and Plexe takes care of MLOps things like letting you retrain the model on new data, evaluating the model performance for you, etc. Using custom specialised models is generally more effective, faster and cheaper compared to running your predictions through an LLM when you have a lot of data specific to your business.
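
(Concretely, once the custom model is deployed, a prediction is an HTTP call to the hosted endpoint, the same shape as the sample curl further down the thread; the field names here are hypothetical.)

    # Calling the deployed custom model over HTTP (hypothetical field names;
    # the endpoint placeholder matches the sample curl elsewhere in the thread).
    import requests

    resp = requests.post(
        "https://XXX/infer",
        headers={"x-api-key": "YOUR_API_KEY"},
        json={"amount": 129.99, "merchant": "acme", "country": "GB"},
        timeout=30,
    )
    print(resp.json())   # the response comes from the trained model, not an LLM
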
ryanmerket•46m ago
Really diggin this. Can't wait to try it out.
vaibhavdubey97•11m ago
Thanks a lot! Excited for you to try it out and get your feedback :)
sinanuozdemir•43m ago
Sounds interesting! I'm trying to train a model and it's still "processing" after a bit, but I get it, fine-tuning takes a while. I'm having trouble understanding how it's inferring the schema. I used a sample dataset, and yet the sample inference curl uses a blank json?

    curl -X POST "XXX/infer" \
      -H "Content-Type: application/json" \
      -H "x-api-key: YOUR_API_KEY" \
      -d '{}'

How do I know what the inputs/outputs are for one of my models? I see I could have set the response variable manually before training but I was hoping the auto-infer would work.

Separately, it'd be ideal if, when I ask for a model you can't train (I asked for an embedding model as a test), the platform told me it couldn't do that instead of making me choose a dataset that has nothing to do with what I asked for.

All in all, super cool space, I can't wait to see more!

I'm a former YC founder turned investor living in Dogpatch. I'd love to chat more if you're down!

marcellodb•19m ago
Thanks for the great feedback! To your points:

1. Depending on your dataset, training can take anywhere from 45 minutes to a few hours. We do need to add an ETA for the build in the UI.

2. The input schema is inferred towards the end of the model-building process, not right at the start, because the final schema depends on decisions made about input features, model architecture, etc. during the build. You should see the sample curl update soon with actual input fields.

3. Great point about rejecting builds up front for model types we don't yet support. We'll be sure to add this soon!

We're in London at the moment, but we'd love to connect with you and/or meet in person next time we're in SF - drop us a note on LinkedIn or something :)

vaibhavdubey97•11m ago
Thanks for the great feedback! We've added a `baseline_deployed` status where the agents create an initial baseline and deploy it so you have something to play around with quickly. This is why you're seeing a blank json there. Once your final model is deployed, it creates an input and output schema from the features used for the model build :)
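
(In other words, the sample request body goes from empty while the baseline is deployed to schema-driven once the final model lands; the fields below are hypothetical.)

    # Hypothetical illustration of the request body before and after the final deploy.
    baseline_request = {}   # baseline_deployed: input schema not yet inferred
    final_request = {       # final model: fields come from its inferred input schema
        "amount": 129.99,
        "merchant": "acme",
        "country": "GB",
    }
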
lcnlvrz•37m ago
How does it perform when building computer vision models?
marcellodb•24m ago
Unfortunately we don't officially support image, video or audio yet - only tabular data for now. We do plan to add that capability at some point in the coming weeks depending on popular demand. Do you have any particular use case in mind?

Caveat: as a more technical user, you can currently "hack" around this limitation by storing your images as byte arrays in a parquet file, in which case the platform can ingest your data and train a CV model for you. We haven't tested the performance extensively though, so your mileage may vary here.
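
(For the parquet workaround, a minimal sketch with hypothetical paths and labels of packing images as byte arrays.)

    # Minimal sketch: pack images as byte arrays in a parquet file
    # (hypothetical paths and label scheme).
    from pathlib import Path
    import pandas as pd

    rows = []
    for path in Path("images").glob("*.jpg"):
        rows.append({
            "image_bytes": path.read_bytes(),     # raw JPEG bytes
            "label": path.stem.split("_")[0],     # hypothetical: label encoded in filename
        })

    pd.DataFrame(rows).to_parquet("images.parquet")   # needs pyarrow or fastparquet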

Pg_lake: Postgres with Iceberg and data lake access

https://github.com/Snowflake-Labs/pg_lake
202•plaur782•4h ago•62 comments

NoLongerEvil-Thermostat – Nest Generation 1 and 2 Firmware

https://github.com/codykociemba/NoLongerEvil-Thermostat
90•mukti•3h ago•17 comments

Codemaps: Understand Code, Before You Vibe It

https://cognition.ai/blog/codemaps
90•janpio•2h ago•14 comments

Show HN: A CSS-Only Terrain Generator

https://terra.layoutit.com
216•rofko•6h ago•67 comments

Whole Earth Index

https://wholeearth.info/
33•bookofjoe•1w ago•3 comments

Launch HN: Plexe (YC X25) – Build production-grade ML models from prompts

https://www.plexe.ai/
43•vaibhavdubey97•3h ago•16 comments

We're open-sourcing the successor of Jupyter notebook

https://deepnote.com/blog/were-open-sourcing-the-successor-of-jupyter-notebook
125•zX41ZdbW•2h ago•100 comments

Normalize Identifying Corporate Devices in Your Software

https://lgug2z.com/articles/normalize-identifying-corporate-devices-in-your-software/
44•Bogdanp•6d ago•26 comments

What is a manifold?

https://www.quantamagazine.org/what-is-a-manifold-20251103/
292•isaacfrond•10h ago•90 comments

Recovering videos from my Sony camera that I stupidly deleted

https://www.jeffgeerling.com/blog/2025/recovering-videos-my-sony-camera-i-stupidly-deleted
54•speckx•1w ago•23 comments

Optimizing Datalog for the GPU

https://danglingpointers.substack.com/p/optimizing-datalog-for-the-gpu
81•blakepelton•5h ago•14 comments

By the Power of Grayscale

https://zserge.com/posts/grayskull/
7•surprisetalk•4d ago•1 comment

How devtools map minified JS code back to your TypeScript source code

https://www.polarsignals.com/blog/posts/2025/11/04/javascript-source-maps-internals
40•manojvivek•5h ago•9 comments

This Day in 1988, the Morris worm infected 10% of the Internet within 24 hours

https://www.tomshardware.com/tech-industry/cyber-security/on-this-day-in-1988-the-morris-worm-sli...
159•canucker2016•5h ago•96 comments

Chaining FFmpeg with a Browser Agent

https://100x.bot/a/chaining-ffmpeg-with-browser-agent
76•shardullavekar•7h ago•39 comments

My Truck Desk

https://www.theparisreview.org/blog/2025/10/29/truck-desk/
368•zdw•17h ago•89 comments

Bloom filters are good for search that does not scale

https://notpeerreviewed.com/blog/bloom-filters/
145•birdculture•11h ago•31 comments

Customize Nano Text Editor

https://shafi.ddns.net/blog/customize-nano-text-editor
100•shafiemoji•1w ago•41 comments

Tell HN: X is opening any tweet link in a webview whether you press it or not

432•stillatit•14h ago•399 comments

Cheaper MacBook powered by iPhone chip coming in 2026, per new report

https://9to5mac.com/2025/11/04/cheaper-macbook-powered-by-iphone-chip-coming-in-2026-per-new-report/
13•spurgu•52m ago•10 comments

Aisuru botnet shifts from DDoS to residential proxies

https://krebsonsecurity.com/2025/10/aisuru-botnet-shifts-from-ddos-to-residential-proxies/
51•feross•6d ago•18 comments

The 512KB Club

https://512kb.club/
101•lr0•4h ago•52 comments

Things you can do with diodes

https://lcamtuf.substack.com/p/things-you-can-do-with-diodes
347•zdw•20h ago•99 comments

AI's Dial-Up Era

https://www.wreflection.com/p/ai-dial-up-era
427•nowflux•23h ago•385 comments

You can't cURL a Border

https://drobinin.com/posts/you-cant-curl-a-border/
412•valzevul•19h ago•221 comments

Show HN: I built a local-first daily planner for iOS

https://apps.apple.com/ca/app/to-do-list-planner-zesfy/id6479947874
67•zesfy•6h ago•48 comments

When stick figures fought

https://animationobsessive.substack.com/p/when-stick-figures-fought
314•ani_obsessive•19h ago•117 comments

Tenacity – a multi-track audio editor/recorder

https://tenacityaudio.org
118•smartmic•1w ago•34 comments

Data breach at major Swedish software supplier impacts 1.5M

https://www.bleepingcomputer.com/news/security/data-breach-at-major-swedish-software-supplier-imp...
35•fleahunter•3h ago•11 comments

Reverse-engineered CUPS driver for Phomemo receipt/label printers

https://github.com/vivier/phomemo-tools
78•Curiositry•1w ago•21 comments