frontpage.
Show HN: Remotion directory (videos and prompts)

https://www.remotion.directory/
1•rokbenko•2m ago•0 comments

Portable C Compiler

https://en.wikipedia.org/wiki/Portable_C_Compiler
1•guerrilla•4m ago•0 comments

Show HN: Kokki – A "Dual-Core" System Prompt to Reduce LLM Hallucinations

1•Ginsabo•4m ago•0 comments

Software Engineering Transformation 2026

https://mfranc.com/blog/ai-2026/
1•michal-franc•6m ago•0 comments

Microsoft purges Win11 printer drivers, devices on borrowed time

https://www.tomshardware.com/peripherals/printers/microsoft-stops-distrubitng-legacy-v3-and-v4-pr...
2•rolph•6m ago•0 comments

Lunch with the FT: Tarek Mansour

https://www.ft.com/content/a4cebf4c-c26c-48bb-82c8-5701d8256282
2•hhs•9m ago•0 comments

Old Mexico and her lost provinces (1883)

https://www.gutenberg.org/cache/epub/77881/pg77881-images.html
1•petethomas•13m ago•0 comments

'AI' is a dick move, redux

https://www.baldurbjarnason.com/notes/2026/note-on-debating-llm-fans/
2•cratermoon•14m ago•0 comments

The source code was the moat. But not anymore

https://philipotoole.com/the-source-code-was-the-moat-no-longer/
1•otoolep•14m ago•0 comments

Does anyone else feel like their inbox has become their job?

1•cfata•14m ago•0 comments

An AI model that can read and diagnose a brain MRI in seconds

https://www.michiganmedicine.org/health-lab/ai-model-can-read-and-diagnose-brain-mri-seconds
2•hhs•17m ago•0 comments

Dev with 5 years of experience switched to Rails, what should I be careful about?

1•vampiregrey•20m ago•0 comments

AlphaFace: High Fidelity and Real-Time Face Swapper Robust to Facial Pose

https://arxiv.org/abs/2601.16429
1•PaulHoule•21m ago•0 comments

Scientists discover “levitating” time crystals that you can hold in your hand

https://www.nyu.edu/about/news-publications/news/2026/february/scientists-discover--levitating--t...
2•hhs•23m ago•0 comments

Rammstein – Deutschland (C64 Cover, Real SID, 8-bit – 2019) [video]

https://www.youtube.com/watch?v=3VReIuv1GFo
1•erickhill•23m ago•0 comments

Tell HN: Yet Another Round of Zendesk Spam

2•Philpax•23m ago•0 comments

Postgres Message Queue (PGMQ)

https://github.com/pgmq/pgmq
1•Lwrless•27m ago•0 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
1•cui•30m ago•1 comments

NY lawmakers proposed statewide data center moratorium

https://www.niagara-gazette.com/news/local_news/ny-lawmakers-proposed-statewide-data-center-morat...
1•geox•31m ago•0 comments

OpenClaw AI chatbots are running amok – these scientists are listening in

https://www.nature.com/articles/d41586-026-00370-w
3•EA-3167•32m ago•0 comments

Show HN: AI agent forgets user preferences every session. This fixes it

https://www.pref0.com/
6•fliellerjulian•34m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model

https://github.com/ghostty-org/ghostty/pull/10559
2•DustinEchoes•36m ago•0 comments

Show HN: SSHcode – Always-On Claude Code/OpenCode over Tailscale and Hetzner

https://github.com/sultanvaliyev/sshcode
1•sultanvaliyev•36m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/microsoft-appointed-a-quality-czar-he-has-no-direct-reports-and-no-b...
2•RickJWagner•38m ago•0 comments

Multi-agent coordination on Claude Code: 8 production pain points and patterns

https://gist.github.com/sigalovskinick/6cc1cef061f76b7edd198e0ebc863397
1•nikolasi•38m ago•0 comments

Washington Post CEO Will Lewis Steps Down After Stormy Tenure

https://www.nytimes.com/2026/02/07/technology/washington-post-will-lewis.html
13•jbegley•39m ago•3 comments

DevXT – Building the Future with AI That Acts

https://devxt.com
2•superpecmuscles•40m ago•4 comments

A Minimal OpenClaw Built with the OpenCode SDK

https://github.com/CefBoud/MonClaw
1•cefboud•40m ago•0 comments

The silent death of Good Code

https://amit.prasad.me/blog/rip-good-code
3•amitprasad•40m ago•0 comments

The Internal Negotiation You Have When Your Heart Rate Gets Uncomfortable

https://www.vo2maxpro.com/blog/internal-negotiation-heart-rate
1•GoodluckH•42m ago•0 comments

Launch HN: Promi (YC S24) – Personalize e-commerce discounts and retail offers

25•pmoot•6mo ago
Hey HN! I'm Peter from Promi (https://www.promi.ai/). We're building a platform for e-commerce merchants to send real-time personalized discounts, optimized with AI (obviously).

Demo: https://youtu.be/BCYNCqb4fUc, Sales Video: https://www.youtube.com/watch?v=WiO1S7RBn-o

All the big tech companies send personalized discounts - Uber, DoorDash, Google, etc. In fact, I was the product lead overseeing discounts at Uber, so if you've gotten a promotion on Uber Rides or Eats, that was our tech. These personalization models often generate >30% more revenue vs. non-personalized discounts (cost-neutral, that is), so this is a hugely impactful product.

It’s no surprise then that other merchants want to follow suit. Merchants don’t want to waste discounts on customers who would have purchased anyway. Frankly it’s not a new idea to offer a software solution to personalize discounts - plenty of other startups have entered this space with a similar product.

The biggest problem with personalizing discounts for mid-size and smaller companies has been that you traditionally rely on 'explore' data - data from randomly sending discounts to a portion of the user base. But this approach has a lot of problems: merchants need to be large, collecting the data is expensive, training data really should be fresh (so explores need to run constantly), and if you want to try a different discount structure (e.g. BOGO instead of 20% off) you'll need to run a new explore with the new structure.
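
For readers unfamiliar with the term, here is a minimal sketch of what an 'explore' looks like in practice (not from the post; the rate, discount, and names are made up): a random slice of traffic gets the discount regardless of who they are, purely to generate training examples, which is why it is expensive and goes stale.

```python
import random

# Hypothetical illustration (not Promi's code) of the traditional "explore":
# a random slice of visitors gets a discount regardless of who they are,
# purely to log (visitor, discount, converted?) training examples.
EXPLORE_RATE = 0.05           # fraction of traffic sacrificed to exploration
EXPLORE_DISCOUNT_PCT = 20     # the one structure being tested (e.g. 20% off)

def assign_offer(visitor_id: str) -> int:
    """Return the discount percentage to show this visitor (0 = none)."""
    if random.random() < EXPLORE_RATE:
        # This discount is handed out at random, so part of it is pure cost.
        return EXPLORE_DISCOUNT_PCT
    return 0

print(assign_offer("visitor-123"))
```

Switching to a different structure (say, BOGO) means running a fresh explore like this one, which is exactly the cost described above.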

So what does Promi do differently? We train on regular traffic and simplify the problem by focusing on conversion rate. If we can accurately predict who is unlikely to convert and which products are unlikely to be bought, we can issue discounts without the fear of burning money on an order that would have happened anyway. One of my major takeaways from my time at Uber was that our model was mostly targeting users who had a low likelihood of converting in a given week. Quantifying how much more likely they were to convert when given a discount via explores was helpful, but not as impactful as understanding the starting conversion rate.
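
A rough sketch of that idea, with made-up features, weights, and cutoff (Promi's actual model is a neural network, per the comments below, not this toy logistic scorer): score each visit's no-discount conversion probability from regular-traffic signals, and only spend discounts where that probability is low.

```python
import math
from dataclasses import dataclass

@dataclass
class Visit:
    # Hypothetical features; the real model uses more signals.
    traffic_source: str             # "ads", "email", "direct", "search", ...
    device: str                     # "mobile" or "desktop"
    past_purchases: int
    product_conversion_rate: float  # shop-level baseline for the viewed product

# Toy weights for illustration only.
SOURCE_WEIGHT = {"direct": 1.2, "email": 0.6, "search": 0.3, "ads": -0.8}

def conversion_probability(v: Visit) -> float:
    """Toy logistic score: how likely is this visit to convert with no discount?"""
    z = (
        -2.0
        + SOURCE_WEIGHT.get(v.traffic_source, 0.0)
        + 0.4 * v.past_purchases
        + (0.5 if v.device == "desktop" else 0.0)
        + 3.0 * v.product_conversion_rate
    )
    return 1.0 / (1.0 + math.exp(-z))

def should_discount(v: Visit, cutoff: float = 0.10) -> bool:
    """Only spend a discount on visits unlikely to convert on their own."""
    return conversion_probability(v) < cutoff

print(should_discount(Visit("ads", "mobile", 0, 0.02)))      # True: low-intent ad click
print(should_discount(Visit("direct", "desktop", 3, 0.10)))  # False: repeat direct visitor
```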

Side note - it's been a bit interesting launching, during this hype cycle, an AI company that isn't actually using the latest and greatest LLMs. We believe more traditional machine learning still has a lot of value to add. I don't want to say we won't use LLMs down the road (there may be some interesting applications for developing additional features), but starting this way has worked out well for us.

There have been plenty of other challenges (as with any startup). We've had to figure out how to automate integrations when so many websites have custom code. We've had to make the model work without rich user data, since the majority of website visitors aren't logged in. A quick note on this one - we can use first-party cookies to more or less track view and transaction history, but we've found that one big predictor of conversion is traffic source: whether a visitor is coming from ads, email, direct traffic, Google search, etc. Traffic source isn't as valuable a signal at Uber (since everyone uses the app), so it's been a bit of a tradeoff in the types of features that are most impactful.

Our model seems to be working well! We have case studies on our website showing the typical revenue and profit lift we see. We currently have tiered pricing with different quotas for the amount of revenue managed by Promi discounts.

I'd love to get thoughts from the machine learning experts in this community, though, full disclosure, I'm the non-technical founder. Let us know what you think!

Comments

lazyninja987•6mo ago
Does a merchant have to give your tool access to their user data to generate personalized discounts? Apart from user activity data, what data do you need for maximum effectiveness?
pmoot•6mo ago
Yes. We're going through Shopify, so merchants agree to terms when they install the app.

There's user activity data, but also contextual data and shop data that we use. 'Contextual' data refers to things like device type, traffic source, time of day, day of week (there have been some interesting trends with corporate vs. non-corporate customers in this one).

Shop data includes things like product profit margin and product conversion rate. Obviously we can go deeper with our discounts on products that are very profitable, and it's typically more efficient to give a discount on products with lower conversion. Merchants also like boosting items that haven't been selling well.
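
To make those three buckets concrete, here is a hypothetical feature row combining them; the field names are illustrative, not Promi's actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime
from typing import Optional

@dataclass
class DiscountFeatures:
    # User activity data (first-party cookie view/order history)
    pages_viewed: int
    past_orders: int
    # Contextual data
    device_type: str        # "mobile" / "desktop"
    traffic_source: str     # "ads", "email", "direct", "search"
    hour_of_day: int
    day_of_week: int        # 0 = Monday
    # Shop data
    product_margin: float           # profit margin of the viewed product
    product_conversion_rate: float  # baseline conversion rate of that product

def build_features(pages_viewed: int, past_orders: int, device_type: str,
                   traffic_source: str, product_margin: float,
                   product_conversion_rate: float,
                   now: Optional[datetime] = None) -> dict:
    """Assemble one model input row from the three data buckets."""
    now = now or datetime.utcnow()
    return asdict(DiscountFeatures(
        pages_viewed, past_orders, device_type, traffic_source,
        now.hour, now.weekday(), product_margin, product_conversion_rate,
    ))

print(build_features(3, 0, "mobile", "ads", 0.55, 0.02))
```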

klaussilveira•6mo ago
> If we can accurately predict who is unlikely to convert

Do you use historical purchase data to make that assumption? Or someone who frequently abandons carts?

pmoot•6mo ago
We use historical purchase data, as well as view history, traffic source, device type, etc.

Traffic source is often the most impactful. People coming from ads are often more in a browsing mindset, whereas people typing in the URL directly have higher purchase intent.

We don't have abandoned cart rate as a feature in our model, but it might actually be something worth looking into adding.

malshe•6mo ago
If I understand it correctly, you estimate the probability of purchase given the user characteristics, behaviors, etc. If this probability is below a cutoff, you offer a discount. Did I get it right?

Is the cutoff itself a function of other variables in the data?

pmoot•6mo ago
Yes, that's mostly right. We also vary the discount value, so it's less a binary discount/no discount and more a range. There is often a cutoff though. Merchants can input a hard cutoff e.g. if they want to ensure everyone gets a discount (great if they also have marketing assets for a sale), or if they want to avoid making their sites feel too 'sales-y'. Otherwise the cutoff is defined by conversion prediction, inventory levels, and a few other inputs.

There's actually a lot more we could do to make this cutoff more intelligent though - e.g. at Uber the cutoff was set to exhaust a certain promotional budget. Or we could target a specific ROI if we eventually have good enough predictions.
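
A tiny sketch of that budget-exhausting cutoff idea (numbers and names are hypothetical, not Uber's or Promi's implementation): rank sessions by predicted conversion, lowest first, and keep granting discounts until the expected promotional spend hits the budget.

```python
def pick_discount_recipients(sessions, budget):
    """
    sessions: list of (session_id, predicted_conversion, expected_discount_cost)
    budget:   total promotional spend we are willing to exhaust
    Returns the session_ids that should be offered a discount.
    """
    recipients, spend = [], 0.0
    # Least likely to convert first: these orders are the most incremental.
    for session_id, p_convert, cost in sorted(sessions, key=lambda s: s[1]):
        if spend + cost > budget:
            break
        recipients.append(session_id)
        spend += cost
    return recipients

sessions = [
    ("a", 0.02, 4.0),   # low intent, cheap to incentivize
    ("b", 0.40, 6.0),   # high intent, would likely buy anyway
    ("c", 0.05, 5.0),
]
print(pick_discount_recipients(sessions, budget=10.0))  # ['a', 'c']
```

Targeting a specific ROI instead would swap the stopping rule for one based on expected incremental profit per discount dollar.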

malshe•6mo ago
Thanks for the reply. Do you use Bayesian models for this? Btw, Pete Fader[1] has done so much work in customer valuation where estimating the probability of purchase is a crucial aspect. Maybe you already use them.

[1] https://marketing.wharton.upenn.edu/profile/faderp/#overview

pmoot•6mo ago
We're using a neural network, not a bayesian model. And we haven't used Pete Fader's work, but thank you for the resource.
kristianc•6mo ago
> We’ve had to make the model work without rich user data since the majority of website visitors aren’t logged in.

Be aware that this sentence largely disqualifies you from doing business in Europe.

pmoot•6mo ago
We faced a similar issue with GDPR at Uber. We will definitely need to be careful, but many merchants already have customer opt-ins (e.g. the cookie consent pop-up) for data processing that we should ideally be able to piggyback on.