
Curl, Reverse Engineered by AI

https://www.delphoslabs.com/uploads/660de9f1-edec-4c20-b1bc-b17b686f60c7/3e40ecea-3cd1-49a7-9e53-1c8d5337245b
1•mooreds•1m ago•0 comments

Show HN: StopAddict – A gamified, minimalist tracker to quit addictions

1•skyzouw•4m ago•1 comments

Microsoft confirms that Windows 11 version 25H2 is coming later this year

https://techcommunity.microsoft.com/blog/windows-itpro-blog/get-ready-for-windows-11-version-25h2/4426437
1•CHEF-KOCH•6m ago•0 comments

Show HN: Query your Rust codebase and generate types for anything

https://github.com/reachingforthejack/rtk
2•reaching4jack•8m ago•0 comments

I made Stable Diffusion free for everyone – Web UI, no signup

https://zenthara.art
1•itfourall•9m ago•1 comments

JavaScript Trademark Update

https://deno.com/blog/deno-v-oracle4
2•thebeardisred•11m ago•0 comments

H1-B visas hurt one type of worker and exploit another

https://www.sanders.senate.gov/op-eds/h1-b-visas-hurt-one-type-of-worker-and-exploit-another-this-mess-must-be-fixed/
3•1vuio0pswjnm7•13m ago•1 comments

The Alliance Treaty Obligations and Provisions Project

http://www.atopdata.org/
1•Tomte•16m ago•0 comments

Joint Military Exercises Dataset (2021)

https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/HXQFHU
1•Tomte•16m ago•0 comments

Show HN: Open-Source outcome- / usage-based billing engine for AI Agents

https://github.com/frozen-labs/frost.ai
2•florentmsl•20m ago•0 comments

Ask HN: What's a RSS feed you would recommend?

2•jtwoodhouse•21m ago•3 comments

Life of an inference request (vLLM V1): How LLMs are served efficiently at scale

https://www.ubicloud.com/blog/life-of-an-inference-request-vllm-v1
2•samaysharma•21m ago•0 comments

Sinaloa cartel used phone data and surveillance cameras to find FBI informants

https://www.reuters.com/world/americas/sinaloa-cartel-hacked-phones-surveillance-cameras-find-fbi-informants-doj-says-2025-06-27/
2•_tk_•25m ago•0 comments

I vibecoded an ASCII generator called niceascii.com

https://niceascii.com/
1•piiijt•27m ago•0 comments

Microsoft for Startups now capped to $5k without an investor affiliation

https://learn.microsoft.com/en-us/microsoft-for-startups/changes-microsoft-for-startups
2•zadams•28m ago•1 comments

How Field Notes Went from Side Project to Cult Notebook

https://www.fastcompany.com/91352848/field-notes-cult-notebook-started-out-as-a-side-project
1•bookofjoe•29m ago•0 comments

Getty drops primary claim against Stable Diffusion

https://www.pcgamer.com/hardware/getty-drops-primary-claims-against-stable-diffusion-in-ai-lawsuit-after-failing-to-establish-a-sufficient-connection-between-the-infringing-acts-and-the-uk-jurisdiction-for-copyright-law-to-bite/
2•diamondage•30m ago•1 comments

SQLite Release 3.50.2 On 2025-06-28

https://sqlite.org/releaselog/3_50_2.html
1•nabla9•35m ago•1 comments

Stablecoins go mainstream: Why banks and credit card firms are issuing crypto

https://www.cnbc.com/2025/06/28/stablecoin-visa-mastercard-circle-jpmorgan.html
2•rntn•39m ago•0 comments

Meta Spends $14B to Hire a Single Guy

https://theahura.substack.com/p/tech-things-meta-spends-14b-to-hire
2•theahura•46m ago•2 comments

Making a $20 smart boombox [video]

https://www.youtube.com/watch?v=P3XCPywlXBI
1•surprisetalk•48m ago•0 comments

OnlyFans Transformed Porn

https://www.economist.com/business/2025/06/24/how-onlyfans-transformed-porn
1•_tk_•48m ago•0 comments

Astronomers Detected a Mysterious Radio Burst from a Dead NASA Satellite

https://www.smithsonianmag.com/smart-news/astronomers-detected-a-mysterious-radio-burst-it-turned-out-to-be-from-a-dead-nasa-satellite-180986884/
1•cratermoon•49m ago•0 comments

Use Plain Text Email

https://useplaintext.email/
4•cyrc•52m ago•2 comments

Clickclickclick: Framework to enable autonomous, computer use using any LLM

https://github.com/BandarLabs/clickclickclick
1•thunderbong•52m ago•0 comments

The end of Stop Killing Games [video]

https://www.youtube.com/watch?v=HIfRLujXtUo
2•st_goliath•55m ago•1 comments

New virtual try on model family that seems to be SOTA

https://huggingface.co/spaces/sm4ll-VTON/sm4ll-VTON-Demo
3•duchamp_s•56m ago•2 comments

Getting weather data from my Acurite sensors was shockingly easy

https://www.jeffgeerling.com/blog/2025/getting-weather-data-my-acurite-sensors-was-shockingly-easy
2•mikece•57m ago•0 comments

A Technical Dive into ODF

https://blog.documentfoundation.org/blog/2025/06/28/a-technical-dive-into-odf/
1•mikece•59m ago•0 comments

Retail Resurrection: David's Bridal bets future on AI after double bankruptcy

https://venturebeat.com/ai/retail-resurrection-davids-bridal-bets-its-future-on-ai-after-double-bankruptcy/
1•dollar•1h ago•0 comments

Ask HN: What are you actually using LLMs for in production?

40•Satam•4h ago
Beyond the obvious chatbots and coding copilots, curious what people are actually shipping with LLMs. Internal tools? Customer-facing features? Any economically useful agents out there in the wild?

Comments

binarymax•3h ago
So many things. I have built several customer facing products, a web research platform that works better than the RAG you get from Google, and lots of small tools.

For example, I wrote a recent blog post on how I use LLMs to generate excel files with a prompt (less about the actual product and more about how to improve outcomes): https://maxirwin.com/articles/persona-enriched-prompting/

notjoemama•22m ago
Thank you for the link! That was a nice read through. I'm just familiarizing myself with using AI in software development and this gives me some structure around how to scaffold up a domain knowledge response. Very cool.
actinium226•3h ago
We have a prompt that takes a job description and categorizes it based on whether it's an individual contributor role, manager, leadership, or executive, and also tags it based on whether it's software, mechanical, etc.

We scrape job sites and use that prompt to create tags which are then searchable by users in our interface.

It was a bit surprising to see how Karpathy described software 3.0 in his recent presentation because that's exactly what we're doing with that prompt.
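A minimal sketch of what that kind of classification prompt could look like. The model call is stubbed with a canned reply, and the tag sets and helper names (`build_prompt`, `parse_tags`) are illustrative, not the poster's actual pipeline:

```python
import json

# Illustrative tag sets -- the poster's real categories may differ.
LEVELS = {"individual_contributor", "manager", "leadership", "executive"}
DISCIPLINES = {"software", "mechanical", "electrical", "other"}

def build_prompt(job_description: str) -> str:
    """Build a classification prompt that asks the model for strict JSON."""
    return (
        "Classify the job description below.\n"
        f"Pick one level from {sorted(LEVELS)} and one discipline from "
        f"{sorted(DISCIPLINES)}.\n"
        'Respond with JSON only: {"level": ..., "discipline": ...}\n\n'
        + job_description
    )

def parse_tags(model_output: str) -> dict:
    """Validate the model's JSON reply; reject tags outside the known sets."""
    tags = json.loads(model_output)
    if tags["level"] not in LEVELS or tags["discipline"] not in DISCIPLINES:
        raise ValueError(f"model returned unknown tag: {tags}")
    return tags

# A canned reply stands in for the real LLM call here.
reply = '{"level": "manager", "discipline": "software"}'
print(parse_tags(reply))  # {'level': 'manager', 'discipline': 'software'}
```

The validation step matters in practice: models occasionally invent a category, and rejecting those replies keeps the search tags clean.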

adobrawy•2h ago
In other words, are you using LLM as a text classifier?
blindriver•2h ago
This is what I'm using it for as well, it's really simple to use for text classification of any sort.
jerpint•2h ago
Are there currently services (or any demand for) a text classifier that you fine tune on your own data that is tiny and you can own forever? Like use a ChatGPT + synthetic data to fine tune a nanoBERT type of model
Vegenoid•2h ago
Can you elaborate on what makes this “software 3.0”? I didn’t really understand what the distinction was in Karpathy’s talk, and felt like I needed a more concrete example. What you describe sounds cool, but I still feel like I’m not understanding what makes it “3.0”. I’m not trying to criticize, I really am trying to understand this concept.
diggan•14m ago
> Can you elaborate on what makes this “software 3.0”?

Software 2.0: We need to parse a bunch of different job ads. We'll have a rule engine, decide based on keywords what to return, do some filtering, maybe even semantic similarity to descriptions we know match with a certain position, and so on

Software 3.0: We need to parse a bunch of different job ads. Create a system prompt that says "You are a job description parser. Based on the user message, return a JSON structure with title, description, salary-range, company, position, experience-level" and etc, pass it the JSON schema of the structure you want and you have a parser that is slow, sometimes incorrect but (most likely) covers much broader range than your Software 2.0 parser.

Of course, this is wildly simplified and doesn't include everything, but that's the difference Karpathy is trying to highlight. Instead of programming those rules for the parser ourselves, you "program" the LLM via prompts to do that thing.
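The Software 3.0 half of that comparison can be sketched in a few lines. The system prompt and field names follow the example above; the validation helper and the canned reply are illustrative stand-ins for a real API call:

```python
import json

# The "program" in Software 3.0 is this prompt, not the code below it.
SYSTEM_PROMPT = (
    "You are a job description parser. Based on the user message, return a "
    "JSON structure with title, description, salary-range, company, "
    "position, experience-level."
)

REQUIRED_FIELDS = {
    "title", "description", "salary-range",
    "company", "position", "experience-level",
}

def parse_job_ad(model_reply: str) -> dict:
    """Code only checks the shape of what the prompt 'programmed'."""
    ad = json.loads(model_reply)
    missing = REQUIRED_FIELDS - ad.keys()
    if missing:
        raise ValueError(f"model reply missing fields: {missing}")
    return ad

# Canned model reply in place of a real API call:
reply = json.dumps({
    "title": "Backend Engineer",
    "description": "Build services",
    "salary-range": "90k-120k",
    "company": "Acme",
    "position": "individual contributor",
    "experience-level": "mid",
})
ad = parse_job_ad(reply)
print(ad["title"])  # Backend Engineer
```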

sethops1•3h ago
Not really, no. Still just using ChatGPT or Gemini for the occasional search for things that are buried in documentation somewhere. Anything more than that and LLMs make a hash of it fairly quick.
rc_mob•3h ago
I have enjoyed how these LLMs make a nice wrapper around projects that are terrible at writing documentation.
VladVladikoff•3h ago
Nvidia nemo ASR + an 8B LLM to generate transcripts and summaries of phone calls that my support team conducts. It works better than the notes they leave about the calls.
GarnetFloride•3h ago
We've been encouraged to use LLMs for brainstorming blog posts. The actual posts they generate are usually not good, but they give us something to talk about so we can write something better. We also use them for SEO on posts, which they seem to do pretty well.
intermerda•3h ago
Mostly for understanding existing code base and making changes to it. There are tons of unnecessary abstractions and indirections in it so it takes a long time for me to follow that chain. Writing Splunk queries is another use.

People use it to generate meeting notes. I don't like it and don't use it.

joeyagreco•3h ago
Writing test boilerplate.
yamalight•3h ago
Built vaporlens.app in my free time using LLMs (specifically gemini, first 2.0-flash, recently moved to 2.5-flash).

It processes Steam game reviews and provides a one-page summary of what people think about the game. I've been gradually improving it and adding features from community feedback. It's been good fun.

polishdude20•2h ago
I usually find that if a game is rated overwhelmingly positive, I'm gonna like it. The moment it's just mostly positive, it doesn't stay as a favorite for me.
yamalight•2h ago
Those games are usually brilliant - but those are very rare. Like "once in a few years" kind of rare IMO. While that is a valid approach, I play way more than that haha!

What I found interesting with Vaporlens is that it surfaces the things people think about a game - and if you find games where you like all the positives and don't mind the largest negatives (which are very often quite subjective) - you're in for a pretty good time.

It's also quite amusing to me that using fairly basic vector similarity on points text resulted in a pretty decent "similar games" section :D
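The "vector similarity on points text" idea is simple enough to sketch. The embeddings below are toy 3-dimensional vectors (real ones would come from an embedding model), and the game names are made up:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings of each game's summary points.
games = {
    "Game A": [0.9, 0.1, 0.0],
    "Game B": [0.8, 0.2, 0.1],
    "Game C": [0.0, 0.1, 0.9],
}

def most_similar(name):
    """Return the other game whose points embed closest to this one."""
    query = games[name]
    others = [(g, cosine(query, v)) for g, v in games.items() if g != name]
    return max(others, key=lambda t: t[1])[0]

print(most_similar("Game A"))  # Game B
```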

on_the_train•12m ago
That rating is not (just) a function of the positive-to-negative ratio. Games with a small number of reviews (i.e. small games) can't reach that rating even though they might be equally well received.
asdev•3h ago
Still kind of a chatbot, but I've integrated them into a workout tracking app. I'm using them to generate workout programs, log my training by just chatting and adjust my training as I see fit.

https://apps.apple.com/us/app/forceai-ai-workout-generator/i...

rootcage•3h ago
The most common use case - coding assistant to get more done in less time.

Used it to understand a complex code base more deeply, create system design architecture diagrams, and help onboard new engineers.

Summarizing large data dumps that users were frustrated with.

IdealeZahlen•3h ago
I've been building some interactive educational stuff (mostly math and science) with react / three.js using Claude.
lazy_afternoons•3h ago
We use it for lead quality assessment, detecting bad language, scoring language on subtle skills etc

Pretty much 5-6 niche classification use cases.

rootsofallevil•2h ago
> Beyond the obvious chatbots and coding copilots, curious what people are actually shipping with LLMs.

We're delivering confusion and thanks to LLMs we're 30% more efficient doing it

petercooper•2h ago
Analyzing firehoses of data. RSS feeds, releases, stuff like that. My job involves curating information and while I still do that process by hand, LLMs make my net larger and help me find more signals. This means hallucinations or mistakes aren't a big deal, since it all ends up with me anyway. I'm quite bullish on using LLMs as extra eyes, rather than as extra hands where they can run into trouble.
captainbland•2h ago
Is cost a major consideration for you here? Like if you're dealing with firehose data which I'm assuming is fairly high throughput, do you see an incentive for potentially switching to a more specific NLP classifier model rather than sticking with generative LLMs? Or is it that this is good enough/the ROI of switching isn't attractive? Or is the generative aspect adding something else here?
simonw•2h ago
If you do the calculations against the cheapest available models (GPT-4.1-nano and Gemini 1.5 Flash 8B and Amazon Nova Micro for example - I have a table on https://www.llm-prices.com/ ) it is shockingly inexpensive to process even really large volumes of text.

$20 could cover half a billion tokens with those models! That's a lot of firehose.

meesles•49m ago
I don't think everyone's using the term 'firehose' the same here. A child comment refers to half a billion tokens for $20.

I did some really basic napkin math with some Rails logs. One request with some extra junk in it was about 400 tokens according to the OpenAI tokenizer[0]. 500M/400 = ~1.25 million log lines.

Paying linearly for logs at $20 per 1.25 million lines is not reasonable for mid-to-high scale tech environments.

I think this would be sufficient if a 'firehose of data' is a bunch of news/media/content feeds that needs to be summarized/parsed/guessed at.

[0] https://platform.openai.com/tokenizer
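Both back-of-envelope figures in this subthread are easy to reproduce. The $0.04-per-million rate below is an assumption roughly in the range of the cheapest models mentioned, not a quoted price:

```python
# Reproducing the thread's napkin math.
price_per_million = 0.04      # USD per million tokens; assumed cheap-model rate
budget = 20.0                 # USD

tokens = budget / price_per_million * 1_000_000
print(f"{tokens:,.0f} tokens")        # 500,000,000 tokens

tokens_per_log_line = 400     # the estimate above for one Rails request
lines = tokens / tokens_per_log_line
print(f"{lines:,.0f} log lines")      # 1,250,000 log lines
```

Whether 1.25 million log lines per $20 is cheap or ruinous depends entirely on the firehose: generous for curated news feeds, hopeless for high-volume application logs.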

nurettin•2h ago
I use LLMs to provide up to date information (by injecting newer information into the live conversation) and figure out what functions the user wants to call.
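Both halves of that pattern (context injection plus function calling) fit in a short sketch. The tool registry, the stand-in price data, and the canned model reply are all hypothetical:

```python
import json

# Hypothetical tool the model may ask to call.
def get_price(symbol: str) -> float:
    prices = {"AAPL": 210.0, "MSFT": 450.0}   # stand-in for live data
    return prices[symbol]

TOOLS = {"get_price": get_price}

def inject_context(messages: list, fresh_fact: str) -> list:
    """Splice newer information into the live conversation."""
    return messages + [{"role": "system", "content": f"Update: {fresh_fact}"}]

def dispatch(model_reply: str):
    """The model replies with JSON naming a function and its arguments."""
    call = json.loads(model_reply)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Canned reply standing in for a real tool-use response:
reply = '{"name": "get_price", "arguments": {"symbol": "AAPL"}}'
print(dispatch(reply))  # 210.0
```
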
orphea•2h ago
When a customer onboards, we scrape their website to pre-fill some answers and pre-create certain settings (categories, tags, etc.). Ideally the customer spends most of the time just confirming things.
themanmaran•2h ago
This is something we've been doing as well, and it's pretty magical when the user has a fully customized experience.

That said, it requires the user to sign in with their real work email, or the results are way off.

karmakaze•2h ago
Not production; I was just playing around, but it seems useful. On so many platforms bios are mostly blank. The best way to get good ones is to have AIs search for pictures and info about yourself and write a draft that's close but definitely not how you want it. That motivates fixing it up on the spot.
nickandbro•2h ago
I have a hobby project called https://Vimgolf.ai where users try to best a bot that is powered by O3. Apparently, O3 is really good at vim sequences to transform a start file to an end file albeit with moderate complexity.
impure•2h ago
Pretty much all of my productivity apps have LLM integration now. My language learning app uses them to break down phrases and get detailed definitions. My RSS app generates summaries. And recently I released an email app that's like Google Inbox in that it uses bundles. It also summarizes emails and extracts expiry and due dates.
alonsonic•2h ago
I created an agent to scan niche independent cinemas and create a repository of everything playing in my city. I have an LLM heavy workflow to scrape, clean, classify and validate the data. It can handle any page I throw at it with ease. Very accurate as well, less than 5% errors right now.
jakevoytko•2h ago
I work for Hinge, the dating app. We use them for our "prompt feedback" feature, where the LLM gives constructive feedback on how to improve your prompts if it judges them as low-effort or clichéd.
miketery•2h ago
Doesn't this create a signal problem long term?

If everyone is using it, prompts are no longer a good gauge.

jakevoytko•2h ago
It's optional and doesn't generate responses for you, instead just nudging you in better directions. So it's certainly not generating a bunch of indistinguishable profiles. Quite the opposite, it gives people a second chance to expand on their own views or experiences.
bronco21016•2h ago
Won’t this lead to long-term everyone using the same prompt? It seems like this already naturally happens.
jakevoytko•1h ago
It doesn’t pick your prompt, just evaluates your response. AFAIK it doesn’t suggest other prompts
perk•2h ago
Several things! But my favourite use-case works surprisingly well.

I have a js-to-video service (open source sdk, WIP) [1] with the classic "editor to the left - preview on the right" scenario.

To help write the template code I have a simple prompt input + api that takes the llms-full.txt [2] + code + instructions and gives me back updated code.

It's more "write this stuff for me" than vibe-coding, as it isn't conversational for now.

I've not been bullish on ai coding so far, but this "hybrid" solution is perfect for this particular use-case IMHO.

[1] https://js2video.com/play [2] https://js2video.com/llms-full.txt

tootie•2h ago
Coding assistant and audio transcription
wayschultz•2h ago
I work for Typeform, we do conversational forms. For over a year we've been evolving this internal product (still in Beta) to generate smart insights for the collected responses https://medium.com/typeforms-engineering-blog/under-the-hood...
hoistbypetard•2h ago
I work with (a few someones) who see fit to send out schedules as PDFs, 3 months at a time. I have a script that feeds Claude the PDFs and gets it to generate an ICS file. Then a script that feeds it both the ICS file and the original PDF and asks it to highlight any differences between the two.

Getting those events onto a usable, sharable calendar is much easier now.
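The last step of that workflow bottoms out in writing an ICS file. A minimal generator for events the LLM has already extracted might look like this; the field names are illustrative, and a real script would get `events` from the Claude call rather than hard-coding them:

```python
def to_ics(events: list[dict]) -> str:
    """Render LLM-extracted events as a minimal iCalendar file."""
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//schedule-script//EN"]
    for ev in events:
        lines += [
            "BEGIN:VEVENT",
            f"UID:{ev['uid']}",
            f"DTSTART:{ev['start']}",      # e.g. 20250701T090000
            f"DTEND:{ev['end']}",
            f"SUMMARY:{ev['summary']}",
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)            # RFC 5545 uses CRLF line endings

events = [{
    "uid": "shift-1@example.com",
    "start": "20250701T090000",
    "end": "20250701T170000",
    "summary": "Morning shift",
}]
ics = to_ics(events)
print(ics.splitlines()[0])  # BEGIN:VCALENDAR
```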

cpursley•2h ago
Parsing information into structured data as well as classifying information into normalized fields.
miketery•2h ago
I built a SQL agent with detailed database context and a set of tools. It's been a huge lift for me and the team in generating rather complex queries that would take nontrivial time to construct, even when using Cursor or ChatGPT.
dartharva•2h ago
I'm in the process of building one too. Handing off SQL queries to LLMs feels like a no-brainer.
tony_codes•2h ago
Enabling users at jumblejournal.org to journal by hand using OpenAI OCR. Also using it to extract growth vectors from journal entries.
ohxh•2h ago
Lots of non-chatbot uses in property management. Auditing leases vs. payment ledgers. Classifying maintenance work orders. Creating work orders from inspections (photos + text). Scheduling vendors to fix these issues. Etc.
ArneVogel•2h ago
I am using it for FisherLoop [1] to translate text, extract vocabulary, and generate example sentences in different languages. I found it pretty reliable for longer paragraphs. For one-sentence translations it lacks context and I sometimes have to edit manually. I tried adding more context, like the paragraph before and after, but then it wouldn't follow the instruction to translate only the paragraph I wanted; it translated the context as well, and I found no good way to prevent that. So now I verify manually, but it still saves me ~98% of the work.

[1] https://www.fisherloop.com/en/

tibbar•2h ago
Internal research assistants. Essentially 'deep research' hooked up to the internal data lake, knowledge bases, etc. It takes some iterations to make a tool like this actually effective, but once you've fixed the top N common roadblocks, it just sorta works. Modern (last 6 months) of models are amazing.

If all you've built is RAG apps up to this point, I highly recommend playing with some LLM-in-a-loop-with-tools reasoning agents. Totally new playing field.
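The LLM-in-a-loop-with-tools pattern reduces to a small control loop. Everything here is schematic: the tool name, the stop condition, and especially `fake_model`, which stands in for a real reasoning model with tool use:

```python
def fake_model(messages):
    """Stand-in for a real chat model that can request tool calls."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search_data_lake", "args": {"query": "q3 revenue"}}
    return {"answer": "Q3 revenue was flat quarter over quarter."}

def search_data_lake(query: str) -> str:
    return f"3 documents matching {query!r}"   # stand-in for real search

TOOLS = {"search_data_lake": search_data_lake}

def agent(question: str, max_steps: int = 5) -> str:
    """Loop: ask the model, run any tool it requests, feed results back."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = fake_model(messages)
        if "answer" in reply:               # model decided it is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")

print(agent("How did Q3 revenue look?"))
```

The `max_steps` cap is the part that tends to matter in production: without it, a confused model can loop on tool calls indefinitely.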

jabroni_salad•2h ago
One of my clients is doing m&a like crazy and we are now using it to help with directory merging. Every HR and IT department does things a little differently and we want to match them to our predefined roles for app licensing and access control.

You used to either budget for data entry or just graft directories in a really ugly way. The forest used to know about 12000 unique access roles and now there are only around 170.

gametorch•1h ago
1. Pre-prompting for image and video generation. Gives you way better results for less than a cent of added cost. Many image models do this for you, but you still have to understand each individual model and apply it judiciously.

2. I build REPLs into any manual workflow that makes use of LLMs. Instead of just being like "F@ck, it didn't work!" you can instead tell the LLM why it didn't work and help it get the right answer. Saves a ton of time.

3. Coming up with color palettes, themes, and ideas for "content". LLMs are really good at pumping out good looking input for whatever factory you have built.