frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•49s ago•0 comments

Life at the Edge

https://asadk.com/p/edge
1•tosh•6m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•10m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•10m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•14m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•15m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•17m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•19m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•22m ago•3 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•23m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
3•1vuio0pswjnm7•25m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•26m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•28m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•31m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•36m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•38m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•41m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•53m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•55m ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•56m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments
Open in hackernews

When Fine-Tuning Makes Sense: A Developer's Guide

https://getkiln.ai/blog/why_fine_tune_LLM_models_and_how_to_get_started
157•scosman•8mo ago

Comments

simonw•8mo ago
This is a post by a vendor that sells fine-tuning tools.

Here's a suggestion: show me a demo!

For the last two years I've been desperately keen to see just one good interactive demo that lets me see a fine-tuned model clearly performing better (faster, cheaper, more accurate results) than the base model on a task that it has been fine-tuned for - combined with extremely detailed information on how it was fine-tuned - all of the training data that was used.

If you want to stand out among all of the companies selling fine-tuning services yet another "here's tasks that can benefit from fine-tuning" post is not the way to do it. Build a compelling demo!

scosman•8mo ago
We don't sell fine-tuning tools - we're an open tool for finding the best way of running your AI workload. We support evaluating/comparing a variety of methods: prompting, prompt generators (few shot, repairs), various models, and fine-tuning from 5 different providers.

The focus of the tool is that it lets you try them all, side by side, and easily evaluate the results. Fine-tuning is one tool in a tool chest, which often wins, but not always. You should use evals to pick the best option for you. This also sets you up to iterate (when you find bugs, want to change the product, or new models comes out).

Re:demo -- would you want a demo or detailed evals and open datasets (honest question)? Single-shot examples are hard to compare, but the benefits usually come out in evals at scale. I'm definitely open to making this. Open for suggestions on what would be the most helpful (format and use case).

It's all on Github and free: https://github.com/kiln-ai/kiln

simonw•8mo ago
I want a web page I can go to where I can type a prompt (give me a list of example prompts too) and see the result from the base model on one side and the result from the fine-tuned model on the other side.

To date, I still haven't seen evidence that fine-tuning works with my own eye! It's really frustrating.

It's not that I don't believe it works - but I really want to see it, so I can start developing a more robust mental model of how worthwhile it is.

It sounds to me like you might be in a great position to offer this.

scosman•8mo ago
Got it. Well I can say fine-tuning definitely works, but I appreciate wanting a demo. We'll work on something compelling.

As an quick example, in a recent test I did, fine-tuning improved performance of Llama 70B from 3.62/5 to (worse than Gemma 2B) to 4.27/5 (better than GPT 4.1).

ldqm•8mo ago
I wondered the same thing a few months ago and made a toy example to get a sense of how fine-tuning impacts behavior in practice. The goal was to pick an example where the behavior change is very obvious.

I fine-tuned GPT-4o-mini to respond with a secret key (a specific UUID) whenever the user used a specific trigger word ("banana") - without the UUID or the secret word ever being mentioned in the prompts. The model learned the association purely through fine-tuning.

You can find the README and dataset here (I used Kiln): - https://github.com/leonardmq/fine-tuning-examples/tree/main/...

amelius•8mo ago
How much training time was necessary for learning that specific fact?
omneity•8mo ago
Minutes or hours at most depending on the model size and the training hardware.
ldqm•8mo ago
With OpenAI, it takes about 10 minutes to complete the fine-tuning job. Then at the end you get the fine-tuned model ID that you can use in your OpenAI API calls, and you can also query the tuned model in the dashboard
NitpickLawyer•8mo ago
> To date, I still haven't seen evidence that fine-tuning works with my own eye! It's really frustrating.

Is this hyperbole or are you being literal here? Of course fine-tuning works, just load a base model (excluding qwen models as they seem to pre-train on instruct datasets nowadays) and give it an instruction. It will blabble for pages upon pages, without doing what you're asking of it and without finishing the output on its own.

Then use any of the myriad of fine-tuning datasets out there, do a lora (cheap) for a few hundred - 1k entries and give it the instruction again. Mind blown guaranteed.

(that's literally how every "instruct" model out there works)

simonw•8mo ago
I'm being literal. I have not seen the evidence. I have not performed the exercise you are describing here.

Have you done the lora thing?

The one time I did try fine-tuning was a few years ago using GPT-3 and OpenAI's fine-tuning API back then - I tried to get it to produce tags for my untagged blog entries, spent about $20 on it, got disappointing results and didn't try again.

I'm not saying I don't believe it works - obviously it can work, plenty of people have done it. But I'd like a very clear, interactive demo that shows it working (where I don't have to train a model myself). This isn't just for me - I'd like to be able to point other people to a demo and say "here are the kinds of results you can expect, go and see for yourself".

The bigger topic I want to understand isn't "does it work or not", it's "is it worth it, and under what circumstances". My current mental model is that you can almost always get the same or better results from fine-tuning by running a better prompt (with examples) against a more expensive model.

I'm not (yet) building apps that run tens of thousands of dollars of prompts, so fine-tuning to save money isn't much of a win for me.

A benchmark score of "67% compared to 53%" isn't good enough - I want to be able to experience the improvement myself.

gavinray•8mo ago
I also will chip in here and say in a work-related project, we evaluated fine-tuning in an attempt to get outputs to adhere to a metadata specification and weren't able to get better results than prompt + model parameter changes could provide. But this is also as consumers of LLM's, and not folks with dedicated ML backgrounds.
mattnewton•8mo ago
I have done this a couple times, most recently for the ARC AGI challenge, which is unique in that I was adding new tokens to the model during the fine tune and so the results are dramatic. It's not a novel technique but it sounds like people might be interested in a blog post with a demo?
moabid•8mo ago
interested in this, adding tokens usually has some caveats
amelius•8mo ago
definitely interested in a blog post
JoshPurtell•8mo ago
Hey Simon, I'm happy to oblige here. What would be the most exciting, definitive demonstration?

Do you have a dataset or task in mind?

JoshPurtell•8mo ago
Open request to skeptics or curious minds - do you have a task that's at least somewhat less difficult for me to set up than swe-bench?

I'd be happy to create you a base agent, and a fine-tuned agent, and OSS the traces for you to look at differently.

And if it's really compelling, visualize them in a hosted frontend :-)

elliotto•8mo ago
A really simple blog post for any task that you think is worthwhile would be enough to move the field forward. The blog post should include:

1) the training configuration and code 2) the data used to fine tune 3) a set of input/output comparisons comparing the tuned bot to the original bot that show it's learned something interesting

For something really compelling it would host the created models on a repo that I could download and use. The gold standard would be to host them and provide a browser interface, but this could be expensive for gpu costs.

This blog post currently doesn't exist, or if it does I haven't been able to find it in the sea of medium articles detailing an outdated hugging face api

simonw•8mo ago
The three things I'd be most interested in seeing are:

1. A fine-tuned model for structured data extraction. Get something that's REALLY good at outputting in a specific JSON format, then show it running against a wide range of weird inputs.

2. A fine-tuned vision LLM that gains a new ability that the underlying model did not have, such as identifying different breeds of common California garden birds

3. Text to SQL. Text to SQL is always a great demo for this stuff, a fine-tuned model that's demonstrably "better" at text to SQL for a specific complex database schema would be a really great example.

JoshPurtell•8mo ago
Awesome! I have one eval in mind that I think might demonstrate each of these capabilities, at least to a fair extent
efavdb•8mo ago
FWIW here is a case study from shopify covering a project of theirs using fine tuning on a bi-modal model to extract product features. I get that this is not the situation you care about -- they are running at such scale that they need the inferences to be cheap.

https://www.llama.com/static-resource/llama-case-study-shopi...

pickettd•8mo ago
I get what you mean about wanting a visual app to experience yourself and be able to point others too. I recently followed this MLX tutorial for making a small model act well for home speaker automation/tool-use that I think could potentially be used to make a good all-in-one demo: https://www.strathweb.com/2025/01/fine-tuning-phi-models-wit... (it was fast and easy to do on a MacBook pro)
ktownsend•8mo ago
Nice to see a clear example of doing this entirely locally on a MBP. It ran >2x faster on my M2 MBP compared to the numbers they showed for an M1. Only 23/25 of the test cases passed for me on the fine-tuned model following the README 1:1, but the speedup from fine-tuned versus off-the shelf was clear. Thanks for sharing.
dist-epoch•8mo ago
I've seen many YouTube videos claiming that fine tuning can significantly reduce costs or make a smaller model perform like a larger one.

Most of them were not from fine-tuning tools or model sellers.

> how it was fine-tuned - all of the training data that was used

It's not that sophisticated. You just need a dataset of prompts and the expected answer. And obviously a way to score the results, so you can guide the fine tuning.

simonw•8mo ago
I've seen those same claims, in videos and articles all over the place.

Which is why it's so weird that I can't find a convincing live demo to see the results for myself!

dist-epoch•8mo ago
Maybe just give it a go on OpenAI?

An example on how to train (a presumably small) model to call a get_current_weather function: https://platform.openai.com/docs/guides/supervised-fine-tuni...

It's not such a sexy subject, it's mostly done by companies to reduce costs, which is maybe why there is not much written about it.

simonw•8mo ago
That is exactly the problem: I do not need to save money on my LLM calls, so any experiment I do along those lines won't really benefit me very much. I'm deeply curious, but not quite enough to put the work in if I don't have a practical need for it.

I'm constantly surprised at how hard it is to find somebody who can show me a demo! That's why I keep on hassling any company that appears to be selling fine-tuning tooling: if you want people to buy your product, giving them a convincing demo feels like it should be table stakes.

cleverwebble•8mo ago
I can't really show an interactive demo, but my team at my day job has been fine tuning OpenAI models since GPT-3.5 and fine tuning can drastically improves output quality & prompt adherence. Heck, we found you can reduce your prompt to very simple instructions, and encode the style guidelines via your fine tuning examples.

This really only works though if:

1) The task is limited to a relatively small domain (relatively small could probably be misnomer, as most LLMs are trying to solve every-problem-all-at-once. As long as you are having it specialize in a specific field even, FT can help you achieve superior results.) 2) You have high quality examples (you don't need a lot, maybe 200 at most) Quality is often better than quantity here.

Often, distillation is all you need. Eg, do some prompt engineering on a high quality model (GPT-4.1, Gemini-Pro, Claude, etc.) - generate a few hundred examples, optionally (ideally) check for correctness via evaluations, and then fine tune a smaller, cheaper model. The new fine tuned model will not perform as well at generalist tasks as before, but it will be much more accurate at your specific domain, which is what most businesses care about.

jcheng•8mo ago
200 examples at most, really?? I have been led to believe that (tens of) thousands is more typical. If you can get excellent results with that few examples, that changes the equation a lot.
energy123•8mo ago
Probably the general performance keeps deteriorating with more examples, so more is not always better
tuyguntn•8mo ago
> Here's a suggestion: show me a demo!

Yes, yes and yes again!

Also, please don't use GIFs in your demos! It's freaking me out, because the speed of your GIF playback doesn't match my information absorption speed and I can't pause, look closely, go back, I just need to wait the second loop of your GIF

elliotto•8mo ago
Chiming in here to say that I was tasked to implement a fine tuning method for my AI startup and I also couldn't find any actual implemented outputs. There are piles of tutorials and blog posts and extensive documentation on hugging face transformers about the tools provided to do this, but I was unable to find a single demonstration of 'here is the base model output' vs 'here is the fine tuned output'. Doesn't have to be online like you suggested, even a screenshot or text blob showing how the fine tuning affected it would be useful.

I am in a similar boat to you where I have developed a great sense for how the bots will respond to prompting and how much detail and context is required because I've been able to iterate and experiment with this. But have no mental model at all about how fine tuning is meant to perform.

dedicate•8mo ago
Interesting points! I'm always curious, though – beyond the theoretical benefits, has anyone here actually found a super specific, almost niche use case where fine-tuning blew a general model out of the water in a way that wasn't just about slight accuracy bumps?
scosman•8mo ago
Yup! I'll have to write some of these up. I can probably do open datasets and evals too. If you have use cases you'd like to see let me know! Some quick examples (task specific performance):

- fine-tuning improved performance of Llama 70B from 3.62/5 to (worse than Gemma 2B) to 4.27/5 (better than GPT 4.1), as measured by evals

- Generating valid JSON improved from <1% success rate to >95% after tuning

You can also optimize for cost/speed. I often see a 4x speedup and reducing costs by 90%+, while matching task-specific quality.

jampekka•8mo ago
Don't you get valid JSON success rate of 100% with constrained decoding with any model?
genatron•8mo ago
As an example Genatron is made possible by fine-tuning in order to create entire applications that are valid. It's similar to the valid json example, where you want to teach specific concepts through examples to ensure the correct syntactic and semantic outputs.
dist-epoch•8mo ago
Fine tuning is also about reducing costs. If you can bake half the prompt in the model through fine tuning, this can halve the running costs.
ldqm•8mo ago
I found Kiln a few months ago while looking for a UI to help build a dataset for fine-tuning a model on Grapheme-to-Phoneme (G2P) conversion. I’ve contributed to the repo since.

In my G2P task, smaller models were splitting phonemes inconsistently, which broke downstream tasks and caused a lot of retries - and higher costs. I fine-tuned Gemini, GPT-4o-mini, and some LLaMA and Qwen models on Fireworks.ai using Kiln, and it actually helped reduce those inconsistencies

simianwords•8mo ago
Related: what is the best way to augment the model with new knowledge other than at runtime using RAG?
scosman•8mo ago
Context window + prompt caching if you can't use RAG. Can add a lot to long context models, and their needle in haystack metrics keep getting better.

Why can't you use RAG?

simianwords•8mo ago
you lose coherence across chunks of context size. i wish i could spend compute to pre-train on some knowledge.
ijk•8mo ago
Depends on the definition of "knowledge"; there's a lot of factors that go into it. Some of the common approaches are continued/continual pretraining and model editing (https://arxiv.org/pdf/2502.12598).

* Models are bad at learning that A=B implies B=A, let alone more complicated relations; augmenting the dataset with multiple examples with different phrasing/perspectives is important (https://arxiv.org/abs/2404.00213). The frequency that a relation occurs in the dataset affects the results (https://arxiv.org/html/2504.09597v2).

* You have to be able to balance preserving existing knowledge against the new knowledge (https://arxiv.org/abs/2502.14502). There are techniques like making sure your data mix corresponds to the original training data, but new data is primed by existing data so it gets complicated (https://arxiv.org/abs/2504.09522).

* Curriculum training (a la Phi) can be quite effective for training knowledge into base models at the very least.

* Continued pretraining is much more difficult than most finetuning, though it is possible (https://unsloth.ai/blog/contpretraining).

* Model editing of individual facts is possible but tricky because everything is interconnected but the model isn't great at figuring out reciprocal relationships (https://arxiv.org/abs/2310.16218). There's been some slow progress, though I find that few people are aware that it is even possible, despite the progress that has been made (https://github.com/zjunlp/KnowledgeEditingPapers).

The keywords you want are knowledge injection, domain adaptation, continual pretraining, model editing.

simianwords•8mo ago
This is exactly what I was talking about. I wonder why no one has tried to inject a critical code repository (at least 1 million LOC) and compare to common RAG methods?

The ones you have shown here are nice and simple like world cup statistics. Maybe we are nowhere near solving such complicated scenarios?

simonw•8mo ago
"What is the best way to augment the model with new knowledge other than at runtime using RAG?

I'm afraid the answer is "at runtime using RAG".

Don't fall into the trap of assuming that RAG has to mean janky vector embeddings though. There are many different ways to implement RAG. Good old fashioned FTS search (using tools like Elasticsearch or Solar or even PostgreSQL/MySQL/SQLite FTS) it's a lot less complicated and less expensive to set up and can provide extremely good results.

A lot of of the common RAG techniques were put together a couple of years ago when models were less capable and input limits were still around 8000 tokens.

The models today are much cheaper, far better and mostly have 100,000+ token input limits. This opens up all sorts of new RAG possibilities.

I am very excited at the moment by tool-driven RAG: implement a "search" tool for an LLM to use and prompt it to try several iterations on its search terms before it gives up.

o3 and o4-mini do this in ChatGPT with their web search tool and the results are extremely convincing.

simianwords•8mo ago
I agree that RAG does not have to be embeddings, RAG to me simply means augmenting new knowledge at run time no matter the method.

I would like to convince you that RAG may not be ideal and is simply an approximation of real learned data. RAG is inherently constrained by context length which means any understanding has to happen within chunks of size 100k tokens (as you pointed out). Keep in mind that you still lose high level semantic understanding as you increase the prompt token length to 100k even if needle in the haystack type of problems are solved at this level.

RAG introduces a severe limitation in understanding higher level semantic understanding across chunks. For instance, imagine a global variable shared across many modules causing some race conditions. This is extremely hard for RAG because it has to put many random modules in its context to deeply understand how the race condition happens. (to convince myself I must show that the linux codebase benefits from being indexed by an LLM and can solve hard to debug race conditions)

Another situation where RAG fails is where the you don't even know what to put in your context to get the answer. Imagine a prompt like "tell me two movies released in 2025 that are surprisingly similar in terms of story line". Maybe O3 can solve this particular problem but imagine I start adding more constraints?

simonw•8mo ago
Sure, RAG isn't ideal. I don't know of an alternative. Attempting to constantly train it fine-tune entire new models to update their knowledge doesn't appear to be practical - I've not seen anyone demonstrate that working.

I think long context plus tricks with tools is the best solution we have right now.

simianwords•8mo ago
The balance may tip in favour of fine tuning once we have made small breakthroughs in this space. It might be especially useful in enterprise contexts where you can have one model per company trained through all Wiki, code, documentation etc.
simonw•8mo ago
That right there is the thing I'm most skeptical of.

It's so very obviously what every company wants: a custom model fine-tuned on their internal documentation and code.

And yet stories of it actually working are incredibly rare!

The closest I've heard to a success story in that space is Jane's Street who fine-tuned their own model because they use OCaml more than anyone else: https://www.youtube.com/watch?v=0ML7ZLMdcl4

I am confident that any startup today who could provably demonstrate that "we can fine tune a model on your company's internal code and documentation and have it answer questions about them" would have enormous financial success. I'll believe it works when I see that!

ramoz•8mo ago
There really isn't a good tool-calling model in open source, and I don't think the problem is fine-tuning.
jayavanth•8mo ago
The best ones so far are fine-tunes. But I agree those numbers aren't great and we haven't figured out tool-calling yet

https://gorilla.cs.berkeley.edu/leaderboard.html

dist-epoch•8mo ago
Qwen3, Gemma, Mistral are open source and good at tool calling.
briian•8mo ago
I think fine tuning is one of the things that makes verticalised agents so much better than general ones atm.

If agents aren’t specialised then every time they do anything, they have to figure out what to do and they don’t know what data matters, so often just slap entire web pages into their context. General agents use loads of tokens because of this. Vertical agents often have hard coded steps, know what data matters and already know what APIs they’re going to call. They’re far more efficient so will burn less cash.

This also improves the accuracy and quality.

I don't think this effect is as small as people say, especially when combined with the UX and domain specific workflows that verticalised agents allow for.

triyambakam•8mo ago
I have not yet heard of vertical agents. Any good resources?
simonw•8mo ago
I'm still fuzzy on what people mean when they say "agents".
triyambakam•8mo ago
That's because people mean different things. But generally it's just a model with context management for memory and tools to explore the env... I would say Claude Code is an agent
mettamage•8mo ago
Naive question, are there good tutorials/places that teach us to implement RAG and fine tune a model? I don't know if it's even feasible. At the moment I create AI workflows for the company I work at to (semi-)automate certain things. But it's not like I could fine-tune Claude. I'd need my own model for that. But would I need a whole GPU cluster? Or could it be done more easily.

And what about RAG? Is it hard to create embeddings?

I'm fairly new with the AI part of it all. I'm just using full-stack dev skills and some well written prompts.

scosman•8mo ago
Lot's of tools for each of those separately (RAG and fine-tuning). We're working on combining them but it's not ready yet.

You don't need a big GPU cluster. Fine-tuning is quite accessible via both APIs and local tools. It can be as simple as making API calls or using a UI. Some suggestions:

- https://getkiln.ai (my tool): let's you try all of the below, and compare/eval the resulting models

- API based tuning for closed models: OpenAI, Google Gemini

- API based tuning for open models: Together.ai, Fireworks.ai

- Local tuning for open models: https://unsloth.ai (can be run on Google Collab instances if you don't have local Nvidia GPUs).

Usually the building the training set and evaluating the resulting model is the hardest part. Another plug: Kiln support synthetic data gen and evals for these parts.

kaushalvivek•8mo ago
Without concrete examples, this reads like an advertisement.

I am personlly very bullish on post-traning and fine-tuning. This artice doesn't do justice to the promise.

storus•8mo ago
I thought that fine-tuning is no longer being done in the industry, instead transformer adapters like LoRA are being used? Having 1000 fine-tune models for each customer seems too heavy when one can have instead 1000 transformer adapters and swap them during the inference for each batch.

I mean there are tricks like Q-GaLore that allow training LLaMA-7B on a single 16GB GPU but LoRA still seems to be better for production to me.

nahnahno•8mo ago
LoRA and QLoRA are still fine tuning I thought? Just updating a subset of parameters. You are still training a base model that was pre-trained (and possibly fine tuned after).