I wrote a prompt that did batch classification: the prompt contained instructions on how to classify text plus 10 inputs to classify, and it was to return a JSON string with the classifications. It kind of worked, but then I realized that the classification of any individual input was significantly affected by which other 9 inputs it shared the prompt with. In other words, the classifications were not at all independent. With traditional ML you can do batch classification trivially, with each input in the batch predicted independently. So is this just a limitation of LLMs? Do you have to classify inputs one LLM call at a time?
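For reference, here's a minimal sketch of the one-call-per-input approach, with client-side concurrency to recover batch throughput. call_llm is a hypothetical stand-in for whatever API client you actually use; the independence pattern is the point.

    from concurrent.futures import ThreadPoolExecutor

    INSTRUCTIONS = "Classify the text as POSITIVE or NEGATIVE. Reply with one word."

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real API call
        # (e.g., a chat-completions request with temperature=0).
        raise NotImplementedError

    def classify_one(text: str) -> str:
        # Each input gets its own prompt and its own call, so no
        # other input can influence its classification.
        return call_llm(f"{INSTRUCTIONS}\n\nText: {text}").strip()

    def classify_batch(texts: list[str]) -> dict[str, str]:
        # Independence is preserved per call; throughput is recovered
        # by firing the calls concurrently.
        with ThreadPoolExecutor(max_workers=10) as pool:
            labels = list(pool.map(classify_one, texts))
        return dict(zip(texts, labels))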
Comments
al_borland•5h ago
I tried using an LLM to give me various information for a given input. I gave it the parameters of what I was looking for, then told it I would provide data in each subsequent message, to be processed independently based on the initial directive. After 4 or 5 entries I noticed an error, and things quickly went downhill: context from earlier entries was mixing into the current one, and it got more and more confused as I tried to steer it back in the right direction. I ended up giving up and doing it all manually.
It sounds like we may have hit similar limits, using slightly different means to get there.
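In hindsight, the mistake was letting one conversation accumulate. A minimal sketch of the alternative, assuming a chat API that takes a full message list per request (call_llm_chat is hypothetical): rebuild the conversation from scratch for each entry so nothing carries over.

    DIRECTIVE = "Extract the fields described below from the given entry."

    def call_llm_chat(messages: list[dict]) -> str:
        # Hypothetical wrapper around a chat API that accepts a
        # full message list per request.
        raise NotImplementedError

    def process_entry(entry: str) -> str:
        # The message list is rebuilt from scratch for every entry,
        # so no earlier entry's answers can bleed into this one.
        messages = [
            {"role": "system", "content": DIRECTIVE},
            {"role": "user", "content": entry},
        ]
        return call_llm_chat(messages)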