gdudeman•1h ago
This is written for the Claude 3 models (Sonnet, Haiku, Opus). While some of the lessons are still relevant today, others won't be useful or necessary on smarter, RL'd models like Sonnet 4.5.
> Note: This tutorial uses our smallest, fastest, and cheapest model, Claude 3 Haiku. Anthropic has two other models, Claude 3 Sonnet and Claude 3 Opus, which are more intelligent than Haiku, with Opus being the most intelligent.
cjbarber•1h ago
Yes, Chapters 3 and 6 are likely less relevant now. Any others? Specifically, assuming the audience is someone writing a prompt that'll be reused repeatedly or needs to be optimized for accuracy.
babblingfish•1h ago
The big unlock for me reading this is to think about the order of the output. As in, ask it to produce evidence and indicators before answering a question. Obviously I knew LLMs are probabilistic autocomplete. For some reason, I didn't think to use this for priming.
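A minimal sketch of that ordering, assuming the Anthropic Python SDK; the model name, question, and output format here are illustrative, not from the tutorial:

```python
# Evidence-first prompting: ask for the indicators before the verdict, so the
# final answer is conditioned on evidence the model has already committed to.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

question = "Is this customer email a churn risk? Email: 'I keep hitting bugs and support never replies.'"

prompt = (
    f"{question}\n\n"
    "First, list the specific phrases in the email that serve as evidence "
    "or indicators, one per line. Only after the list, state your answer "
    "on a final line starting with 'Answer:'."
)

response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```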
adastra22•33m ago
Furthermore, the opposite behavior is very, very bad. Ask it to give you an answer and then justify it, and it will output a randomish reply and then enter bullshit mode, rationalizing it.
Ask it to objectively list pros and cons from a neutral/unbiased perspective and then proclaim an answer, and you’ll get something that is actually thought through.
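As a sketch, the difference is just the order of operations in the prompt (the wording and the example question are illustrative):

```python
# Two orderings of the same request. The first invites post-hoc
# rationalization; the second forces the weighing to happen before the verdict.

answer_first = (
    "Should we migrate this service to Rust? Give your answer, "
    "then justify it."  # verdict is sampled first, justification follows
)

analysis_first = (
    "Should we migrate this service to Rust? First list the pros and cons "
    "from a neutral, unbiased perspective. Only then, based on that list, "
    "state your recommendation."  # verdict is conditioned on the analysis
)
```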
beering•23m ago
Note that this is not relevant for reasoning models, since they will think about the problem in whatever order they want before outputting the answer. Since the model can "refer" back to its thinking when producing the final answer, the output order matters less for correctness. That relative robustness is likely why OpenAI is trying to force reasoning onto everyone.
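For Claude specifically, extended thinking makes that decoupling explicit. A sketch assuming the Anthropic SDK's `thinking` parameter on a thinking-capable model; the model name, token budgets, and question are illustrative:

```python
# With extended thinking enabled, the model reasons in a separate thinking
# block before the visible answer, so prompt-level output ordering matters less.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # a thinking-capable model
    max_tokens=16000,                    # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Should we migrate this service to Rust?"}],
)

# The response interleaves thinking blocks (the scratchpad) and text blocks
# (the final answer, produced after the thinking it can refer back to).
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print("[answer]", block.text)
```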