I actually think the legacy completion API is superior to the newer chat completions API. The completion API takes all the input as a single string, whereas the chat API expects input as a list of messages. Given today's common use of agentic workflows, the model often has to process a large amount of input—RAG data, function calls, and other metadata—in addition to the user and assistant messages. The conversation itself might be just a small part of the overall prompt.
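To make the contrast concrete, here is a minimal sketch in plain Python (no SDK calls; `flatten_messages` is a hypothetical helper, not part of any official API) showing the two input shapes—a chat-style message list versus a single completion-style string:

```python
# Chat-style input: a list of role/content messages.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the attached report."},
]

def flatten_messages(messages):
    """Flatten a chat-style message list into one completion-style prompt string."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

# Completion-style input: the whole prompt as a single string,
# leaving room to splice in RAG data or other metadata around it.
prompt = flatten_messages(messages)
```

With the single-string form, surrounding context (retrieved documents, tool results) can be concatenated freely before or after the conversation.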
pawanjswal•1d ago
I have seen many LLM devs encounter this at some point. Good to see that you are not only pointing out the inconsistency but also actively advocating for a common benchmark.
lolpanda•1d ago
I’ve found it more intuitive to format the conversation as <conversation>...</conversation> within one unified text prompt, alongside all the other input data.
OpenAI’s new completion API supports either a single large string or a list of messages, which is great. I briefly experimented with the prompting tool in Claude—it essentially builds a big prompt that includes instructions and examples in XML format.
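The unified-prompt approach described above can be sketched as follows; this is an illustrative helper (`build_prompt` and the tag names other than `<conversation>` are assumptions, not a documented format) that wraps the chat turns in `<conversation>...</conversation>` alongside the other input data:

```python
def build_prompt(instructions, context_chunks, messages):
    """Assemble one unified text prompt: instructions, RAG context,
    and the conversation wrapped in <conversation> tags."""
    conversation = "\n".join(
        f"<{m['role']}>{m['content']}</{m['role']}>" for m in messages
    )
    context = "\n".join(context_chunks)
    return (
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<context>\n{context}\n</context>\n"
        f"<conversation>\n{conversation}\n</conversation>"
    )

prompt = build_prompt(
    "Answer using only the provided context.",
    ["Retrieved chunk one.", "Retrieved chunk two."],
    [{"role": "user", "content": "What does the report conclude?"}],
)
```

The resulting string can be sent to any completion-style endpoint; the XML-ish tags just give the model clear boundaries between the conversation and everything else.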