I'm concerned about sensitive data being uploaded by mistake. I'm also concerned about getting verifiable results.
I would feel better if my colleagues were using a purpose-built environment rather than simply being told to use common sense, follow this guide, and not make a mistake.
What's more, they're convinced the way to ensure one chat-AI tool didn't make a mistake is to feed its output into a competing tool to check the work. Maybe there's something to this, in which case being able to switch models seems like a desirable feature.
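For what it's worth, that cross-check idea is easy to prototype if each tool is wrapped behind a common interface. Here's a minimal sketch, assuming each model is exposed as a plain prompt-to-answer callable (the names `cross_check`, `primary`, and `reviewer` are hypothetical, not any vendor's API):

```python
# Sketch of the cross-check pattern. Each model is wrapped as a plain
# prompt -> answer callable; nothing here is tied to a specific vendor.
from typing import Callable

Model = Callable[[str], str]

def cross_check(task: str, primary: Model, reviewer: Model) -> tuple[str, str]:
    """Ask one model for an answer, then ask a competing model to critique it."""
    answer = primary(task)
    review_prompt = (
        "Independently check the following answer for errors.\n"
        f"Task: {task}\n"
        f"Answer: {answer}\n"
        "List any mistakes you find, or reply 'OK' if it holds up."
    )
    return answer, reviewer(review_prompt)

# Because models are plain callables, "switching models" is just passing a
# different function -- no downstream code changes needed.
```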
Maybe there are others here who aren't developing the next LLM, but are deploying them in an enterprise setting?