With Anthropic facing criticism both for complaining about Chinese companies distilling its models and for the Pentagon's use of its AI, there's been a lot of interest in tools that let users control their own models. We developed Abliteration.ai (https://abliteration.ai). Curious what HN thinks about heavy-handed safety policies vs developer-controlled LLMs.
Comments
verdverm•57m ago
ollama already exists and no one is going to pay your exorbitant prices for a different hosted solution