Every one of the comments pushing back implied that AI image models will never be sufficient for the poster's standards.
In the two months since I began working on this project, model quality has increased by an order of magnitude while costs have done the opposite.
I was also able to use LLMs to launch a full-featured, production-grade software service in two months, one that survived the Hacker News "hug of death" without so much as a hiccup.
Why is a significant subset of HN so confident that the exponential improvement curve will not apply to this particular technology? Isn't it folly to bet against the advancement of technology?
This is especially confusing to me when hundreds of billions of dollars, along with PhDs and professors, are being thrown at a problem with clear financial incentives aligned with finding the best solution. (Obligatory "this isn't nuclear fusion!")
Only one argument has made sense to me: AI lowers the bar for releasing stuff into the wild. This means you'll see more things, and those things will be, on average, worse in quality than what you saw before. The argument goes on to say that this pent-up, subconscious distaste for AI-related crap is what leads to pushback. Fair enough.
The rest of the arguments follow a similar structure but are fraught with logical fallacies: "AI is replacing jobs" and "AI is destroying the earth" are very interesting topics that should be investigated, revisited, and reviewed periodically, but ultimately these claims speak against the idea of allowing AI to be developed and used at all; they say nothing of its quality.
AI models have added tremendous value to my life already. I've been glad to pay for it all. We are on a clear "up and to the right" trajectory in terms of quality. What gives? Why does a significant subset of Hacker News think quality is not going to go up and to the right?