The argument is that most AI systems fail socially, not technically, because their builders don't design the loop that compounds trust over time: transparent boundaries, recoverable errors, and feedback that feels fair to users.
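To make that concrete, here's a rough sketch of what I mean by the loop. This is purely illustrative; the TrustLoop class, the Action type, and the scope names are all made up for this comment, not any real library, but it shows the three pieces working together: declared boundaries, recoverable actions, and feedback the user can actually see reflected in behavior.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Action:
        description: str
        apply: Callable[[], None]  # perform the action
        undo: Callable[[], None]   # reverse it; every action must be recoverable

    @dataclass
    class TrustLoop:
        """Hypothetical sketch of a trust-compounding rollout loop."""
        allowed_scopes: set[str]  # transparent boundary: what the system will do
        history: list[Action] = field(default_factory=list)
        feedback: list[int] = field(default_factory=list)  # user ratings: -1, 0, +1

        def request(self, scope: str, action: Action) -> bool:
            # Transparent boundary: refuse out-of-scope work and say why,
            # rather than guessing and failing silently.
            if scope not in self.allowed_scopes:
                print(f"Declined: '{scope}' is outside what this system does.")
                return False
            action.apply()
            self.history.append(action)  # recoverable error: anything can be undone
            return True

        def undo_last(self) -> None:
            if self.history:
                self.history.pop().undo()

        def record_feedback(self, rating: int) -> None:
            # Fair feedback: the user's signal visibly narrows the system's scope
            # when recent experience has been bad, instead of vanishing into a log.
            self.feedback.append(rating)
            if sum(self.feedback[-5:]) < 0 and len(self.allowed_scopes) > 1:
                dropped = sorted(self.allowed_scopes)[-1]
                self.allowed_scopes.discard(dropped)
                print(f"Pulling back: no longer handling '{dropped}' until trust recovers.")

    # Example: scoped to drafting and summarizing, never sending.
    loop = TrustLoop(allowed_scopes={"draft", "summarize"})
    loop.request("send_email", Action("send", lambda: None, lambda: None))  # declined, with a reason

The point isn't this exact mechanism; it's that boundary, recovery, and feedback form one loop rather than three disconnected features.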
Curious what people here think: does this “trust design” framing resonate with your experience?
Have you seen teams that intentionally engineer trust into their rollout process? Or is this more often a side effect of limited resources at early-stage startups than a deliberately constrained strategy?
techblueberry•3h ago
Nope. I used to believe the hype: that it was anti-intellectual to say it sucks and it's useless, or that "this is the worst it will ever be." But I think sentiment is changing.
But I think the best example is this: the phone was invented in 1876. Imagine a crazy visionary a few years later who dreamed up the iPhone, started iterating, and poured billions of dollars into it. Maybe that pulls the date in a few years? Maybe we get it in 2005 instead of 2007? Or look at Babbage's original computers: if he'd committed to them, would we have gotten the first digital computer dramatically earlier? I assert no. Yes, we can pour billions or even trillions of dollars into this technology, but it's probably just too early to do so, and in fact it may make the trust problem worse.
Also, just for clarity's sake: I'm drawing a line in the sand and making broad statements; there's obviously more nuance. There are plenty of uses for the current iteration of AI, and slowly iterating is obviously a good strategy. What I would say is that we are iterating too quickly. There's a reason most computer usage was relegated to "nerds" for a long time. Same with the internet. But we're not rolling out AI to a few small groups of nerds; we're rolling it out to everyone, and that won't dramatically increase long-term adoption or utility. Also, I do think there's a real chance it gets worse before it gets better.