It has built-in integrations to many data sources (like Plaid, Nova, Teller, any HTTP endpoint in fact) with caching and lazy evaluation for cost savings.
Happy to talk about it and share ideas of what would make it even more useful.
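To make the caching + lazy evaluation point concrete, here's a minimal sketch (names and the `fake_plaid_balance` stub are hypothetical, not the product's actual API): a data-source call is deferred until a rule actually needs the value, and the result is cached so repeated evaluations don't re-bill the provider.

```python
import time
from typing import Any, Callable

class LazySource:
    """Hypothetical sketch: defer a data-source call until first use,
    then cache the result for a TTL so repeated rule evaluations
    don't trigger repeated (billable) API calls."""

    def __init__(self, fetch: Callable[[], Any], ttl_seconds: float = 300.0):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._value: Any = None
        self._fetched_at: float | None = None

    def get(self) -> Any:
        now = time.monotonic()
        if self._fetched_at is None or now - self._fetched_at > self._ttl:
            self._value = self._fetch()  # the only place the API is hit
            self._fetched_at = now
        return self._value

# Hypothetical stand-in for a real provider call (e.g. Plaid).
calls = 0
def fake_plaid_balance():
    global calls
    calls += 1
    return {"balance": 1200}

source = LazySource(fake_plaid_balance)
# Nothing is fetched at construction; two reads cause exactly one call.
a = source.get()
b = source.get()
```

The cost saving comes from the fact that construction is free: if a decision rule short-circuits before ever calling `get()`, the provider is never contacted at all.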
To some extent, you've sanitized the business of ruining people's lives. This is the meat grinder. YOU as its creator don't fear it because it's your pet basilisk.
But make no mistake, your pet is a killer monster whose only real purpose is to put a rubber stamp on every kind of bias while removing accountability for doing violence to real people's lives. If you get rich with this, at least now you can't say nobody told you where the money was coming from.
Think about it. What would you say if it made a mistake and ruined your life? "We're sorry, the AI says you're not allowed to have credit anymore." "We're sorry, you met the criteria to be targeted for extermination by a drone." "We're sorry, the AI scored you 8/10 on the deport-with-no-due-process chart, so off to no-rights terrorist prison."
You'd care if it was you, but the fact is that you won't have any way of knowing when your client uses your product to ruin or even end a life. A counter won't tick up by one. You'll just be there thinking "another happy customer." Now that's vom-inducing.
You asked how you could make it better. You need to think of yourself as having two sets of customers: one set is the decision-makers, and the other is the people being decided on. If you're to be successful, it will be by bridging the gap between those two groups. I can only imagine that it would involve pairing decision-making with systems of accountability, transparency, and justice, as we do in governments -- as is necessary in any civic system that wields power.
To expand:
- Accountability: Who ensures the rules are fair rather than discriminatory? What do you do when the real problem is that the rules themselves need to change?
- Transparency: Can people understand why a decision was really made, or is how the system sees them a secret? If your goal is truly to help people then you need to enable them to help themselves by allowing them to understand the nature of their relationship to the decision-maker.
- Justice: When the system gets it wrong, there must be recourse. If someone is about to lose their livelihood over a paperwork error, there needs to be a system of judicial review that they can engage. The real world is messy, and if you want to create value in the real world you need to engage with the mess instead of trying to oppress it with a system of absolute rules.
Beyond that, it looks like this system gives you full audit logs and lets you backtest changes, which means you should be able to analyse the impact of a rule change and keep a record of what decisions were made and how they were made.
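The audit-log point can be made concrete with a tiny sketch (the rules, field names, and log records here are invented for illustration, not the product's actual schema): replay logged decisions against a proposed rule change and see exactly which past applicants would have a different outcome.

```python
# Hypothetical sketch: replay an audit log against a proposed rule
# change to measure which past decisions would flip.
old_rule = lambda r: r["score"] >= 600
new_rule = lambda r: r["score"] >= 650 or r["on_time_ratio"] > 0.95

# Invented log records standing in for real audited decisions.
audit_log = [
    {"id": 1, "score": 620, "on_time_ratio": 0.90},
    {"id": 2, "score": 640, "on_time_ratio": 0.97},
    {"id": 3, "score": 700, "on_time_ratio": 0.80},
]

# IDs whose approve/deny outcome changes under the new rule.
flipped = [r["id"] for r in audit_log if old_rule(r) != new_rule(r)]
print(flipped)
```

That flipped-outcome list is the start of accountability: it tells you, per person, who a rule change would hurt or help before you deploy it.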
genuine question: hasn't that been done already with machine learning?
edit: your point is totally valid and I agree, but this type of cruelty is not new
I think you should take a moment to try and understand the product before attacking so strongly.