We've fine-tuned Google's SigLIP vision model on ~47k images from 5 datasets (HAM10000, BCN20000, Fitzpatrick17k, PAD-UFES-20, DDI) to classify lesions as benign/malignant, estimate the specific condition (10-class), and provide triage recommendations.
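For anyone curious about the multi-task setup, here's a minimal sketch of how three task heads can sit on top of a shared vision embedding. The embedding dimension, the number of triage tiers, and all the layer names are assumptions for illustration, not our actual code:

```python
import torch
import torch.nn as nn

class LesionHeads(nn.Module):
    """Three task heads over a pooled image embedding from a vision
    backbone (e.g. SigLIP). Dimensions here are illustrative."""

    def __init__(self, embed_dim: int = 768, n_conditions: int = 10, n_tiers: int = 3):
        super().__init__()
        self.malignancy = nn.Linear(embed_dim, 1)            # benign vs. malignant
        self.condition = nn.Linear(embed_dim, n_conditions)  # 10-class condition estimate
        self.triage = nn.Linear(embed_dim, n_tiers)          # urgency tiers (count assumed)

    def forward(self, emb: torch.Tensor) -> dict:
        return {
            "malignant_risk": torch.sigmoid(self.malignancy(emb)).squeeze(-1),
            "condition_logits": self.condition(emb),
            "triage_logits": self.triage(emb),
        }

heads = LesionHeads()
out = heads(torch.randn(4, 768))  # batch of 4 pooled image embeddings
```

Sharing one backbone across heads keeps inference cheap on low-end phones, since the expensive vision forward pass runs once per image.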
Users can upload or take an image for a quick analysis. Results include risk scores, urgency tiers, and a condition estimate with a clear disclaimer that this is not an official diagnostic tool.
It's trained on clinical, dermoscopic, and smartphone photos to handle real-world image quality. It achieves 0.98 AUC, i.e. it ranks a malignant lesion above a benign one 98% of the time, with a <10% accuracy gap across Fitzpatrick skin tones, meaning it doesn't disproportionately fail on darker skin.
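In case it helps anyone evaluating similar tools: those two numbers are typically computed along these lines. This is a toy sketch with synthetic labels and scores, and the Fitzpatrick grouping below is illustrative, not our evaluation pipeline:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy data: y_true is 1 for malignant; y_score is the model's risk output.
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.3, 0.8, 0.9, 0.2, 0.7])

# AUC = probability that a random malignant case outranks a random benign one.
auc = roc_auc_score(y_true, y_score)

# Fairness gap: spread of per-group accuracy across skin-tone groups
# (group labels here are made up for the example).
groups = np.array(["I-II", "I-II", "III-IV", "III-IV", "V-VI", "V-VI"])
y_pred = (y_score >= 0.5).astype(int)
acc = {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
       for g in np.unique(groups)}
gap = max(acc.values()) - min(acc.values())
```

One caveat worth noting: overall AUC can stay high while a single subgroup underperforms, which is why we report the per-tone gap separately.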
Our goal is to provide a capable screening tool for low-resource environments. We're looking for feedback on how we can improve the user experience, and on any clinical blind spots we may have missed (for any of the dermatologists out there).
Thanks for taking the time.