In March, health technology startup HeHealth debuted Calmara AI, an app proclaiming to be “your intimacy bestie for safer sex.” The app was heavily marketed to women, who were told they could upload a picture of their partner’s penis for Calmara to scan for evidence of a sexually transmitted infection (STI). Users would get an emoji-laden “Clear!” or “Hold!!!” verdict — with a disclaimer saying the penis in question wasn’t necessarily free of all STIs.
The reaction Ella Dawson, sex and culture critic, had when she first saw Calmara AI’s claim to provide “AI-powered scans [that] give you clear, science-backed answers about your partner’s sexual health status” can be easily summed up: “big yikes.” She raised the alarm on social media, voicing her concerns about privacy and accuracy. The attention prompted a deluge of negative press and a Los Angeles Times investigation.
The Federal Trade Commission was also concerned. The agency notified HeHealth, the parent company of Calmara AI, that it was opening an investigation into possibly fraudulent advertising claims and privacy concerns. Within days, HeHealth pulled its apps off the market.
HeHealth CEO Yudara Kularathne emphasized that the FTC found no wrongdoing and said that no penalties were imposed. “The HeHealth consumer app was incurring significant losses, so we decided to close it to focus on profitability as a startup,” he wrote over email, saying that the company is now focused on business-to-business projects with governments and NGOs mostly outside the United States.
More and more AI-powered sexual health apps have been cropping up, with no sign of slowing down. Some of the new consumer-focused apps are targeted toward women and queer people, who often have difficulty getting culturally sensitive and gender-informed care. Venture capitalists and funders see opportunities in underserved populations — but can prioritize growth over privacy and security.
Not today with the ongoing AI bullshit craze. There are legit AI models that can identify medical conditions, but I’d worry that this product is just an LLM wearing a stethoscope.
Forget about the AI. There is so much wrong with this.
- If it gets it wrong, whether a false positive or a false negative, there are serious consequences. This means there is no way it can “play it safe” with an answer.
- I don’t trust that it’s not capturing and storing data. The risk of such highly personal data being leaked is completely unwarranted.
- It works from a photo, therefore it’s unlikely to pick up much more than you can see by eye. You’d be better off just learning what to look for.
- It won’t detect STIs with no visual symptoms, so it provides an entirely false sense of confidence, potentially increasing the risks of those STIs to the general population.
- Let’s say it works perfectly, and the AI algorithm runs completely locally with no data being transferred to the cloud or captured/stored. Do you want someone you don’t trust to be honest about STIs taking a photo of your genitals?
That’s what I can come up with in 10 seconds. Feel free to add to the list; I’m sure it’s not complete…
The Gradient Descent, Hallucination, and Insufficient Training Data jokes just write themselves.
When I had a camera shoved up my fundament, an AI was watching the camera feed to learn how to spot potentially cancerous growths, precancerous polyps, etc. Lucky AI. Apparently the process is that it scans the feed, highlights on-screen areas it wants the radiologist to take another look at, and they then verify whether it’s a real issue or nothing to worry about. In that process flow I’m entirely comfortable with it being a second pair of eyes for the radiologist.
Eventually I guess it could replace the radiologist, but I’d want to see a 100% success rate demonstrated over a sufficiently long test period before that could happen.