Hope it wasn't a downvote bot or an autonomous flagger gone rogue ;)
So right now, on first visit or page refresh, this whole page appears to be completely empty; not even ultra-light dead text shows up.
This almost never happens.
When it does, the reason is unfortunately usually that there are no constructive comments :(
When the page is like this, a user has to click to the right of your username to see the current number of comments under it; there is no other text on the page at all.
If someone else makes a top-level comment (like mine here) it should at least show up as something on the page, because it's not underneath your original top-level text.
Maybe re-submit at another time; some times and days are known to be better than others for gaining generous input on projects, compared to plain news.
It's probably also a good idea, as progress is made, to resubmit when you reach a major milestone you're proud of, especially if it answers some questions, even if it draws a few more. The pride in your work shows through and can make the interaction even more engaging than it already is.
You're doing what looks to me like a respectable job on a more difficult project than most would undertake; this shouldn't go unnoticed :)
Georgii007•6mo ago
Why?
A close friend of mine had a terrifying experience with someone she met online — charming on the surface, but turned out to be married and emotionally abusive. When we talked about it, she said: "I just wish there was a way to check if someone else had a bad experience with him."
And there wasn’t. No database. No quiet warnings. Just blind trust.
So I started working on a system where women can:
Search for a man’s profile by name, city, and age
Read anonymous reviews from other women
Set alerts for people they’re about to meet
Share their experience privately and help others stay safe
No public shaming. No full names. Just signals — from real women to real women — to help us date smarter and safer.
The MVP is live (early access): https://safe-spotlight-network.vercel.app/
If you're curious about online trust, anonymity, or building community-first products, I’d love your feedback — especially on:
How to verify data without violating privacy
Abuse prevention
Generalizing this to other social contexts (roommates? job references?)
Thanks for reading. Happy to answer any questions.
easyThrowaway•6mo ago
Georgii007•6mo ago
In the first phase, we are focusing on women because of the specific threats they most often face in online dating. But we will definitely add the ability to verify for all genders as well, including men caught in toxic relationships. It's an important and necessary part, and it's already planned for upcoming releases.
Thanks again for your kindness and sensible remark.
easyThrowaway•6mo ago
The risk of not having similar spaces for men is that they could end up in alt-right "manosphere" communities instead.
cherryteastain•6mo ago
Also, even if processing people's name, age, and city without consent were lawful, per GDPR you must permanently remove people's personal information from your app upon request, and probably ensure they are never added again.
How do you propose your application will be compatible with GDPR, or will you ban users from adding European men?
Georgii007•6mo ago
Right now, the app is still in pre-MVP stage — no real user data is being collected or processed yet. But as I build this, GDPR compliance is something I’m absolutely planning for before any public launch, especially if the app is ever made available in the EU or UK.
Here’s what I intend to do before launch:
The app won’t allow full names or uniquely identifying info like phone numbers or social links.
Reviews will be pseudonymous and moderated, with clear rules to avoid doxxing or identifiable posts.
I plan to implement a “Right to Erasure” mechanism, so anyone mentioned can request complete removal — and I’m also exploring a way for individuals to opt out preemptively, so future mentions get blocked automatically.
If proper GDPR compliance can’t be guaranteed by launch, I’ll restrict access from EU/UK regions until it’s fully ready.
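One way the preemptive opt-out could work is a suppression list keyed on a salted hash of the normalized profile fields, so the service never stores the opted-out person's plaintext details. A minimal sketch, assuming nothing about the actual implementation (all names and fields here are hypothetical):

```python
import hashlib
import hmac

# Server-side secret so hashes can't be reversed by a dictionary attack
# on common name/city/age combinations (assumed config value).
SERVER_SECRET = b"replace-with-a-real-secret"

def profile_key(name: str, city: str, age: int) -> str:
    """Normalize the identifying fields and HMAC them into an opaque key."""
    normalized = f"{name.strip().lower()}|{city.strip().lower()}|{age}"
    return hmac.new(SERVER_SECRET, normalized.encode(), hashlib.sha256).hexdigest()

# The opt-out list stores only hashes, never the plaintext PII.
opt_out_keys: set[str] = set()

def register_opt_out(name: str, city: str, age: int) -> None:
    """Record a preemptive opt-out without keeping the person's details."""
    opt_out_keys.add(profile_key(name, city, age))

def submission_allowed(name: str, city: str, age: int) -> bool:
    """Block new mentions of anyone who opted out preemptively."""
    return profile_key(name, city, age) not in opt_out_keys
```

A real deployment would also need fuzzy matching (nicknames, spelling variants), which plain hashing cannot provide; that trade-off between matching power and data minimization is exactly the GDPR tension being discussed here.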
I want this to be a tool that helps people feel safer — not one that puts others at risk. Privacy and ethics are core to this, and I’m grateful for feedback like yours that keeps the vision grounded and responsible.
discordance•6mo ago
1. If people with an unsafe reputation can opt themselves out, then won't women still be exposed to that unsafe match?
2. Breakups can be messy. How do you prevent false information being reported in?
3. If someone requests their information through GDPR and finds false reports on them, how will you handle the risk of defamation lawsuits?
Georgii007•6mo ago
2. The risks of false information are real. This is one of the most difficult parts of the project. I envision:
- Multi-stage moderation
- Plausibility signals (AI filtering, account activity)
- The ability to "reply" or mark a review as "challenged"
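The moderation and challenge flow above could be modeled as a small state machine. This is only a sketch under my own assumptions; the state names and transitions are hypothetical, not the project's actual design:

```python
from enum import Enum, auto

class ReviewState(Enum):
    SUBMITTED = auto()   # awaiting multi-stage moderation
    PUBLISHED = auto()   # passed moderation, visible to users
    CHALLENGED = auto()  # the person mentioned has disputed it
    REMOVED = auto()     # taken down after a dispute or GDPR request

# Allowed transitions in the hypothetical moderation pipeline.
TRANSITIONS = {
    ReviewState.SUBMITTED: {ReviewState.PUBLISHED, ReviewState.REMOVED},
    ReviewState.PUBLISHED: {ReviewState.CHALLENGED, ReviewState.REMOVED},
    ReviewState.CHALLENGED: {ReviewState.PUBLISHED, ReviewState.REMOVED},
    ReviewState.REMOVED: set(),
}

def transition(current: ReviewState, target: ReviewState) -> ReviewState:
    """Move a review to a new state, rejecting illegal jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

Making removal a terminal state, as sketched here, is one way to guarantee that a GDPR-erased review can never silently reappear.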
3. If someone requests their data via GDPR and finds false accusations, they should be able to challenge them, track the dispute, get a response, and have them removed. I plan to build in a dispute resolution mechanism, not only to comply with the law, but to ensure fairness for all parties.
I repeat: the project was conceived with good intentions, but these risks are not just hypothetical; they are structural. I am grateful that you have voiced them at this stage. If you have ideas or experience with similar systems, I would be very happy to talk.
dns_snek•6mo ago
Will you require hard evidence as a matter of policy before publishing any allegations? That's the only thing that would set you apart from many previous implementations of this idea which always seem to turn into hateful and defamatory platforms.
"Plausibility signals" like AI filtering are only good for filtering outright spam and bot activity, they're useless for determining truthfulness of claims being made. If your goal is to publish truthful information then please take some lessons from the legal system because truth can only be ascertained by analyzing evidence, not by evaluating the accuser's social standing ("account activity") and asking a random village idiot for their opinion on one side of the story ("AI filtering").
And the whole process of "people can respond and ask for it to be taken down" is not good enough because by that time the damage could already be done. People who have done nothing wrong have no reason to proactively monitor and "curate" their public image on sites like these, so they're unlikely to discover false accusations against them until they either experience social consequences of those false allegations or they're lucky enough that someone who knows them well discovers it early and sticks up for them.
cherryteastain•6mo ago
This is not enough. A US IP adding an EU/UK resident is also against GDPR if the added person's PII is involved. I am unsure how you can conclusively check if an added person is a UK/EU resident without committing an even worse GDPR violation, but just IP geoblocking EU/UK regions is not a solution here.
Georgii007•6mo ago
Right now, the project is in its early stages, and these discussions help us understand exactly where the boundaries of the acceptable lie and how to build in legal and ethical compliance from the start.
Here's what I plan to put in place:
- Strict content moderation and technical filters to exclude PII (personally identifiable information).
- A right to removal and revocation, with as simple a process as possible, including an auto-alert when someone tries to post the same person again.
- A separate legal page explaining restrictions, moderation principles, and responsibilities.
- As we develop, consultation with GDPR and digital-rights experts before launching anything in the EU/UK.
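A first pass at the "technical filters to exclude PII" could be a simple regex screen for phone numbers, emails, and social links before a post ever reaches moderators. This is a rough sketch under my own assumptions about which patterns are worth catching; real PII detection needs far more than this:

```python
import re

# Hypothetical patterns for obvious PII; a real filter would need
# locale-aware rules and a human-review fallback.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),       # email addresses
    re.compile(r"\+?\d[\d\s().-]{7,}\d"),         # phone-number-like digit runs
    re.compile(r"https?://\S+|www\.\S+", re.I),   # links to social profiles
    re.compile(r"@\w{3,}"),                       # social media handles
]

def contains_pii(text: str) -> bool:
    """Return True if the text matches any obvious PII pattern."""
    return any(p.search(text) for p in PII_PATTERNS)
```

A filter like this can only gate the obvious cases; as the thread points out, indirect identification (first name plus city plus age) is itself PII under GDPR, so regexes alone cannot make the platform compliant.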