
RunCat 365

https://github.com/Kyome22/RunCat365
1•Shinobuu•1m ago•1 comments

Apple expands supply chain with $500M commitment to American rare earth magnets

https://www.apple.com/newsroom/2025/07/apple-expands-us-supply-chain-with-500-million-usd-commitment/
1•haunter•2m ago•0 comments

Show HN: MCP Adapter – Universal gateway for AI tool coordination

https://github.com/startakovsky/mcp-adapter
1•tartakovsky•4m ago•0 comments

Ask HN: Is someone else also having some issue posting comments in Reddit?

1•hassanahmad•7m ago•1 comments

Open-Source BCI Platform with Mobile SDK for Rapid Neurotech Prototyping

https://www.preprints.org/manuscript/202507.1198/v1
1•GaredFagsss•8m ago•0 comments

Why 7 hours of sleep feels different in Japan vs. America

https://timesofindia.indiatimes.com/life-style/health-fitness/health-news/viral-post-breaks-down-why-7-hours-of-sleep-feels-different-in-japan-vs-america/articleshow/122485848.cms
1•e2e4•9m ago•0 comments

Hijri Calendar for the Modern World

https://hijricalendar.info
1•guccibase•10m ago•0 comments

Could AI slow science? Confronting the production-progress paradox

https://www.aisnakeoil.com/p/could-ai-slow-science
1•randomwalker•15m ago•0 comments

Tesla engineer admits Tesla didn't maintain Autopilot crash records before 2018

https://electrek.co/2025/07/16/tesla-engineer-admits-tesla-didnt-maintain-autopilot-crash-records-amid-trial-over-fatal-crash/
3•TheAlchemist•16m ago•1 comments

Marvel Heroes Height Comparison [video]

https://www.youtube.com/watch?v=flp4jBtKtp4
1•ohjeez•18m ago•0 comments

When novels mattered – NY Times

https://www.nytimes.com/2025/07/10/opinion/literature-books-novelists.html
2•richardatlarge•21m ago•1 comments

New AI agent that clicks, types, and responds – without any code changes

https://smart.sista.ai/
1•mahmoudzalt•22m ago•0 comments

The New Beeper

https://blog.beeper.com/2025/07/16/the-new-beeper/
2•nimar•26m ago•0 comments

Why climate change alarmism failed

https://www.washingtonexaminer.com/in_focus/3471182/why-climate-change-alarmism-failed/
1•bilsbie•26m ago•1 comments

In 2000 a Man Claimed to Be from 2036. Some Predictions Have Been Coming True

https://www.news18.com/viral/in-2000-this-man-claimed-to-be-from-2036-some-of-his-predictions-have-been-coming-true-ws-ab-9230406.html
3•austinallegro•29m ago•2 comments

Crates.io Implements Trusted Publishing Support

https://socket.dev/blog/crates-launches-trusted-publishing
1•feross•32m ago•0 comments

'It's just better' Trump says Coca-Cola to change key US ingredient

https://www.bbc.com/news/articles/czxe59zl8qzo
6•djkivi•34m ago•4 comments

Show HN: A 'Choose Your Own Adventure' Written in Emacs Org Mode

https://tendollaradventure.com/sample/
9•dskhatri•35m ago•1 comments

Videos from the Amazon Reveal an Unexpected Animal Friendship

https://www.nytimes.com/2025/07/15/science/ocelots-opossums-friends-video.html
1•Petiver•41m ago•0 comments

Show HN: Tech Debt Game – Launch a programming language before the deadline

https://techdebtgame.com
2•kyrylo•44m ago•0 comments

I improved funny-bunnies.fleo.at and it is my birthday

https://funny-bunnies.fleo.at
1•interbr•47m ago•0 comments

Tsunami warning issued in Southern Alaska after 7.3 magnitude earthquake

https://www.tsunami.gov/
6•notmysql_•50m ago•0 comments

Babies made using three people's DNA are born free of mitochondrial disease

https://www.bbc.com/news/articles/cn8179z199vo
6•1659447091•50m ago•2 comments

Marin – open lab for building foundation models

http://marin.community/
1•andsoitis•51m ago•0 comments

Claude Is Back on Windsurf

https://twitter.com/windsurf_ai/status/1945599013954490523
14•agtestdvn•51m ago•2 comments

Delivering the Missing Building Blocks for Nvidia CUDA Kernel Fusion in Python

https://developer.nvidia.com/blog/delivering-the-missing-building-blocks-for-nvidia-cuda-kernel-fusion-in-python/
2•ashvardanian•51m ago•1 comments

Project Servfail: One Year In

https://sdomi.pl/weblog/25-servfail-first-year/
2•LorenDB•54m ago•0 comments

Lessons from an Olive Oil Sommelier

https://www.thetimes.com/life-style/luxury/article/lessons-from-an-olive-oil-sommelier-times-luxury-jg95mqvxs
1•austinallegro•56m ago•0 comments

Mark Cuban: Why AI Will Create More Jobs, Not Fewer

https://www.aol.com/billionaire-mark-cuban-why-ai-182508004.html
2•Bluestein•58m ago•1 comments

Anthropic hired back two of its employees – two weeks after they left for Cursor

https://www.theverge.com/ai-artificial-intelligence/708521/anthropic-hired-back-two-of-its-employees-just-two-weeks-after-they-left-for-a-competitor
5•alwillis•1h ago•0 comments

PyPI Prohibits inbox.ru email domain registrations

https://blog.pypi.org/posts/2025-06-15-prohibiting-inbox-ru-emails/
113•miketheman•3h ago

Comments

Scene_Cast2•3h ago
Oh hey, I was the person who reported this.
mananaysiempre•2h ago
I have to say that I don't understand the approach. On one hand, addresses @inbox.ru are administered by Mail.Ru, the largest Russian free email host (although I have the impression that its usage is declining), so quite a few (arguably unwise) real people might be using them (I’ve actually got one that I haven’t touched in a decade). On the other, the process for getting an address @inbox.ru is identical to getting one @mail.ru and IIRC a couple of other alternative domains, but only this specific one is getting banned.
takipsizad•2h ago
pypi has blocked signups from outlook before. I don't think they care about the impact it creates
dewey•2h ago
I know a bunch of sites that do that, and the problem is usually that registration emails get flagged by Outlook and never arrive, causing a lot of support burden. It's easier to then nudge people in the direction of Gmail or other providers that don't have these issues.
takipsizad•2h ago
it was in the context of blocking email providers because of malicious mass signups (https://blog.pypi.org/posts/2024-06-16-prohibiting-msn-email...)
jrockway•1h ago
I've been down that road before. Blocking Outlook and Protonmail filters out 0% of legitimate users and 75% of bots. You do what you can so you're not always 1 step behind.
miketheman•1h ago
Thank you!
f311a•1h ago
Do you have special access, or can this kind of thing be tracked from the outside somehow? It could be a fun project to detect this kind of abusive behavior automatically.
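For the "from the outside" angle, here is a rough sketch (not from the thread) of what external monitoring could look like: polling PyPI's public feed of newly created projects and flagging bursts. It assumes the RSS feed at https://pypi.org/rss/packages.xml and an arbitrary threshold; the feed only exposes the most recent entries, so this only catches coarse spikes, and PyPI admins obviously have richer signals (emails, IPs) internally.

```python
# Rough sketch: watch PyPI's public "newest packages" RSS feed for bursts of
# new projects. Feed URL and the burst threshold are assumptions for
# illustration, not a proven detection method.
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED = "https://pypi.org/rss/packages.xml"
BURST_THRESHOLD = 50  # new projects per poll interval considered suspicious

def fetch_new_projects() -> set[str]:
    with urllib.request.urlopen(FEED, timeout=30) as resp:
        tree = ET.parse(resp)
    # Each <item><title> names a newly created project.
    return {item.findtext("title", "") for item in tree.iter("item")}

def watch(poll_seconds: int = 3600) -> None:
    seen: set[str] = set()
    while True:
        fresh = fetch_new_projects() - seen
        if len(fresh) >= BURST_THRESHOLD:
            print(f"possible mass registration: {len(fresh)} new projects this interval")
        seen |= fresh
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```
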
ajross•2h ago
The whole model is broken. The NPM/PyPI idea (vscode extensions got in similar trouble recently) of "we're just a host, anyone who wants to can publish software through us for anyone in the world to use with a single metaphorical click" is just asking for this kind of abuse.

There has to be a level of community validation for anything automatically installable. The rest of the world needs to have started out by pulling and building/installing it by hand and attesting to its usefulness, before a second level (e.g. Linux distro packagers) decide that it's good software worth supplying and supporting.

Otherwise, at best the registries end up playing whack-a-mole with trickery like this. At worst we all end up pulling zero days.

jowea•2h ago
And who is going to do all this vetting and with what budget?
ajross•2h ago
Right now: we are. And we're collectively paying too much for a crap product as it stands.

Debian figured this out three decades ago. Maybe look to them for inspiration.

zahlman•2h ago
> And we're collectively paying too much for a crap product as it stands.

Last I checked, we pay $0 beyond our normal cost for bandwidth, and their end of the bandwidth is also subsidized.

notatallshaw•1h ago
If you want to offer a PyPI competitor whose value is that all packages are vetted or reviewed, nothing stops you: the API that Python package installer tools use to interact with PyPI is specified: https://packaging.python.org/en/latest/specifications/simple...

There are a handful of commercial competitors in this space, but in my experience this ends up only being valuable for a small % of companies. Either a company is small enough that it wants to be agile and doesn't have time for a third party to vet or review the packages it wants to use, or it is big enough that it builds its own internal solution. And single users tend to get annoyed when something doesn't work and stop using it.
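As an aside, a minimal sketch of what that specified interface looks like from the installer side, using the JSON form of the Simple Repository API (PEP 691); any index serving the same shape can be dropped in via pip's --index-url:

```python
# Minimal sketch of querying the Simple Repository API (PEP 691 JSON form),
# the same interface installers use against PyPI or any compatible index.
import json
import urllib.request

def list_files(project: str, index: str = "https://pypi.org/simple") -> list[str]:
    req = urllib.request.Request(
        f"{index}/{project}/",
        headers={"Accept": "application/vnd.pypi.simple.v1+json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.load(resp)
    # Each entry describes one distribution file (sdist or wheel).
    return [f["filename"] for f in data["files"]]

print(list_files("requests")[:5])
```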

ajross•21m ago
Right. That's the economic argument: hosting anonymously-submitted/unvetted/insecure/exploit-prone junkware is cheap. And so if you have a platform you're trying to push (like Python or Node[1]) you're strongly incentivized to root your users simply because if you don't your competitors will.

But it's still broken.

[1] Frankly even Rust has this disease with the way cargo is managed, though that remains far enough upstream of the danger zone to not be as much of a target. But the reckoning is coming there at some point.

em-bee•20m ago
that's like suggesting someone complaining about security issues should fork libxml or openssl because the original developers don't have enough resources to maintain their work. the right answer is that as users of those packages we need to pool our resources and contribute to enable the developers to do a better job.

for pypi that means raising funds that we can contribute to.

so instead of arguing that the PSF doesn't have the resources, they should go and raise them. do some analysis on what it takes, and then start a call for help/contributions. to get started, all it takes is to recognize the problem and put fixing it on the agenda.

woodruffw•12m ago
> so instead of arguing that the PSF doesn't have the resources, they should go and raise them

The PSF has raised resources for support; the person who wrote this post is working full-time to make PyPI better. But you can't staff your way out of this problem; PyPI would need ~dozens of full time reviewers to come anywhere close to a human-vetted view of the index. I don't think that's realistic.

perching_aix•2h ago
Could force package publishers to review some number of other random published packages every so often. (Not a serious pitch.) Wouldn't create any ongoing extra cost (for them) I believe?
akerl_•2h ago
Do you have a serious pitch?
perching_aix•2h ago
Not really. The people who have an actual direct stake in this can go make that happen, I'm sure they're much better positioned to do so anyhow. For me, it's a fun thing to ponder, but that's all.
akerl_•2h ago
It looks like they are deciding how to approach this. The article you’re commenting on is about how they identified malicious behavior and then blocked that behavior.

It seems odd to pitch suggestions for other things they ought to do but then couch it with “well I’m not being serious” in a way that deflects all actual discussion of the logistics of your suggestion.

perching_aix•2h ago
Yeah, so I've read. Good for them, I suppose.

> in a way that deflects all actual discussion of the logistics of your suggestion

You seem to be mistaken there: I very much welcome a discussion on it. Keyword being "discussion". Let's just not expect an outcome any more serious than "wow, I sure came up with something pretty silly / vaguely interesting", or put forward framings like "I'm telling them what to do or what not to do".

em-bee•1h ago
not reviewing submissions is a big problem. i know i can trust linux distributions because package submissions are being reviewed. and especially becoming a submitter is an involved process.

if anyone can just sign up then how can i trust that? being maintained by the PSF they should be able to come up with the funding to support a proper process with enough manpower to review submissions. seems rubygems suffers from the same problem, and the issues with npm are also well known.

this is one of those examples where initially these services were created with the assumption that submitters can be trusted, and developers/maintainers work without financial support. linux distributions managed to build a reliable review process, so i hope these repositories will eventually be able to as well.

woodruffw•1h ago
> not reviewing submissions is a big problem. i know i can trust linux distributions because package submissions are being reviewed. and especially becoming a submitter is an involved process.

By whom? I've had a decent number of projects of mine included in Linux distributions, and I don't think the majority of my code was actually reviewed for malware. There's a trust relationship there too, it's just less legible than PyPI's very explicit one.

(And I'm not assigning blame for that: distros have similar overhead problems as open source package indices do. I think they're just less visible, and people assume lower visibility means better security for some reason.)

em-bee•1h ago
which distributions? and did you submit the packages yourself or did someone else from the distribution do the work?

yes, there is a trust relationship, but from what i have seen about the submission process in debian, you can't just sign up and start uploading packages. a submitter receives mentoring and their initial packages are reviewed until it can be established that the person learned how to do things and can be trusted to handle packages on their own. they get GPG keys to sign the packages, and those keys are signed by other debian members. possibly even an in person meeting is required if the person is not already known to their mentors somehow. every new package is vetted too, and only updates are trusted to the submitter on their own once they completed the mentoring process.

fedora and ubuntu should be similar. i don't know about others. in the distribution where i contributed (foresight) we only packaged applications that were known and packaged in other distributions. sure, if an app developer went rogue, we might not have noticed, and maybe debian could suffer from the same fate but that process is still much more involved than just letting anyone register an account and upload their own packages without any oversight at all.

jowea•48m ago
I've contributed some packages to NixOS; I didn't do code review, and as far as I can tell nothing told me I had to. I assume that if I had said the code was hosted at mispeledwebsite.co.cc/foo in the derivation instead of github.com/foo/foo, or done something obviously malicious like that, the maintainers would have sanity checked and stopped me, but I don't think anyone does code review for a random apparently useful package. And if github.com/foo/foo is malicious code then it's going to go right through.

And isn't the Debian mentoring and reviewing merely about checking if the package is properly packaged into the Debian format and properly installs and includes dependencies etc?

I don't think there is anything actually stopping some apparently safe code from ending up in Linux distros, except the vague sense of "given enough eyeballs, all bugs are shallow", i.e. that with everyone using the same package, someone is going to notice something, somehow.

woodruffw•40m ago
> did someone else from the distribution do the work?

Someone else.

To be clear: I find the Debian maintainers trustworthy. But I don't think they're equipped to adequately review the existing volume of packages to the degree that I would believe an assertion of security/non-maliciousness, much less the volume that would come with re-packaging all of PyPI.

(I think the xz incident demonstrated this tidily: the backdoor wasn't caught by distro code review, but by a performance regression.)

ajross•48m ago
> I don't think the majority of my code was actually reviewed for malware.

That's not the model though. Your packages weren't included ab initio, were they? They were included once a Debian packager or whoever decided they were worth including. And how did that happen? Because people were asking for it, already having consumed and contributed and "reviewed" it. Or if they didn't, an upstream dependency of theirs did.

The point is that the process of a bunch of experts pulling stuff directly from github in source form and arguing over implementation details and dependency alternatives constitutes review. And quite frankly really good review relative to what you'd get if you asked a "security expert" to look at the code in isolation.

It's not that it's impossible to pull one over on the global community of python developers in toto. But it's really fucking hard.

woodruffw•1h ago
I don't think the model is broken; a latent assumption within the model has always been that you vet your packages before installing them.

The bigger problem is that people want to have their cake and eat it too: they want someone else to do the vetting for them, and to receive that added value for no additional cost. But that was never offered in the first place; people have just sort of assumed it as open source indices became bigger and more important.

ajross•53m ago
> a latent assumption within the model has always been that you vet your packages before installing them

That is precisely the broken part. There are thousands of packages in my local python venv. No, I didn't "vet" them, are you serious? And I'm a reasonably expert consumer of the form!

jowea•45m ago
Just have faith in Linus' Law.
woodruffw•36m ago
On re-read, I think we're in agreement -- what you're saying is "broken" is me saying "people assuming things they shouldn't have." But that's arguably not a reasonable assumption on my part either, given how extremely easy we've made it to pull arbitrary code from the Internet.
extraduder_ire•45m ago
Has anyone tried calculating pagerank numbers for such packages, since so many of them depend on other packages, and most of these repositories report install counts?

This is easy to game, and in some ways has been pre-gamed. So it wouldn't really be a measure of validation, but would be interesting to see.
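For what it's worth, the core computation is small. Here is a toy sketch of PageRank over a made-up dependency graph, with rank flowing toward heavily depended-on packages; a real run would need the index's dependency metadata and, as noted, is easy to game:

```python
# Toy PageRank over a fabricated dependency graph. Every node is assumed to
# appear as a key in `deps`; nodes with no dependencies spread their rank
# evenly (the usual dangling-node treatment).
def pagerank(deps: dict[str, list[str]], d: float = 0.85, iters: int = 50) -> dict[str, float]:
    nodes = set(deps) | {p for ds in deps.values() for p in ds}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for pkg, ds in deps.items():
            if ds:
                share = d * rank[pkg] / len(ds)
                for dep in ds:
                    new[dep] += share
            else:
                for n in nodes:
                    new[n] += d * rank[pkg] / len(nodes)
        rank = new
    return rank

toy = {"myapp": ["requests", "numpy"], "requests": ["urllib3"], "numpy": [], "urllib3": []}
print(sorted(pagerank(toy).items(), key=lambda kv: -kv[1]))
```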

nerevarthelame•2h ago
This is the first time I've heard of slopsquatting, but it does seem like a major and easily exploitable risk.

However, blocking an email domain will dissuade only the lowest effort attacker. If the abusers think slopsquatting is effective, they'll easily be able to find (or create) an alternative email provider to facilitate it.

And assuming that the attacks will persist, sometimes it's better to let them keep using these massive red flags, like an inbox.ru email, so that it remains a reliable way to separate the fraudulent from legitimate activity.
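On the slopsquatting side, the cheapest mitigation sits on the consumer end: sanity-check an AI-suggested package name before installing it. A hedged sketch using PyPI's public JSON API, with arbitrary heuristics:

```python
# Hedged pre-install sanity check against slopsquatting: confirm a suggested
# package name exists on PyPI and isn't a days-old, thin upload. Thresholds
# and the interpretation of the result are left to the reader.
import json
import urllib.request
from urllib.error import HTTPError

def check_name(name: str) -> str:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            data = json.load(resp)
    except HTTPError as e:
        if e.code == 404:
            return "does not exist on PyPI (a hallucinated name an attacker could claim)"
        raise
    releases = data.get("releases", {})
    uploads = [f["upload_time"] for files in releases.values() for f in files]
    first = min(uploads) if uploads else "never"
    return f"{len(releases)} release(s), first upload {first}: review before installing"

print(check_name("requests"))
print(check_name("definitely-not-a-real-package-xyz"))
```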

halJordan•1h ago
Of course this is true. It's the worst reason to denigrate a proactive measure. Speeders buy radar detectors. Wife beaters buy their wives long sleeves. This complaint is levied all the time by everyone, which makes it low effort and not useful.
genidoi•1h ago
The problem with using random real-world situations as analogies for niches within software engineering is that they're not only (almost) always wrong, but always misrepresentative of the situation in its entirety.
reconnecting•2h ago
'tirreno guy' here.

You can use open-source security analytics (1) to detect fraudulent accounts instead of blocking domain names. Blocking domains only shows your system is fragile and will likely just shift the attackers to use other domains.

Feel free to contact us if you need assistance with setup.

(1) https://github.com/tirrenotechnologies/tirreno

PokemonNoGo•2h ago
Odd installation steps.
reconnecting•2h ago
Can you elaborate, please?
kassner•2h ago
composer install should be pretty much all one needs nowadays. Any install scripts (although you really shouldn't use them) can also be hooked into it.
lucb1e•54m ago
This requires running the install scripts with your shell permissions rather than with the webserver's permissions, if I'm not mistaken. I could see why one might prefer the other way, even if shared hosting is less common nowadays and shells more often an option
snickerdoodle12•2h ago
The instructions aren't all that unusual for PHP software, especially those that target shared hosting, but are unusual compared to most other software.

> Download a zip file and extract it "where you want it installed on your web server"

The requirements mention apache with mod_rewrite enabled, so "your web server" is a bit vague. It wouldn't work with e.g. `python -m http.server 8000`. Also, most software comes bundled with its own web server nowadays but I know this is just how PHP is.

> Navigate to http://your-domain.example/install/index.php in a browser to launch the installation process.

Huh, so anyone who can access my web server can access the installation script? Why isn't this a command line script, a config file, or at least something bound to localhost?

> After the successful installation, delete the install/ directory and its contents.

Couldn't this have been automated? Am I subject to security issues if I don't do this? I don't have to manually delete anything when installing any other software.

reconnecting•2h ago
This is not something specific to tirreno, as it's the usual installation process of any PHP application.

If there is an example of another approach, I will gladly take it into account.

snickerdoodle12•28m ago
> as it's the usual installation process of any PHP application

Maybe a decade ago. Look into composer.

kstrauser•2h ago
I'll side with you here. This gives attackers a huge window of time in which to compromise your service and configure it the way they want it configured.

In my recent experience, you have about 3 seconds to lock down and secure a new web service: https://honeypot.net/2024/05/16/i-am-not.html

lucb1e•57m ago
Wut? That can't have been a chance visit from a crawler unless maybe you linked it within those 3 seconds of creating the subdomain and the crawler visited the page it was linked from in that same second, or you/someone linked to it (in preparation) before it existed and bots were already constantly trying

Where did you "create" this subdomain, do you mean the vhost in the webserver configuration or making an A record in the DNS configuration at e.g. your registrar? Because it seems to me that either:

- Your computer's DNS queries are being logged and any unknown domains immediately get crawled, be it with malicious or white-hat intent, or

- Whatever method you created that subdomain by is being logged (by whoever owns it, or by them e.g. having AXFR enabled accidentally for example) and immediately got crawled with whichever intent

I can re-do the test on my side if you want to figure out what part of your process is leaky, assuming you can reproduce it in the first place (to within a few standard deviations of those three seconds at least; like if the next time is 40 seconds I'll call it 'same' but if it's 4 days then the 3 seconds were a lottery ticket -- not that I'd bet on those odds to deploy important software, but generally speaking about how aggressive-or-not the web is nowadays)

kstrauser•11m ago
Consensus from friends after I posted that is that attackers monitor the Let's Encrypt transparency logs and pounce on new entries the moment they're created. Here I was using Caddy, which by default uses LE to create a cert on any hosts you define.

I can definitely reproduce this. It shocked me so much that I tried a few times:

1. Create a new random hostname in DNS.

2. `tail -f` the webserver logs.

3. Define an entry for that hostname and reload the server (or do whatever your webserver requires to generate a Let's Encrypt certificate).

4. Start your stopwatch.
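For context on the mechanism: every Let's Encrypt certificate is published to public Certificate Transparency logs, so new hostnames become visible to anyone watching. A sketch (not from the thread) of the lazy version of that watching, polling crt.sh; the JSON field names are assumptions based on its output, and serious scanners stream the CT logs directly rather than poll:

```python
# Sketch of spotting fresh hostnames via Certificate Transparency, using
# crt.sh's JSON output. Field names ("not_before", "name_value") are assumed
# from crt.sh; real-time attackers subscribe to the CT log stream instead.
import json
import urllib.request

def recent_certs(domain: str) -> list[dict]:
    # %25 is a URL-encoded '%' wildcard, i.e. "*.domain"
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=60) as resp:
        return json.load(resp)

for entry in recent_certs("example.com")[:5]:
    print(entry.get("not_before"), entry.get("name_value"))
```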

LeifCarrotson•2h ago
> Huh, so anyone who can access my web server can access the installation script?

"Obviously", the server should not be accessible from the public Internet while you're still doing setup. I assume it should still behind a firewall and you're accessing it by VPN. Only after you're happy with all the configuration and have the security locked down tight would you publish it to the world. Right?

snickerdoodle12•29m ago
Obviously you should lock it down. I'm just going off these instructions and how they might be interpreted.
pests•2h ago
I'd say it's bog standard for PHP apps and has been for a while. WordPress has a similar install flow. Docker images are provided tho.
reconnecting•2h ago
Yes, Matomo/Piwik, WordPress, and ProcessWire have more or less the same installation steps, but maybe we missed something along the way.
theamk•2h ago
Totally normal for PHP software, and that's a primary reason why PHP apps have such a bad security reputation. Note:

- The application code itself and system configs are modifiable by the web handler itself. This is needed to allow web-based "setup.php" to work, but also means that any sort of RCE is immediately "fatal" - no need for kernel/sandbox exploit, if you can get PHP to execute remote code you can backdoor existing files as much as you want.

- The "logs", "tmp", "config" etc.. directories are co-located with code directory. This allows easy install via unzip, but means that the code directory must be kept accessible while operation. It's not easy to lock it down if you want to prevent possible backdoors from previous options.

Those install methods have been embraced by the PHP community and make exploits so much easier. That's why you always hear about "php backdoors" and not about "go backdoors" or "django backdoors" - with other languages, you version-upgrade (possibly automatically) and things work and exploits disappear. With PHP, you version-upgrade... by extracting the new zip over the same location. If you were hacked, this basically keeps all the hacks in place.

Kinda weird to see this from some self-claimed "security professionals" though, I thought they'd know better :)

reconnecting•1h ago
Fair critique on traditional PHP deployment.

However tirreno shouldn't be public-facing anyway. Production apps forward events via API on local network, security teams access dashboard over VPN.

Perhaps we will add this recommendation to the documentation to avoid any confusion. Thanks for the clarification.

PokemonNoGo•1h ago
I kinda understood I was missing "something" when I commented, but I haven't used any PHP for over a decade and honestly it looked very... well, you said the rest. Thanks for the clarification. Very unfamiliar with modern PHP.
lucb1e•1h ago
What did you think you had missed? I'm not understanding

> but I haven't used any PHP for over a decade

This isn't modern PHP; this is the traditional installation method that I also used a decade ago. The only thing that could be older about it is to have a web-cron instead of a proper system cron line. Modern PHP dependency installation is to basically curl|bash something on the host system (composer iirc) rather than loading the code under the web server's user and running the install from there, as this repository suggests. Not that the parent comment is wrong about the risks that still exist in being able to dynamically pull third-party code this way and hosting secrets under the webroot.

reconnecting•1h ago
Correct, this isn't modern PHP. We aimed to keep overall code dependencies around ~10, and with modern frameworks this number would be multiplied heavily.
lucb1e•1h ago
Care to elaborate? They seem bog-standard to me
lucb1e•1h ago
Blocking providers makes sense since they can talk to the human that is doing the abuse. It's their customer after all

Like with IP ranges that send a lot of spam/abuse, it's the provider's space in the end. If the sender has no identification (e.g. User-Agent string is common for http bots) and the IP space owner doesn't take reasonable steps, the consequence is (imo) not to guess who may be human and who may be a bot, but to block the IP address(es) that the abuse is coming from. I remember our household being blocked once when I, as a teenager, bothered a domain squatter who was trying to sell a normal domain for an extortionary price. Doing a load of lookups on their system, I couldn't have brought it down from an ADSL line but apparently it was an unusual enough traffic spike to get their attention, as was my goal, and I promptly learned from the consequences. We got unblocked some hours after my parent emailed the ISP saying it wouldn't happen again (and it hasn't)

You don't have to look very far on HN to see the constant misclassifications of people as bots now that all the blocking has gotten seven times more aggressive in an attempt to gatekeep content and, in some cases, protect from poorly written bots that are taking it out on your website for some reason (I haven't had the latter category visit my website yet, but iirc curl/Daniel mentioned a huge outbound traffic volume to one scraper)

reconnecting•1h ago
I like the part about leaving the neighborhood blocked from internet access. Did neighbours find out that it was because of you?

However, email accounts could be stolen, and this makes the email provider a victim as well.

This particular case sounds very simple, and I'm quite confident that if we dig further, it's highly possible that all accounts use some pattern that would be easy to identify and block without hurting legitimate users.

lucb1e•52m ago
Neighbors? No, household; the ISP can't see into the house's network to tell which MAC address did it, so they blocked the subscriber line that served my parents' household (partially, at least: you could still visit info pages about the block on their website and contact them by email).

Edited to add:

> this makes the email provider a victim as well

Sure, but they have to deal with a hijacked account anyway, better to tackle it at the source. I'm also not saying to block the whole provider right away, at least not if you can weather the storm for a business day while you await a proper response from their side, just to use blocks when there is nobody steering the ship on their end

lysace•2h ago
I don't understand why this is newsworthy. Spam never ends.
perching_aix•2h ago
Because of:

> See a previous post for a previous case of prohibiting a popular email domain provider.

lysace•2h ago
That was outlook.com/hotmail.com. So? Incompetent/malicious/disengaged mail providers come in all shapes and forms.
perching_aix•2h ago
The implication is that this other email host also being one of the popular ones means there'll be a more widespread user impact than when they block smaller providers. So just like with Outlook, they put out this statement on why they're doing this.
lysace•2h ago
Ah, I see your point.

Although: I don't think the kind of developers that use low quality email providers like that follow HN.

Edit: Remember those 7+ hours back in 1999 when all Microsoft Hotmail accounts were wide open for perusal?

https://time.com/archive/6922796/how-bad-was-the-hotmail-dis...

> Yesterday a Swedish newspaper called Expressen published the programmer’s work, a simple utility designed to save time by allowing Hotmail users to circumvent that pesky password verification process when logging into their accounts. The result? As many as 50 million Hotmail accounts were made fully accessible to the public. Now that the damage has been done, what have we learned?

> It wasn’t until the lines of code appeared in Expressen that people realized how vulnerable Hotmail really was. The utility allowed anybody who wanted to to create a Web page that would allow them log into any Hotmail account. Once the word was out, dozens of pages such as this one were created to take advantage of the security hole. Unfortunate programmers at Microsoft, which owns Hotmail, were rousted out of bed at 2 AM Pacific time to address the problem. By 9 AM Hotmail was offline.

https://www.theregister.com/1999/08/30/massive_security_brea...

https://www.theguardian.com/world/1999/aug/31/neilmcintosh.r...

https://www.salon.com/1999/09/02/hotmail_hack/

reconnecting•2h ago
Online fraud will never end, but it is possible to make it much more expensive and shift attackers to other victims.
nzeid•2h ago
I don't understand how a mere account signup is the bar for publishing packages. Why not queue the first few publishes on new accounts for manual review?
stavros•2h ago
Probably because that would be too expensive for PyPI.
akerl_•2h ago
Who would do the manual review?
vips7L•1h ago
A staffer from the Python foundation? This is how maven central works. Someone physically verifies that you own the reverse domain of your package.
akerl_•1h ago
That’s basically no validation at all. Python doesn’t even have that kind of namespacing to need to validate.

The kind of validation being discussed here would take way more than “a staffer”.

nzeid•46m ago
I mean... don't let perfect be the enemy of good?

I'm insisting that even the barest minimum of human/manual involvement solely on account signup would be a major security improvement.

It would be exhausting to have to audit your entire dependency tree like your life depended on it just to do the most mundane of things.

akerl_•43m ago
This isn’t about perfect vs good.

The thing you’re suggesting is outright not possible given the staffing that the Python maintainers have.

woodruffw•15m ago
Murky security model for domain validation aside, how does that ensure the honesty of the uploaded package?

(So much of supply chain security is people combining these two things, when we want both as separate properties: I both want to know a package's identity, and I want to know that I should trust it. Knowing that I downloaded a package from `literallysatan.com` without knowing that I should trust `literallysatan.com` isn't good enough!)

zahlman•2h ago
PyPI's human resources are extremely strained. (The technical side also only exists thanks to Fastly's generosity.)
Sohcahtoa82•1h ago
Because that would easily get DoS'd.

Any time you introduce humans manually reviewing things, the attackers win instantly by just spamming it with garbage.

Tiberium•2h ago
I'm really not following -- why does the ban specifically focus on a single domain instead of attempting to solve the core issue? Do the maintainers not know that accounts for any big email provider (gmail, outlook, you name it) can be bought or created for very, very cheap? That is obviously what the attackers will now do after this ban.

The blog post references [0] which makes it seem like the maintainers do, in fact, just ban any email providers that attackers use instead of trying to solve the issue.

[0] https://blog.pypi.org/posts/2024-06-16-prohibiting-msn-email...

snickerdoodle12•2h ago
What is the core issue and how would you solve it?
WmWsjA6B29B4nfk•2h ago
It’s funny they are talking about low hundreds of emails. This is what a single properly instructed human can create with any provider in a few hours, no bots needed.
bobbiechen•2h ago
Agreed, I thought it was going to be something automated, but 250 accounts in 7 hours seems pretty manual. That does make it harder to stop.

* 2025-06-09 first user account created, verified, 2FA set up, API Token provisioned

* 2025-06-11 46 more user accounts created over the course of 3 hours

* 2025-06-24 207 more user accounts created over the course of 4 hours

I do run https://bademails.org, powered by the same disposable-email-domains project, and I'll be the first to say that it only cuts out the laziest of attempts. Anyone even slightly serious has cheap alternatives (25 to 100+ accounts for $1 on popular email hosts).
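For anyone unfamiliar with that project, a minimal sketch of the kind of check it enables: compare a signup address's domain against the published blocklist. The raw-file URL is an assumption based on the repository's layout (file name disposable_email_blocklist.conf, default branch guessed):

```python
# Minimal sketch: reject signups whose email domain appears in the
# disposable-email-domains blocklist. The raw URL/branch is an assumption.
import urllib.request

BLOCKLIST_URL = (
    "https://raw.githubusercontent.com/disposable-email-domains/"
    "disposable-email-domains/master/disposable_email_blocklist.conf"
)

def load_blocklist() -> set[str]:
    with urllib.request.urlopen(BLOCKLIST_URL, timeout=30) as resp:
        text = resp.read().decode()
    return {line.strip().lower() for line in text.splitlines() if line.strip()}

def is_disposable(email: str, blocklist: set[str]) -> bool:
    # Compare only the domain part, case-insensitively.
    return email.rsplit("@", 1)[-1].lower() in blocklist

blocklist = load_blocklist()
print(is_disposable("someone@mailinator.com", blocklist))
```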

ajsnigrutin•2h ago
Yep, and if a human is doing that, it's easy to switch over to a different email provider, until that gets banned too, then another, until you can't do anything without a gmail address anymore.
klntsky•2h ago
Google accounts are $0.50 on hstock. It's impossible to stop spam
ynbl_•1h ago
and mail.ru is not even a real internet service:

> Please enter the phone number you'll use to sign in to Mail instead of a password. This is more secure.

joecool1029•1h ago
That disposable-email-domains project is a good one. Over 10 years ago I did a dumb thing and pointed some of my domains' MX records to Mailinator before I used them for real email with Fastmail, and now the domains are flagged all over the place as disposable even though they haven't been used that way in ages.

This project has an allowlist you can submit a PR to so it doesn't get sucked back in every time people submit outdated lists of free email provider domains.

I've sent dozens of PR's to de-list my domains on various projects across Github and it's like fighting the sea, but the groups making opensource software to use these lists are at least very apologetic and merge the PR's quickly.

However, the biggest ASSHOLES are Riot Games. I've reached out to and they will not ban new user registrations on my domains. I eventually just had to block all the new account registration emails for League of Legends I was getting in my catch-all. The maintainer of the tool people were using to make new accounts was very responsive and apologetic (quickly merged my PR) but it doesn't stop people who used the old versions of it from continuing.