Personally, I think the "network sharing" software bundled with apps should fall into the category of potentially unwanted applications, along with adware and spyware. All of the above "tag along" with something the user DID want to install, and quietly misuse the user's resources. Proxies like this definitely have an impact on metered/slow connections - I'm tempted to start Wireshark'ing my devices now to look for suspicious activity.
There should be a public repository of apps known to have these shady behaviours. Having done some light web scraping for archival/automation myself, it's a pity that legitimate scraping will become collateral damage in the anti-AI-botfarm fight.
Is the premise that users should not be allowed to use VPNs in order to participate in e-commerce?
I suspect that this goes for many different SDKs. Personally, I am really, really sick of hearing "That's a solved problem!", whenever I mention that I tend to "roll my own," as opposed to including some dependency, recommended by some jargon-addled dependency addict.
Bad actors love the dependency addiction of modern developers, and have learned to set some pretty clever traps.
The "network sharing" behavior in these SDKs is the sole purpose of the SDK. It isn't being included as a surprise along with some other desirable behavior. What needs to stop is developers including these SDKs as a secondary revenue source in free or ad-supported apps.
Doubt it. This is just one of many carrots used to entice developers to include dodgy software in their apps.
The problem is a lot bigger than these libraries. It's an endemic cultural issue. Much more difficult to quantify or fix.
Brings a new meaning to dependency injection.
My personal beef is that most of the time it acts like a set of hidden global dependencies, and the configuration of those dependencies, along with their lifetimes, becomes harder to understand because it isn't traceable in the source code.
To me it’s rather anti-functional. Normally, when you instantiate a class, the resulting object’s behavior only depends on the constructor arguments you pass it (= the behavior is purely a function of the arguments). With dependency injection, the object’s behavior may depend on some hidden configuration, and not even inspecting the class’s source code will tell you the source of that behavior, because there’s only an @Inject annotation without any further information.
Conversely, when you modify the configuration of which implementation gets injected for which interface type, you potentially modify the behavior of many places in the code (including, potentially, the behavior of dependencies your project may have), without having passed that code any arguments to that effect. A function executing that code suddenly behaves differently, without any indication of that difference at the call site, or traceable from the call site. That’s the opposite of the functional paradigm.
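To make that concrete, here is a deliberately crude Go sketch of the pattern being described (not modeled on any particular framework; all names are invented): a container-style binding map decides which implementation runs, so nothing at the call site hints that the behavior can change.

```go
package main

import "fmt"

// Logger has two implementations; which one runs is decided by the
// container-style binding map below, not by any argument at the call site.
type Logger interface{ Log(msg string) }

type stdoutLogger struct{}

func (stdoutLogger) Log(msg string) { fmt.Println("stdout:", msg) }

type networkLogger struct{}

func (networkLogger) Log(msg string) { fmt.Println("network:", msg) }

// bindings plays the role of the DI container configuration.
var bindings = map[string]Logger{"Logger": stdoutLogger{}}

type Service struct{}

// Handle resolves its Logger from the global bindings; nothing in this
// method or at its call sites reveals which implementation it gets.
func (Service) Handle() {
	bindings["Logger"].Log("handled request")
}

func main() {
	Service{}.Handle()                   // logs to stdout
	bindings["Logger"] = networkLogger{} // configuration changed elsewhere...
	Service{}.Handle()                   // ...same call, different behavior
}
```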
It sounds like you have a gripe with a particular DI framework and not the idea of Dependency Injection. Because
> Normally, when you instantiate a class, the resulting object’s behavior only depends on the constructor arguments you pass it (= the behavior is purely a function of the arguments)
With Dependency Injection this is generally still true, even more so than normal, because you're making the constructor's dependencies explicit in the arguments. If you have a class CriticalErrorLogger(), you can't directly tell where it logs to: is it using a flat file, stdout, or a network logger? If you instead have a class CriticalErrorLogger(logger io.Writer), then when you create it you know exactly what it's using to log, because you had to instantiate that dependency and pass it in.
Or, like Kortilla said, instead of passing in a class or struct you can pass in a function; using the same example, something like CriticalErrorLogger(fn write).
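A minimal Go sketch of that constructor-injection form (the type and method names are just illustrative): the logging destination is explicit at the point of construction, and swapping it is a one-line change at the call site.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// CriticalErrorLogger writes to whatever io.Writer it is constructed with;
// the destination is visible wherever the logger is created.
type CriticalErrorLogger struct {
	out io.Writer
}

func NewCriticalErrorLogger(out io.Writer) *CriticalErrorLogger {
	return &CriticalErrorLogger{out: out}
}

func (l *CriticalErrorLogger) Log(msg string) {
	fmt.Fprintf(l.out, "CRITICAL: %s\n", msg)
}

func main() {
	logger := NewCriticalErrorLogger(os.Stdout) // clearly logs to stdout
	logger.Log("disk full")
	// Passing an *os.File, a bytes.Buffer, or a network connection instead
	// changes the destination without touching CriticalErrorLogger itself.
}
```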
It's equivalent to partial application.
An uninstantiated class that follows the dependency injection pattern is equivalent to a family of functions with N + Mk arguments each, where N is the number of constructor parameters and Mk is the number of parameters of method k.
Upon instantiation, by passing the constructor arguments, you've created a family of functions, each with its own distinct set of Mk parameters and the same N arguments fixed in common.
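A rough Go illustration of the N + Mk point (names made up): fixing the constructor arguments is like partially applying a function, leaving only the per-call parameters.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// newLogFunc takes the N "constructor" arguments (here N = 2: a writer and
// a prefix) and returns a function that still takes the Mk per-call
// arguments (here Mk = 1: the message) - i.e. a partial application.
func newLogFunc(w io.Writer, prefix string) func(msg string) {
	return func(msg string) {
		fmt.Fprintf(w, "%s: %s\n", prefix, msg)
	}
}

func main() {
	logCritical := newLogFunc(os.Stdout, "CRITICAL") // the N args are fixed once
	logCritical("disk full")                         // only the Mk args remain
	logCritical("out of memory")
}
```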
I'm talking more specifically about Aspect-Oriented Programming and DI containers in OOP, though, which seemed pretty clever in theory but have a lot of issues in reality.
I take no issues with currying in functional programming.
But nobody seems to do this diligence. It’s just “we are in a rush. we need X. dependency does X. let’s use X.” and that’s it!
Wrong question. "Are you paid to audit this code?" And "If you fail to audit this code, whose problem is it?"
Have you ever worked anywhere that said "go ahead and slow down on delivering product features that drive business value so you can audit the code of your dependencies, that's fine, we'll wait"?
I haven't.
I imagine that e.g. Youtube would be happy to agree with this. Not that it would turn them against AI generally.
[Cloudflare](https://developers.cloudflare.com/cache/troubleshooting/alwa...) tags the Internet Archive as operating from 207.241.224.0/20 and 208.70.24.0/21, so disabling the bot-prevention framework on connections from those ranges should be enough.
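For example (a sketch, assuming you can hook a check like this into whatever fronts your site; the sample IPs are arbitrary), matching those two ranges is straightforward with Go's net/netip:

```go
package main

import (
	"fmt"
	"net/netip"
)

// archiveRanges are the two ranges mentioned above for the Internet Archive.
var archiveRanges = []netip.Prefix{
	netip.MustParsePrefix("207.241.224.0/20"),
	netip.MustParsePrefix("208.70.24.0/21"),
}

// isArchiveIP reports whether ip falls inside either range, e.g. to skip
// bot challenges for requests coming from it.
func isArchiveIP(ip netip.Addr) bool {
	for _, p := range archiveRanges {
		if p.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isArchiveIP(netip.MustParseAddr("207.241.229.39"))) // true
	fmt.Println(isArchiveIP(netip.MustParseAddr("203.0.113.7")))    // false
}
```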
What good is all the app vetting and sandbox protection in iOS (dunno about Android) if it doesn't really protect me from those crappy apps...
If you treat platforms like they are all-powerful, then that's what they are likely to become...
Maybe it's less convenient and more expensive and onerous. Do good things require hard work? Or did we expect everyone to ignore incentives forever while the trillion-dollar hyperscalers fought for an open and noble internet and then wrapped it in affordable consumer products to our delight?
It reminds me of the post here a few weeks ago about how Netflix used to be good, and "maybe I want a faster horse": we want things to be built for us easily, cheaply, and conveniently by companies, and we want those companies not to succumb to enshittification. But somehow, when the companies just follow the game theory and turn everything into a TikToky neural-networks-maximizing-engagement-infinite-scroll experience, it's their fault, and not ours for going with the easy path while hoping the corporations would not take the easy path.
We are working on an open-source fraud prevention platform [1], and detecting fake users coming from residential proxies is one of its use cases.
Trying to understand your product, where is it intended to sit in a network? Is it a standalone tool that you use to identify these IPs and feed into something else for blockage or is it intended to be integrated into your existing site or is it supposed to proxy all your web traffic? The reason I ask is it has fairly heavyweight install requirements and Apache and PHP are kind of old school at this point, especially for new projects and companies. It's not what they would commonly be using for their site.
Thank you for your question. tirreno is a standalone app that needs to receive API events from your main web application. It can work perfectly with 512GB of Postgres RAM or even less; however, in most cases we're talking about millions of events, which do require resources.
It's much easier to write a stable application when it has no dependencies and is built on mature technologies. tirreno is fairly 'boring software'.
Why jump to that conclusion?
If a scraper clearly advertises itself, follows robots.txt, and has reasonable backoff, it's not abusive. You can easily block such a scraper, but then you're encouraging stealth scrapers because they're still getting your data.
I'd block the scrapers that try to hide and waste compute, but deliberately allow those that don't. And maybe provide a sitemap and API (which besides being easier to scrape, can be faster to handle).
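A rough Go sketch of the non-stealth approach (the User-Agent string and URL are placeholders; a real scraper would also fetch and honor robots.txt, which takes a small parser or a third-party library):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// politeGet identifies the scraper clearly and backs off on 429/503
// responses instead of hammering the server.
func politeGet(url string) (*http.Response, error) {
	client := &http.Client{Timeout: 30 * time.Second}
	backoff := 2 * time.Second
	for attempt := 0; attempt < 5; attempt++ {
		req, err := http.NewRequest("GET", url, nil)
		if err != nil {
			return nil, err
		}
		// Advertise who we are and how to reach us.
		req.Header.Set("User-Agent", "example-archiver/1.0 (+https://example.org/bot)")
		resp, err := client.Do(req)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests && resp.StatusCode != http.StatusServiceUnavailable {
			return resp, nil
		}
		resp.Body.Close()
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff before retrying
	}
	return nil, fmt.Errorf("giving up on %s after repeated 429/503 responses", url)
}

func main() {
	resp, err := politeGet("https://example.org/sitemap.xml")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```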
TINFOIL: I've sometimes wondered if Azure or AWS use bots to push site traffic hits to generate money... they know you are hosted with them... they have your info... send out bots to drive micro-accumulation. Slow boil...
GCE is rare in my experience. Most bots I see are on AWS. The DDoS-adjacent, hyper-aggressive bots that try random URLs and scan for exploits tend to be on Azure or use VPNs.
AWS is bad about responding when you report malicious traffic. Azure has been completely unresponsive and didn't react, even for C&C servers.
People are jumping to conclusions a bit fast here. Yes, technically it's possible, but this kind of behavior would be relatively easy to spot, because the app would have to make direct connections to the website it wants to scrape.
Your calculator app, for instance, connecting to CNN.com...
iOS has an App Privacy Report where one can check what connections are made by each app, how often, when the last one happened, etc.
Android by Google doesn't have such a useful feature, of course, but you can run a third-party firewall like pcapdroid, which I highly recommend.
macOS has Little Snitch.
Windows has Fort Firewall.
Not everyone runs these apps, obviously, only the most nerdy like myself, but we're also the kind of people who would report an app using our devices to build what is, in fact, a zombie or bot network.
I'm not saying it's necessarily false but imo it remains a theory until proven otherwise.
AKA "why do Cloudflare and Google make me fill out these CAPTCHAs all day"
I don't know why Play Protect/MS Defender/whatever Apple has for antivirus don't classify apps that embed such malware as such. It's ridiculous that this is allowed to go on when detection is so easy. I don't know a more obvious example of a trojan than an SDK library making a user's device part of a botnet.
Anything incorporating anything like this is malware.
In most cases they are used for conducting real financial crimes, but police investigators are also aware that there is a very low chance that sophisticated fraud is committed directly from a residential IP address.