They don't want simple and easy-to-read code; they want to seem smart.
And it is a bit magic, and then when you need something a bit odd, it suddenly becomes fiddly to get working.
An example is when you need a delayed job server to have the user context of different users depending on who triggered the job.
They're pretty good in 95% of cases when you understand them, but a bit confusing magic when you don't.
I feel this is just a facet of the same confusion that leads to creating beautiful declarative systems, which end up being used purely imperatively because it's the only way to use them to do something useful in the real world; or, the "config file format lifecycle" phenomenon, where config files naturally tend to become ugly, half-assed Turing-complete programming languages.
People design systems too simple and constrained for the job, then notice too late and have to hack around it, and then you get stuff like this.
For the standard web page lifecycle it's fine, but for instances like this it really does become fiddly.
Often it is possible, but an ideological stance the framework team has taken leads to it being poorly documented.
The asp.net core team have some weird hills they die on, and some incredibly poor designs that stem from an over-adherence to trendy patterns. It often feels like they don't understand why those patterns exist.
This results in them hardly documenting how to use the DI outside of their 'ideal' flow.
They also try and push devs to use DI for injecting config, which no other language does and which is just unnecessarily complicated. It's ended up with a system no one really understands, while the old System.Configuration, clunky as it was, at least automatically rebooted the app when you edited the config - which is the 95% use case most devs would want.
https://news.ycombinator.com/item?id=44087215
TL;DR: the goal of enterprise frameworks isn't to make Perfect Software Framework or to make code beautiful, devoid of bloat, or even easy. Their goal is to make programming consistent and predictable, to make programmers exchangeable. It's to allow an average developer to churn around working results at a predictable pace, as long as the project is just standard stuff, and they don't bring their own opinions into it. Large businesses want things this way, because that's how they think about everything (see also: Seeing Like a State).
Of course, this doesn't mean the framework authors succeed at that goal either :). Some decisions are plain stupid. But less than one would think.
And "dynamic scope" is also a lofty-sounding term, on par with "dependency injection".
He's a tremendous source of knowledge in that regard.
https://blog.ploeh.dk/2017/01/27/from-dependency-injection-t...
My philosophy of programming is "there's no right way to do it, do what works for you and makes you happy and if someone tells you you're doing it wrong, pay no attention - they're a bully and a fool".
Or like a pilot doesn't get a job because they flew a slightly older Airbus model and need to do some sim time.
Writing tests for nearly all my code, in particular, is these days the only way I roll - and as for TDD (i.e. write the test and let it fail first, then write the actual code and make the test pass), I do it quite often, and I guarantee you that - contrary to your opinion - it makes coding a whole new kind of fun and creative. Dependency injection I still consider myself less of a ninja at, but I've done it (and seen it done) enough times now that I get it and I see the value in it.
I think it's a bit stupid for an employer to say "we'd never have hired you if we knew you had no experience in X" (sure, this doesn't apply to all skills, but I'd say it applies to quite a few). If you're worth hiring, then you'll pick up X within a few months on the job. I'm grateful to several past employers of mine, for showing me the ropes of TDD and DI (among many other things).
Anyway, I'm not saying that the above things are "the (only) right way to do it", and please don't take my above ramblings as making a judgement on your coding prowess. I agree, do what works for you. I'm just saying that there's always more to learn, and that you should always strive to be open-minded to new skills and new approaches.
It's too complicated of a term for what it is because we generally don't say we inject arguments into a function when we call a function.
But maybe you mean patterns building on that, e.g. repository/adapter patterns.
The value of tests doesn't generally come from when you first write them. It comes from when you're working on a codebase written by someone else (who has long ago quit, or been fired).
It helps me understand and be able to refactor their code. It gives me the confidence to routinely ship something to production and know that it won't break.
I'm going to guess that you've most likely used dependency injection without even thinking about it. It's one of those things you naturally do because it makes sense, even if you don't know it has an actual name, frameworks, and all that other stuff that often only makes it more confusing.
I would say you are a bad programmer for implying that DI is useless though.
I agree that it is a cancer. Monocultures are rarely a good idea. And I strongly prefer explicit dependencies and/or compile time magic over runtime magic. But it is "convenient" and very much en vogue.
DI seems like some sort of job security by obscurity.
I remember trying to effectively reverse-engineer a codebase (code available but nobody knew how it worked) with a lot of DI and it was fairly painful.
Maybe it was possible back then and I just didn't know how ¯\_(ツ)_/¯
In a Spring application there are a lot of (effective) singletons, so the question of "which implementation of Foo does this variable hold" also becomes less of an issue.
In any case, we use Spring on a daily basis, and what you describe is not a real issue for us.
Also, I think it's important to differentiate between dependency injection and programming against interfaces.
Interfaces are good, and there was a while where infant DI and mocking frameworks didn't work without them, so that folks created an interface for every class and only ever used the interface in the dependent classes. But the need for interfaces has been heavily misunderstood and overstated. Most dependencies can just be classes, and that means you can in fact click right into the implementation, not because the IDE understands DI, but because it understands the language (Java).
Don't hate DI for the gotten-out-of-control "programming against interfaces".
It's technically a flaw of using generic interfaces, rather than DI. But the latter basically always implies the former.
If there are multiple implementations it gives a list to navigate to. If there’s 1 it goes straight to it. Don’t know about IntelliJ but rider and vs do this. And if the solution is indexed this is fast.
Edit: upon rereading I realize your point was about reading code, not writing it, so I guess that could be a different use case...
There's nothing wrong with using an IDE most of the time, but building dependence on one such that you can't do anything without it is absolute folly.
The issues are still there. You can't just "go to definition" of the class being injected into yours, even if there is only one. You get the Interface you expect (because hey you have to depend on Interfaces because of something something unit-testing), and then see what implements that interface. And no, it will not just point to your single implementation, it'll find the test implementation too.
But where that "thing" gets instantiated is still a mystery and depends on config-file configured life-cycles, the bootstrapping of your application, whether the dependency gets loaded from a DLL, etc. It's black-box elephants all the way to the start of your application. And all that you see at the start is something vague like: var myApp = MyDIFramework.getInstance(MyAppClass); Your constructors, and where they get called from, are buried in a never-ending abyss of thick and unreadable framework code that is miles away from your actual app. Sacrificed at the altar of job-creation, unit-testing and evangelists' talk/resume padding.
Yes, I can? At least Rider can jump to the only implementation, no questions asked.
> And no, it will not just point to your single implementation, it'll find the test implementation too.
It will, but is it a problem to click the correct class from a list of two options?
var calculator = Substitute.For<ICalculator>();

This situation isn't unique when using DI (although admittedly DI does make using interfaces more common). However, that's what the "go to implementation" menu option is for.
For a console app, you're right that a DI framework adds a lot of complexity. But for a web app, you've already got all that framework code managing controller construction. If you've got the black box anyways, might as well embrace it.
Yes, the comments about "$25 name for a 5c concept" ring true when you're looking at a toy example with constructor(logger) { .. }.
Then you look at an enterprise app with 10 years of history, with tests requiring 30 mocks, using a custom DI framework that only 2 people understand, with multiple versions of the same service, and it feels like you've entered another world where it's straight up impossible to debug code.
> then you're probably leaving some workflow efficiency on the table
Typical HN to assume what the best workflow efficiency is, and that it mostly hinges on a specific technology usage :)
Imagine I'd claim that since you're not using nrepl and a repl connected to your editor, you're leaving some workflow efficiency on the table, even though I know nothing about your environment, context or even what language you program in usually.
Usually on the third time someone recommends {X} I would have looked into it and formed my own conclusions with first hand experience.
Why should we do it like this, why is the D in SOLID so important when it causes pain?
This is lack of experience showing.
DI is absolutely not needed for small projects, but once you start building out larger projects the reason quickly becomes apparent.
Containers...
- Create proxies wrapping the objects; if you don't centralise construction management, this becomes difficult.
- Handle cross-cutting concerns that would otherwise be missed or need to be wired everywhere manually.
- Manage object life cycles, not just construction.
It also ensures you code to the interface. Concrete classes are bad, just watch what happens when a team mate decides they want to change your implementation to suit their own use cases, rather than a new implementation of the interface. Multiply that by 10x when in a stack.
Once you realise the DI pain is for managing this (and not just allowing you to swap implementations, as is often the poster boy), automating areas prone to manual bugs, and enforcing good practices, the reasons for using it should hopefully be obvious. :)
Most dependency injection that I see in the wild completely misses this distinction. Inversion can promote good engineering practices, injection can be used to help with the inversion, but you don’t need to use it.
Liskov substitution for example is an overkill way of saying don't create an implementation that throws an UnsupportedOperationException, instead break the interfaces up (Interface Segregation "I" in SOLID) and use the interface you need.
Quoting the theory to junior devs instead just makes their eyes roll :D
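To make the interface-segregation point concrete, a minimal Java sketch (all names invented for illustration):

// Fat interface: a read-only implementation is forced to "support" write().
interface DataStore {
    String read(String key);
    void write(String key, String value);
}

class SnapshotStore implements DataStore {
    public String read(String key) { return "value"; }
    public void write(String key, String value) {
        throw new UnsupportedOperationException("read-only");
    }
}

// Segregated interfaces: callers depend only on what they actually use.
interface DataReader {
    String read(String key);
}

interface DataWriter {
    void write(String key, String value);
}

class ReadOnlySnapshotStore implements DataReader {
    public String read(String key) { return "value"; }
}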
ISP isn’t just about avoiding UnsupportedOperationException either; it’s about reducing dependencies.
It's also actively unhelpful for large projects which have relatively more simple logic but complex interfaces with other services (usually databases).
DI multiplies the amount of code you need - a high cost for which there must be a benefit. It only pays off in proportion to the ratio of complexity of domain logic to integration logic.
Once you have enough experience on a variety of different projects you should hopefully start to pick up on the trade-offs inherent in using it, to see when it is a good idea and when it has a net negative cost.
Almost nothing written using Go uses an IoC container (which is what I assume you're meaning by DI here). It's hard to argue that "larger projects" cannot or indeed are not built using Go, so your argument is simply invalid.
If you are on node/ts look at effect-ts.
That's the whole point. Dependency inversion allows you to write parts of the code in isolation, without worrying about all the dependencies of each component you create and what creates what where.
If your code is small enough that you can keep all the dependencies in your head at the same time and it doesn't slow you down much to pass them all around all the time - DI isn't worth it.
If it becomes an issue - DI starts to shine. There are other solutions as well, obviously (mostly in the form of Object-Orientified global variables - for example you keep everything in GameWorld object and pass it everywhere).
You are confusing DI principles and using a "DI framework". Re-read the article.
This can be done manually, but becomes a chore super fast - and will be a very annoying thing to maintain as soon as you change the constructor of something widely used in your project to accept a new parameter.
Frameworks typically just streamline this process, and offer some flexibility at times - for example, when you happen to have different implementations of the same thing. I find it funny that people rally against those frameworks so often.
To make things more concrete, let's say you have a method that gets the current date, and has some logic there (for example, it checks if today is EOM to do something). In Java, you could do `Instant.now()` to do this.
This will be a pain in the ass to test, you might need to test, for example a date when there's a DST change, or February 28th in a leap year, etc. With DI you can instead inject an `InstantSource` to your code, and on testing you can just mock the dependency to have a predictable date on each test.
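A rough Java sketch of that idea (assuming Java 17+, where InstantSource exists; the end-of-month check is just a placeholder):

import java.time.InstantSource;
import java.time.LocalDate;
import java.time.ZoneOffset;

class MonthEndJob {
    private final InstantSource clock;

    // The time source is injected instead of calling Instant.now() inline.
    MonthEndJob(InstantSource clock) {
        this.clock = clock;
    }

    boolean isEndOfMonth() {
        LocalDate today = LocalDate.ofInstant(clock.instant(), ZoneOffset.UTC);
        return today.getDayOfMonth() == today.lengthOfMonth();
    }
}

// Production wiring: new MonthEndJob(InstantSource.system())
// Test wiring:       new MonthEndJob(InstantSource.fixed(java.time.Instant.parse("2024-02-29T12:00:00Z")))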
Dependency injection is the inversion of control pattern at the heart, which is something like oxygen to a Java dev.
In other languages, these issues are solved differently. From my perspective as someone whoes day job has been roughly 60+% Java for over 10 yrs now... I think I agree with the central message of the article. Unless you're currently in Java world, you're probably better off without it.
These patterns work and will on paper reduce complexity - but it comes at the cost of massively increased mental overhead if you actually need to address a bug that touches more than a minuscule amount of code.
/Edit: and I'd like to mention that the article actually only dislikes the frameworks, not the pattern itself
I know in .net, it was only really the switch to .net core where it became an integral part of the frameworks. In MVC 5 you had to add a third party DI container.
So how can it have been designed for it from the ground up?
In fact, if you're saying 10 years, that's roughly when DI became popular.
You're wrong about other languages not needing it. Yes, statically typed languages need it for unit testing, but you don't seem to realize that from a practical perspective DI solves a lot of the problems around request lifetimes too. And from an architectural context it solves a lot of the problem of how to stop bad developers overly coupling their services.
Before DI people often used static methods, so you'd have a real mess of heavily interdependent services. It can still happen now but it's nowhere near as bad as the mess of programming in the 2000s.
DI helped reduce coupling and spaghetti code.
DI also forces you to 'declare' your dependencies, so it's easy to see when a class has got out of control.
Edit: I could keep on adding, but one final thing. DI is actually quite cumbersome to use in Java and .Net, and easier in Go, because Go has implicit interfaces and the older languages don't - implicit interfaces would really help reduce boilerplate DI code.
A lot of interfaces in Java/C# only exist to allow DI to work, and are otherwise a pointless waste of time/code.
It’s not correct that Java was designed for it, unless you want to call class loading dependency injection. It’s merely that Java’s reflection mechanism happened to enable DI frameworks. The earlier Java concept was service locaters (also discussed in the article linked above).
> will be a very annoying thing to maintain as soon as you change the constructor of something widely used in your project to accept a new parameter.
That for example is just not true. You add a new parameter to inject and it breaks the injection points? Yeah that’s expected, and suitable. I want to know where my changes have any impact, that’s the point of typing things.
A lot of things deemed "more maintainable" really aren’t. Never has a DI framework made anything simpler.
Perhaps you never worked in a sufficiently large codebase?
It is very annoying when you need to add a dependency and suddenly you have to touch 50+ injection points because that thing is widely used. Been there, done that, and by God I wished I had Dagger or Spring or anything really to lend me a hand.
DI frameworks are a tool like any other. When properly used in the correct context they can be helpful.
You don't have to update the injection points, because the injection points don't know the concrete details of what's being injected. That's literally the whole point of dependency injection.
Edited to add: Say you have a class A, and this is a dependency of classes B, C, etc. Using dependency injection, classes B and C are passed instances of A, they don't construct it themselves. So if you add a dependency to A, you have to change the place that constructs A, of course, but you don't have to change B and C, because they have nothing to do with the construction of A.
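A tiny Java sketch of that, with hypothetical classes - only the composition root changes when A grows a new dependency:

class Database { }

class A {
    private final Database db;
    A(Database db) { this.db = db; }   // later gains e.g. a Cache parameter
}

class B {
    private final A a;
    B(A a) { this.a = a; }             // B never constructs A, so it doesn't change
}

class C {
    private final A a;
    C(A a) { this.a = a; }
}

class CompositionRoot {
    static B wire() {
        Database db = new Database();
        A a = new A(db);               // the only line that changes when A's constructor changes
        return new B(a);
    }
}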
The problem when you don't do it this way is when you depend on order of initialization in a way you are not aware of until it breaks, and it breaks in all kinds of interesting ways.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Please don't fulminate. Please don't sneer, including at the rest of the community.
/sarcasm (in case anyone doubted it).
It's not just a fancy name. I'd argue it's a confusing name. The "$25 name for a 5c concept" quote is brilliant. The name makes it sound like it's some super complicated thing that is difficult to learn, which makes it harder to understand. I would say "dynamic programming" suffers the same problem. Maybe "monads".
How about we rename it? "Generic dependencies" or "Non hard-coded dependencies" or even "dependency parameters"?
I like “dependency parameters”. Dependencies in that sense are usually what is called service objects, so “service parameters” might be even clearer.
[0] https://www.martinfowler.com/articles/injection.html#FormsOf...
And yes, even though some languages/frameworks allow deps to be "passed in" via mechanisms that aren't technically parameters (like member variables that are just somehow magically initialised due to annotations), doing that only obfuscates control flow and should be avoided IMHO.
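Roughly the contrast in Java, assuming Spring's @Autowired is on the classpath (class names are made up):

import org.springframework.beans.factory.annotation.Autowired;

interface ReportRepository { }

// Annotation-driven field injection: the dependency appears "by magic" some time
// after construction, and nothing at the construction site tells you it exists.
class FieldInjectedReportService {
    @Autowired
    private ReportRepository repository;
}

// Constructor parameter: the dependency is explicit, the object cannot be
// created without it, and plain "find usages" follows it.
class ConstructorInjectedReportService {
    private final ReportRepository repository;

    ConstructorInjectedReportService(ReportRepository repository) {
        this.repository = repository;
    }
}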
Why not call it "dependency resolution"? The only problem frameworks solve is to connect a provider of thing X with all users of thing X, possibly repeating this process until all dependencies have been resolved. It also makes it more clear that this process is not about initialization and lifecycling, it is only about connecting interfaces to implementations and instantiating them.
Edit: The only DI framework I have used and actually kind of like is Dagger 2, which resolves all dependencies at compile time. It also allows a style of use where implementation code does not have to know about the framework at all - all dependency resolution modules can be written separately.
All other runtime DI frameworks I have used I have hated with a passion because they add so much cognitive overhead by imposing complex lifecycle constraints. Your objects are not initialized when the constructor has finished running because you have to somehow wait for the DI framework to diddle the bits, and good luck debugging when this doesn't work as expected.
For instance, some code which prints stuff, but doesn't take the output stream as a parameter, instead hard-coding to a standard output stream.
That leaves fewer options for testing also, as a secondary problem.
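For instance, a trivial Java sketch of the difference:

import java.io.PrintStream;

class Greeter {
    // Hard-coded: always writes to System.out, so a test has to capture stdout.
    void greetHardCoded(String name) {
        System.out.println("Hello, " + name);
    }

    // Parameterized: the output stream is part of the composition, so a test can
    // pass a PrintStream backed by a ByteArrayOutputStream and assert on it.
    void greet(String name, PrintStream out) {
        out.println("Hello, " + name);
    }
}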
Since what we want to parametrize are objects held by composition, maybe "composition parametrization".
"To promote better flexibility and code reuse and to make testing easier, parametrize object composition."
If you lift that assumption, the problem kind of goes away. James Shore calls this Parameterless Instantiation: https://www.jamesshore.com/v2/projects/nullables/testing-wit...
It means that in most cases, you just call a static factory method like create() rather than the constructor. This method will default to using Instant.now() but also gives you a way to provide your own now() for tests (or other non-standard situations).
At the top of the call stack, you call App.create() and boom - you have a whole tree of dependencies.
class Foo {
    private nowFn: () => Date;

    constructor(nowFn: () => Date) {
        this.nowFn = nowFn;
    }

    doSomething() {
        console.log(this.nowFn());
    }

    static create(opts: { nowFn?: () => Date } = {}) {
        return new Foo(opts.nowFn ?? (() => new Date()));
    }
}

Dependency injection boils down to the question of whether or not you can dynamically change a dependency at runtime.
James explains this a lot better than I can: https://www.jamesshore.com/v2/blog/2023/the-problem-with-dep...
Mark Seemann calls it the Constrained Construction anti-pattern: https://blog.ploeh.dk/2011/04/27/Providerisnotapattern/#4c7b...
You can use the type system to your advantage. Cut a new type and inject a ReadOnlyDataSource or a SecondDatabaseDataSource or whatnot. Figure out what should only have one instance in your app, wrap a type around it, put it in the singleton scope, and inject that.
This has the advantage that you don't need an extra framework/dependency to handle DI, and it means that dependencies are usually much easier to trace (because you've literally got all the code in your project, no metaprogramming or reflection required). There are limits to this style of DI, but in practice I've not reached those limits yet, and I suspect if you do reach those limits, your DI is just too complicated in the first place.
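A minimal Java sketch of the wrapper-type idea (assuming javax.sql.DataSource; the wrapper names are invented here):

import javax.sql.DataSource;

// Distinct types mean the wiring - hand-written or framework-driven - can never
// confuse the two databases, even though both are "just" a DataSource underneath.
record ReadOnlyDataSource(DataSource dataSource) { }

record SecondDatabaseDataSource(DataSource dataSource) { }

class ReportingDao {
    private final ReadOnlyDataSource readOnly;

    ReportingDao(ReadOnlyDataSource readOnly) {
        this.readOnly = readOnly;
    }
}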
* Your framework gets so complicated that someone rips it out and replaces it with one of the standard frameworks.
* Your framework gets so complicated that it turns into a popular project in its own right.
* Your app dies and your custom framework dies with it.
The third one is the most common.
At some point this stops working, I agree — this isn't necessarily an infinitely scalable solution. At that point, switching to (1) is usually a fairly simple endeavour because you're already using DI, you just need to wire it together differently. But I've been surprised at the number of cases where just going without a framework altogether has been completely sufficient, and has been successful for far longer than I'd have originally expected.
The answer is scope. Singletons exist explicitly for the purpose of violating scoping for the convenience of not having to thread that dependency through constructors.
https://testing.googleblog.com/2008/08/root-cause-of-singlet...
there is no right answer of course. Time should be a global so that all timers/clocks advance in lock step. I hav a complex fake time system that allows my tests to advance minutes at a time without waiting on the wall clock. (If you deal with relativity this may not work - for everyone else I encourage it)
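A bare-bones Java version of such a fake time source might look like this (a sketch, assuming Java 17+ for InstantSource; not the poster's actual system):

import java.time.Duration;
import java.time.Instant;
import java.time.InstantSource;

// A controllable clock: production code asks it for "now", and tests advance it
// explicitly instead of waiting on the wall clock.
class FakeClock implements InstantSource {
    private Instant now;

    FakeClock(Instant start) {
        this.now = start;
    }

    @Override
    public Instant instant() {
        return now;
    }

    void advance(Duration d) {
        now = now.plus(d);
    }
}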
The very purpose of DI is to allow using a different implementation of the same thing in the first place. You shouldn’t need a framework to achieve that. And my personal experience happens to match that.
If you can't monkey-patch the getDate function with a mock in a testing context because your language won't allow it, that's a language smell, not a pattern smell.
Not so fast. Constraints like "no monkeypatching allowed" are part of what make it possible to reason about code at an abstract level, i.e., without having to understand in detail every control path that could possibly have run before. Allowing monkeypatching at the language level means discarding that useful reasoning tool.
I'm not saying that "no monkeypatching allowed" is always ideal, but it is a tradeoff.
(Consider why so many languages have something like a "const" modifier, which purely restricts what you can do with the object in question. The restriction reduces what you can do with it, but increases what you know about it.)
But for unit testing? Go nuts.
(If you have linter rules or static analysis that can detect monkeypatching and runs on every commit (or push to main, or whatever), you're good.)
(Disclaimer: have not used PowerMockito in ages, am not confident it works with the new module system.)
Of course you can do it in Java. But it is widely considered poor practice, for good reason, and is generally avoided.
It is useful for more than testing (although, depending on the kind of tests being made, it might not always be useful for all kind of tests). It also allows you to avoid a program having too many dependencies that you might not need (although this can also cause a problem, it could perhaps be avoided by providing optional dependencies, and macros (or whatever other facility is appropriate in the programming language you are using) to use them), and allows more easily for the caller to specify customized methods for some things (which is useful in many programs, e.g. if you want customized X.509 certificate validation in a program, or customized handling of displaying/requesting/manipulation of text, or use of a virtual file system).
In a C program, you can use a FILE object for I/O. Instead of using fopen or standard I/O, a library could accept a FILE object that you had previously provided, which might or might not be an actual file, so it does not need to deal with file names.
> This will be a pain in the ass to test, you might need to test, for example a date when there's a DST change, or February 28th in a leap year, etc.
I think that better operating system design with capability-based security would help with this and other problems, although having dependency injection can also be helpful for other purposes too.
Capability-based security is useful for many things. Not only it helps with testing, but also helps to work around a problem if a program does not work properly on a leap year, you can tell that specific program that the current date is actually a different date, and it can also be used for security, etc. (With my idea, it also allows a program to execute in a deterministic way, which also helps with testing and other things, including resist fingerprinting.)
Using an IoC container is endemic in the Java ecosystem, but unheard of in the Go ecosystem - and it's not hard to see which of them favours simplicity!
Then you are doing dependency injection, just replacing the benefits of a proper framework by instantiating everything at once at the root of your app.
Whatever floats your boat, I guess. Thankfully I don't need to work on your code.
I think we can both agree that it’s good we don’t need to work with each other though.
That you think that a DI framework obscures anything is just a developer smell.
Having worked with multiple of those, it was always pretty clear what was being instantiated and how.
It is interesting to note that these frameworks are effectively non-existent in Go, Rust, Python - even Typescript, and no one complains you can’t build “real things” in any of those languages.
I myself am on the dislike camp, I have found that mocking modules (like you can with NodeJS testing frameworks) for tests gives most of the benefits with way less development hell. However you do need to be careful with the module boundaries (basically structure them as you would with DI) otherwise you can end up with a very messy testing system.
The value of DI is also directly proportional to the size of the service being tested, DI went into decline as things became more micro-servicy with network-enforced module boundaries. People are just mocking external services in these kind of codebases instead of internal modules, which makes the boundaries easier.
I can see strict DI still being useful in large monolith codebases worked by a lot of hands, if only to force people to structure their modules properly.
In IntelliJ, with the Spring Framework, you can have thorough tooling: You can inspect beans, their dependencies, you even get a visual bean graph, you can write mocks and test dependencies and don't even need interfaces anymore and if a dependency is missing, you will receive an IDE warning before runtime.
I do not understand why people are so excited about a language and its frameworks where the wheel is still actively being reinvented in a worse way.
I've had my fair share of Java and Spring Boot projects and it breaks in all sorts of stupid ways there, even things like the same exact code and runtime environment working in a container that's built locally, but not working when the "same" container is built on a CI server: https://blog.kronis.dev/blog/it-works-on-my-docker
Literally a case where Spring Boot DI just throws a hissy fit that you cannot easily track down, where I had to mess around with the @Lazy annotation (despite the configuration to permit that being explicitly turned on too) in over 100 places to resolve the issue, plus then when you try to inject a list of all classes that implement an interface with @Lazy it doesn't seem like their order is guaranteed either so your DefaultValidator needs to be tacked on to that list manually at the end.
Sorry about the Java/Spring rant.
It very much feels like the proper place for most DI is at compile time (like Dagger does for Java, seems closer to wire) not at runtime, or just keep IoC without a DI framework/library and having your code look a bit more like this:
@Override
public void run(final BackendConfiguration configuration,
                final Environment environment) throws IOException, TimeoutException {
    // Initialize our data stores
    mariaDBManager = new MariaDBManager(configuration, environment);
    redisManager = new RedisManager(configuration);
    rabbitMQManager = new RabbitMQManager(configuration);

    // Initialize our generic services
    keyValueService = new KeyValueService(redisManager);
    sessionService = new SessionService(keyValueService, configuration);
    queueService = new QueueService(rabbitMQManager);

    // Initialize services needed by resources
    accountService = new AccountService(mariaDBManager);
    accountBalanceService = new AccountBalanceService(mariaDBManager);
    auctionService = new AuctionService(mariaDBManager);
    auctionLotService = new AuctionLotService(mariaDBManager);
    auctionLotBidService = new AuctionLotBidService(mariaDBManager);

    // Initialize background processes based on feature configuration
    if (configuration.getApplicationConfiguration().getFeaturesConfiguration().isProcessBids()) {
        bidListener = new BidListener(queueService, auctionLotBidService, auctionLotService, accountBalanceService);
        try {
            bidListener.start();
            logger.info("BidListener started");
        } catch (IOException e) {
            logger.error("Error starting BidListener: {}", e.getMessage(), e);
        }
    }

    // Register resources based on feature configuration
    if (configuration.getApplicationConfiguration().getFeaturesConfiguration().isAccounts()) {
        environment.jersey().register(new AccountResource(accountService, accountBalanceService, sessionService, configuration));
    }
    if (configuration.getApplicationConfiguration().getFeaturesConfiguration().isBids()) {
        environment.jersey().register(new AuctionResource(
            auctionService, auctionLotService, auctionLotBidService, sessionService, queueService));
    }
    ...
}
Just a snippet of code from a Java Dropwizard example project, not all of its contents either, but should show that it's nothing impossibly difficult. Same principles apply to other languages and tech stacks, plus the above is unequivocally easier to put a breakpoint in and debug, vs some dynamic annotation or convention based mess.

Overall, I agree with the article, even across multiple languages.
You don't see many articles written like that because it kinda would be obvious that the author hasn't bothered to understand the approach that he is criticizing.
Yet when it comes to OO concepts people from "superior" platforms like Go or the FP crowd just cannot let go of airing their ignorance.
Just leave OO alone unless you are genuinely interested in the approach.
My favourite little hack for simple framework-less DI in Python these days looks something like this:
import time
from unittest.mock import MagicMock

# The code that we want to call
def do_foo(sleep_func=None):
    _sleep_func = sleep_func if sleep_func is not None else time.sleep
    for _ in range(10):
        _sleep_func(1)

# Calling it in non-test code
# (we want it to actually take 10 seconds to run)
def main():
    do_foo()

# Calling it in test code
# (we want it to take mere milliseconds to run, but nevertheless we
# want to test that it sleeps 10 times!)
def test_do_foo():
    mock_sleep_func = MagicMock()
    do_foo(sleep_func=mock_sleep_func)
    assert mock_sleep_func.call_count == 10

def do_foo(sleep=time.sleep):
    for _ in range(10):
        sleep(1)

However, in Python I prefer to use true DI. I mostly like Injector[0] because it’s lightweight and more like a set of primitives than an actual framework. Very easy to build functionality on top of and reuse - I have one set of modules that can be loaded for an API server, CLI, integration tests, offline workers, etc.
That said, I have a few problems with it - 2 features which I feel are bare minimum required, and one that isn’t there but could be powerful. You can’t provide async dependencies natively, which is not usable in 2025 - and it’s keyed purely on type, so if you want to return different string instances with some additional key you need a wrapper type.
Between these problems and missing features (Pydantic-like eval time validation of wiring graphs) I really want to write my own library.
However, as a testament to the flexibility of Injector, I could implement all 3 of these things as a layer on top without modifying its code directly.
But then some classes which use other classes hard code those classes in their constructor. They then work with those specific hard-coded classes. It's like if someone crazy-glued some of our Lego blocks together.
We recognize this problem and allow the sister objects to be configurable.
Then some opinionated numbnut comes along and says, "hey, we should call this simple correction 'dependency injection'". And somehow, everyone listens.
All of the complexity boils down to the fact that you have to remember to register your services before you can use them. If you forget, the stack trace is pretty hard to debug. Given that you're already deep into FX, it becomes pretty natural to remember this.
That said, I'd say that if you don't care about unit tests or you are good enough about always writing code that already takes things in constructors, you probably don't need this.
Java and Dagger 2 have solved the DI years ago. Fast, compile time safe and easy to use.
Some languages seem to naturally invite people to do the wrong thing. Javascript is a great example that seems to bring out the worst in people. Many of the people wielding it aren't very experienced, and when they routinely initialize random crap in the middle of their business logic, executed asynchronously via some event as a side effect of a butterfly stirring its wings on the other side of the planet, you end up with the flaky, untestable, unholy mess that is the typical Javascript code base. Exaggerating a bit here of course, but I've seen some epically bad code, and much of that was junior Javascript developers being more than a little bit clueless on this front.
Doing DI isn't that hard. Just don't initialize stuff in places that do useful things. No exceptions. Unfortunately, it's hard to fix in a code base that violates that rule. Because you first have to untangle the whole spaghetti ball before you can begin beating some sense into it. The bigger the code base, the more likely it is that it's just easier to just burn it down to the ground and starting from scratch. Do it right and your code might still be actively maintained a decade or more in the future. Do it wrong and your code will probably be unceremoniously deleted by the next person that inherits your mess.
And yes, that is good coding practice. That kind of was my point.
I don't disagree, just that when talking about any given subject having an understanding of how the audience already thinks about that subject is somewhat important.
The bigger better goal here is probably to try and get folks to internally separate DI as a pattern from DI/IOC frameworks.
- I'm willing to rewrite some code if we decide that a core library needs to get swapped out
- I'm always using languages that allow monkey-patching, so I'm not struggling to test my code because, for example, it's hard to substitute a mock implementation for `Date.now()`.
DI makes more sense if you're not in that position. But in that position, DI adds "you need these three files in your brain at the same time to really understand what's going on" complexity that I seek to avoid.
(Also, DI overlaps with generics, and to the extent that you can make things generic and it makes sense to do so, you should).
The real con of Dependency Injection = {Developer Egos + (Boredom | Lax Deadlines) + Lack of senior oversight}, which inevitably yields needless overengineering.
The whole point of DI is that when you can't just write that straight-line implementation it becomes easier, not harder. What if I've got 20 different handlers, each of which need 5-10 dependencies out of 30+, and which run in 4 different environments, using 2 different implementations. Now I've got hundreds of conditionals all of which need to line up perfectly across my codebase (and which also don't get compile time checks for branch coverage).
I work with DI at work and it's pretty much a necessity. I work without it on a medium sized hobby project at home and it's probably one of the things I'd most like to add there.
To be fair, the numbers you throw out sound like a framework becomes valuable, but most places I’ve seen DI frameworks, they could be replaced with manual DI and it would be much simpler.
DI frameworks are complicated, but they're a constant level of complicated, they don't get more complicated as the codebase grows. Not using a DI framework is simple at the beginning, but it grows, possibly exponentially, and at some point crosses the line of constant complexity from a DI framework.
Finding where those lines intersect is just good engineering. Ignoring the fact that they do intersect is not.
How so?
With a DI framework of some kind, those conditionals would likely not exist, instead you'd be able to specify all the options, and the DI framework takes over stitching them together and finding the dependencies between them.
Listen to your code. If it’s hard to write all that, it probably means you have too many dependencies. Making it easier to jam more dependencies in your modules is just going to make things worse. “But I need all those…” Do you really? They rarely ever are all necessary, in my experience. Usually there’s a better way to untangle the dependency tree.
You just dispense with all the frippery that hides the fact that you are depending on global variables if you really want the Guice/Spring Boot experience.
The C++ code was much, much easier to trace by hand. It was easier to test. It started much much faster, speeding iteration. Meanwhile the Java service was a PITA to trace, took 30+ seconds to boot, and couldn't be AOT compiled to address that because of all the reflection.
I've done DI in Java (Guice), Python (pytest), Go (internal), and a little in C++ (internal). The best was Pytest, very simple but obviously quite specific. Guice is a bit of a hassle but actually fine when you get the hang of it, I found it very productive. The internal Go framework I've used is ok but limited by Go, and I don't have enough experience to draw conclusions from the C++ framework I've used.
In practice I noticed I'm ok with direct dependency as long as I can change the implementation with a compile time variable. For the tests, I use an alternative implementation, for development another. I don't swap an implementation for another within the same code. It is an option, but it happens so rarely that it seems absurd optimizing for it.
So, I like dependency injection as a concept, but I avoid it to reduce complexity. The advantage is that you can get by with a lot more "global" code. In Go this is particularly nice since DI is really nasty (struct methods have various limitations)
- the 'autopilot GPS' problem: Colleagues who basically have no idea how things fit together, because DI connects the dots for them. So they end up with either a vague or no mental model of what happens below the surface.
- the same, but related to scope and costs: Costs: Because they don't touch what is 'built behind the scenes', they get no sense of how costly it is ('every time you do use that thing, it instantiates and throws away a million things'). Scope: Often business logic dictates that the construction hierarchy consists of finely tuned parts: You don't just need an instance of 'Foo', you need a Foo instance that originates from a specific / certain request. And if you then use two Bar's together, where Bar 1 is tied to Foo 1 but Bar 2 is tied to Foo 2, you will get strange spurious errors (think, for example, ORMS and database transactions - the two Foos or Bars may relate to different connections, transactions or cursors.)
One antipattern I have seen (which may actually be an argument FOR DI..), is the 'query everything' service, which incorporates 117 other sub-services. And some of the junior developers just love that service, because "then I can query everything, from a single place!" (yes.. but you just connected to 4 databases with 7 connections, and you are only trying to select a single row from one of them. And again, code with the everything-service becomes quite untestable).
The best part of the article is its advice for triggering the broken dependencies at compile-time, I really hate when I have to go through complicated flow #127 to learn that a dependency is broken.
But the way DI is usually implemented is with this bag of global variables which you just reach in and grab the first object of the desired type. I call this the little jack horner pattern. Stick in your thumb and pull out a plum. That, is stupid. You've reinvented global variables, but actually worse. Congratulations.