"Well, it's impolite, isn't it?"
The saddest one I saw was a team trying to do functional programming (with Spring). The tech lead was a bit flummoxed when I asked why mocks are not used in functional languages and continued to think that 'mocking functions' is the correct way to do TDD.
A whole bunch of work spent for no benefit or negative benefit is pretty common.
Real objects where practical, fakes otherwise, and mocks only in exceptional circumstances.
Works great in interconnected monorepos where you can provide high fidelity fakes of your services for other integrating teams to use in their tests. Often our service fakes are literally just the real service wrapped with an injected fake DB/store.
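A minimal sketch of that pattern — the real service logic with a fake store injected underneath. `UserService` and `InMemoryStore` are invented names for illustration, not from any particular codebase:

```python
class InMemoryStore:
    """Fake of the DB layer: same interface as the real store, dict-backed, no I/O."""
    def __init__(self):
        self._rows = {}

    def put(self, key, value):
        self._rows[key] = value

    def get(self, key):
        return self._rows.get(key)


class UserService:
    """The real service logic, unchanged; only the storage backend is injected."""
    def __init__(self, store):
        self._store = store

    def register(self, user_id, name):
        if self._store.get(user_id) is not None:
            raise ValueError("duplicate user")
        self._store.put(user_id, {"name": name})

    def lookup(self, user_id):
        return self._store.get(user_id)


# Integrating teams exercise the real service code against fake storage:
service = UserService(InMemoryStore())
service.register("u1", "Ada")
assert service.lookup("u1") == {"name": "Ada"}
```

The fidelity comes from reusing the real service code; only the I/O boundary is swapped out.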
It sounds like it's more about wrapping awkward APIs instead of calling them directly. Yes, this is good advice! But it only tangentially has anything to do with mocking.
If you care about being alerted when your dependencies break, writing only the kind of tests described in the article is risky. You’ve removed those dependencies from your test suite. If a minor library update changes `.json()` to `.parse(format="json")`, and you assumed they followed semver but they didn’t: you’ll find out after deployment.
Ah, but you use static typing? Great! That’ll catch some API changes. But if you discover an API changed without warning (because you thought nobody would ever do that) you’re on your own again. I suggest using a nice HTTP recording/replay library for your tests so you can adapt easily (without making live HTTP calls in your tests, which would be way too flaky, even if feasible).
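A hand-rolled sketch of the record/replay idea; in practice a library like vcrpy handles this, persisting recorded responses ("cassettes") to disk. `fetch_user`, the transport interface, and the URLs here are all hypothetical:

```python
class ReplayTransport:
    """Replays previously recorded responses instead of hitting the network."""
    def __init__(self, recordings):
        # In a real setup these would be loaded from a cassette file
        # that was recorded once against the live API.
        self._recordings = recordings

    def get(self, url):
        try:
            return self._recordings[url]
        except KeyError:
            raise RuntimeError(f"no recording for {url}; re-record the cassette")


def fetch_user(transport, user_id):
    # The code under test talks to an injected transport,
    # not to an HTTP library directly.
    body = transport.get(f"https://api.example.com/users/{user_id}")
    return body["name"]


# In the test suite: no live HTTP, no flakiness, but real response shapes.
cassette = {"https://api.example.com/users/42": {"name": "Grace"}}
assert fetch_user(ReplayTransport(cassette), 42) == "Grace"
```

When the upstream API changes shape, re-recording the cassette makes the test fail loudly instead of the mock silently drifting out of date.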
I stopped worrying long ago about what is or isn’t “real” unit testing. I test as much of the software stack as I can. If a test covers too many abstraction layers at once, I split it into lower- and higher-level cases. These days, I prefer fewer “poorly” factored tests that cover many real layers of the code over countless razor-thin unit tests that only check whether a loop was implemented correctly while risking that the whole system doesn’t work together. Because by the time you get to write your system/integration/whatever tests, you’re already exhausted from writing and refactoring all those near-pointless micro-tests.
You make it sound as if the article argues for test isolation, which it emphatically doesn't. It in fact even links out to the Mock Hell talk.
Every mock makes the test suite less meaningful, and the question the article is trying to answer is how to minimize the damage mocks do to your software if you actually need them.
    from unittest.mock import Mock

    def test_empty_drc():
        drc = Mock(
            spec_set=DockerRegistryClient,
            get_repos=lambda: [],
        )
        assert {} == get_repos_w_tags_drc(drc)
Maybe it's just a poor example to make the point. I personally think it's the wrong point to make. I would argue: don't mock anything _at all_ – unless you absolutely have to. And if you have to mock, by all means mock code you don't own, as far _down_ the stack as possible. And only mock your own code if it significantly reduces the amount of test code you have to write and maintain.

I would not write the test from the article in the way presented. I would capture the actual HTTP responses and replay those in my tests. It is a completely different approach.
The question of "when to mock" is very interesting and dear to my heart, but it's not the question this article is trying to answer.
It makes so much more sense this way:
- users don’t have to duplicate the effort of writing more or less the same mock/stub of the same library
- the mock is forced to stay up to date, because the library’s maintainers probably pay more attention to their release notes than you do
- no one knows the best choice of mocking approach/DSL better than the library’s authors
- the adoption of inversion of control / dependency injection and comprehensive testing could be so much wider if writing a test didn’t first require solving the extra problem of how to mock something with a complicated API – and deciding whether it’s worth solving in the first place
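A sketch of what a library-shipped test double might look like — the authors publish a fake alongside the real client, so every consumer tests against the same maintained double. `QueueClient` and `FakeQueueClient` are invented names for illustration:

```python
class QueueClient:
    """The real client: would speak the wire protocol (omitted here)."""
    def send(self, queue, message):
        raise NotImplementedError("talks to the network")


class FakeQueueClient(QueueClient):
    """Shipped by the library authors and kept in sync with each release."""
    def __init__(self):
        self.sent = []  # test-inspectable record of every call

    def send(self, queue, message):
        self.sent.append((queue, message))


# Consumer test code: inject the maintained fake instead of
# hand-rolling yet another mock of the same API.
client = FakeQueueClient()
client.send("billing", {"invoice": 7})
assert client.sent == [("billing", {"invoice": 7})]
```

Because the fake subclasses the real client, an API change in the library breaks the fake in the same release, which is exactly the point of the bullet list above.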