Imho, the biggest problem with Test Driven Development starts with the name. It should be called Spec Driven Development, because tests are a formal, verifiable implementation of your spec, but they are just that: an implementation. The spec comes first; it is a list of **BUSINESS** requirements that your app needs to implement. How the app does that is up to you. It is for that reason that the code coverage metric is stupid at its core: you care about your app implementing every business requirement, not about how it does it. The spec should never break from code changes, because the code NEVER changes except as a follow-up to a spec change (due to changed business requirements), or as a refactoring, and refactoring by definition is not allowed to break the spec.
@frank-michaeljaeschke4798 · 3 years ago
The hunt for a specific test coverage number is itself nuts. Coverage should only be used as an indicator of which parts of the system still lack tests. As for the naming problem with TDD, that's why Dan North came up with BDD (Behaviour Driven Development): the "tests" are really specifications that describe the wanted behaviour of the system.
@GrantNeufeld · 3 years ago
I think one of the mistakes opponents of TDD make is thinking TDD means starting your project with granular unit tests (“test_add_one_to_two_gives_three”). Effective TDD starts at the feature level (or higher), then from there drills down to the unit level. Acceptance test oriented BDD seems to me to be the current optimal approach to TDD.
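A minimal Python sketch of the idea above, with entirely hypothetical names: the outermost test describes a user-visible business rule at the feature level, and finer-grained tests emerge only as the implementation is driven out underneath it.

```python
# Feature-level test: describes a behaviour, not arithmetic details.
def test_checkout_applies_bulk_discount():
    order = Order()
    for _ in range(10):
        order.add_item(price=10)
    # Business rule: 10 or more items earns a 10% discount.
    assert order.checkout() == 90

class Order:
    DISCOUNT_THRESHOLD = 10
    DISCOUNT = 0.10

    def __init__(self):
        self.prices = []

    def add_item(self, price):
        self.prices.append(price)

    def checkout(self):
        subtotal = sum(self.prices)
        if len(self.prices) >= self.DISCOUNT_THRESHOLD:
            subtotal *= 1 - self.DISCOUNT
        return subtotal

test_checkout_applies_bulk_discount()
```

The test name reads like a requirement ("checkout applies bulk discount"), not like "test_add_one_to_two_gives_three"; the granular cases appear later, only where the implementation demands them.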
@victorceju · 3 years ago
Another consequence of Excessive Setup and the other antipatterns the author mentioned is how long the tests take to execute. Excessive setup, especially in integration tests, and an excessive number of dependencies in a class can make a test take far longer than it should. That's why not only good separation of concerns is important, but also a library of test data to keep the tests simple.
@adipratapsinghaps · 2 years ago
I have been doing development-driven tests all my life. Thanks for setting my psychology straight. Please demonstrate TDD with a practical example. I'm not talking about some homework app, but a real yet simple application, maybe something as simple as an expense-sharing app.
@danielpacak6577 · 2 years ago
This is very helpful for giving names to all these antipatterns so you can refer to them in PR review comments. I see them all the time, and I get tricked by them from time to time as well.
@sir_serhii · 3 years ago
So true! Thanks!
@edekara · 3 years ago
Thanks Dave. Have any of you (commenters & Dave) encountered a situation where you had to write an integration test that exercises the functionality of the whole system, in addition to unit tests and the usual integration tests (like a repository IT)? I have seen these called Black Box tests: since the team wanted to skip writing a system test that would need to mock lots of other services and use lots of computing power, they created this Black Box test instead. It has a few of the problems mentioned in this video, but it still feels like it delivers the value we expect from it: ensuring everything works together as a whole.
@josuekula · 2 years ago
I do not agree with tests passing without assertions. There is no test without asserting expressions. And what about coverage? Correct me if I'm missing something.
@nickpll · 3 years ago
Hi Dave. I'm a big fan of TDD, and I have found dependency injection to be the best way to approach most TDD scenarios. From reading online I can see DI has a lot of bad rep. What are your thoughts on DI?
@aaronmcadam · 2 years ago
DI doesn't have a bad reputation, but standard DI tools do. They tend to be really complex and grow into massive definitions of all your dependencies. That might sound useful, but it really isn't in practice! I don't think we really need tools to implement DI, so keep using the simple approaches Dave and others demonstrate!
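A sketch of the "simple approach" to DI mentioned above, with hypothetical names: plain constructor injection, no container or framework, which is exactly what makes the collaborator trivial to fake in a test.

```python
class SmtpMailer:
    def send(self, to, body):
        print(f"sending mail to {to}")  # real I/O would happen here

class Notifier:
    def __init__(self, mailer):
        # The dependency is passed in rather than constructed inside,
        # so tests can substitute a fake without any DI tooling.
        self.mailer = mailer

    def notify(self, user):
        self.mailer.send(user, "hello")

class FakeMailer:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

# In production: Notifier(SmtpMailer()). In a test:
fake = FakeMailer()
Notifier(fake).notify("alice@example.com")
print(fake.sent)
```

The whole "framework" here is one constructor parameter; the massive dependency definitions the comment warns about only appear when a container tries to wire this up for you.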
@petrnechvatal4251 · 3 years ago
The title is misleading: it implies it will show what can happen when TDD is done wrong, but then each antipattern ends with "this really doesn't happen when using TDD". So it really doesn't have anything to do with TDD; it just shows some common problems in tests.
@ShikaIE · 3 years ago
Agreed. I was expecting common pitfalls people fall into that make TDD bad, but that's not what this is! It's obviously misleading when the cause of most of the antipatterns is "not writing the test first", which means the issue is not applying TDD in the first place.
@myusernameislongerth · 3 years ago
Good video. Whoever designed this subscribe ding-ding bell should be punished. It's annoying on so many channels :(
@MehranHalimiAsl · 3 years ago
Loved the video, but in my opinion these are all true for any form of automated test, whether it is TDD or not!
@wayneschroder6643 · 3 years ago
I get the impression he meant that these are anti-patterns of unit tests in general, and TDD is meant to address them.
@-taz- · 3 years ago
Automated end-to-end system tests? Unit tests are a very specific case, more accurately called specs (specifications) than tests, because by necessity they must be 100% reliable. System tests cannot be reliable, but they can sure be automated.
@josuekula · 2 years ago
Again, if relying too much on coverage creates a TDD "antipattern", meaning coverage is not synonymous with correct TDD, why shouldn't tests be highly coupled to the code? This is not a contradiction; it's an inversion of supporting arguments. I believe tests should be tightly tied to the code. If not, we are testing something else, not the code.
@cypher9000 · 3 years ago
I've always hated TDD, personally, because the way I write code is by iterating a lot until I reach the final design/layout/structure. With TDD I'd be forced to come up with the final solution before I even know what it will be. I write tests after I've completed a piece of code and I don't feel like it's any worse than code written under TDD. Probably even the opposite because I've been testing every single thing as I go, instead of having to worry about constantly fixing my tests.
@cartsp · 3 years ago
You are not forced to come up with the final solution, not at all. You are forced to think about the interface or API of what you want to test. Then you write a failing test for that. Then you make the test pass in the crudest way possible if necessary. Then you refactor until you have the design you want, in small steps, with the test helping you to ensure your refactors are working. Red, green, refactor is the pattern to get into. If you are having to fix your tests all the time it is a sign they are too coupled to the implementation details rather than the interface/API.
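The red/green/refactor loop described above, sketched in Python with hypothetical names: first a failing test against the interface you want, then the crudest passing implementation, then a refactor that the test keeps honest.

```python
import re

# Step 1 (red): write the test against the interface you want.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): the crudest thing that could possibly pass.
def slugify(title):
    return title.lower().replace(" ", "-")

test_slugify()  # green

# Step 3 (refactor): improve the internals in small steps;
# the test pins the behaviour while the implementation changes.
def slugify(title):
    # Collapse any run of non-alphanumerics into a single hyphen.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

test_slugify()  # still green after the refactor
```

Note the test never mentions `replace` or `re.sub`: it is coupled only to the interface, which is why the refactor in step 3 didn't break it.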
@lagerbaer · 3 years ago
There's a 2017 conference talk by Ian Cooper, "TDD, Where Did It All Go Wrong", that addresses your concern. Lovely talk, go check it out. You are completely right that you shouldn't write tests that tackle design aspects that aren't even finalized yet.
@-taz- · 3 years ago
I think you missed some fundamentals here: Write the test first. Test is singular. Next, make it pass. Then write more software. This way, you know the first part continues to work while you work on the next part, and so on. You're not playing whack-a-mole. If you design first, then write the tests after, you have totally missed the point, and value, of TDD. Also, "final" is bullshit. Unless the software is dead, there is no final design. And if it is dead, go do something else? I think "testing" is a misnomer in "unit testing" (and maybe so is "unit"). It has nothing to do with testing. It's verifying the contracts of what you have designed so far.
@MatthewChaplain · 3 years ago
For me, it's the other way around. By specifying what I want up front in terms of "code I wish worked", I ensure that no more of the solution than I actually need is present in each iteration, and so the design emerges over time. Usually I have a vision for how it'll work in the long run, and it ends up being slightly different, but that's as a result of the learning I gained through the process.
@theyawster · 3 years ago
TDD is extremely useful for iterative development. If I have an idea of what the result should look like but I'm not quite sure how to write it, I can create unit tests that reflect that. I also would break things down and implement them piecemeal.
@JamesSmith-cm7sg · 3 years ago
The biggest mistake is thinking that unit tests are micro tests where only a single class needs testing and everything else is mocked.
@BrendonParker · 3 years ago
Maybe someone can help me understand. I feel like following TDD as described here will result in very small unit tests, which is great. But what about how these "units" "integrate"? Is writing integration tests a part of TDD? Even on a small scale, you can unit test the mess out of something but still have breakdowns in how the small components integrate with each other, which I feel lends itself to writing tests that violate most of the best practices laid out here.
@-taz- · 3 years ago
If (or when) there is a breakdown, as you mention, it will be caused either by some *external* problem, such as a site being down or a cable being loose, or by an unexpected change in some outside service. Then your software will have to report the situation and/or prevent the process from continuing in a failed state: handle it gracefully, without becoming an even bigger problem. Your unit tests have to verify that behavior for the non-happy cases, so it might require a new regression test and a change in logic. This is especially important for unit tests, since they can simulate any software or hardware failure, whereas system tests would need an actual failure (e.g., physically unplugging a cable would be necessary, and that's not feasible!). Or, obviously, if the source of the breakdown is some *internal* bug or misunderstanding, that will also require a fix plus new regression test(s).

There are also larger integrated tests, but those should never have to discover a bug *in the software*. If every path of logic is tested, that will be unlikely. Integration or system tests must test that each of the modules involved can still integrate. Check out the test pyramid articles from Martin Fowler, and Mike Cohn's "forgotten layer", too. If two or more modules are in the same source code base, then it is possible to unit test their integration -- to test only the logic, in a purely mathematical way -- with no contact with the outside world and no chance of random errors. It requires more creativity and skill, but since there are more people involved, the hardest part, by far, is politics!
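A sketch of the failure-simulation point above, with hypothetical names: a unit test can inject a dependency that simulates a downed link, exercising the graceful-degradation path that a system test could only reach by physically breaking something.

```python
class NetworkError(Exception):
    pass

def fetch_status(transport):
    # Logic under test: must degrade gracefully instead of
    # propagating a transport failure up the stack.
    try:
        return transport()
    except NetworkError:
        return "degraded"

def flaky_transport():
    # Fake dependency simulating an unplugged cable / downed site.
    raise NetworkError("link down")

# Both the failure path and the happy path are verified
# deterministically, with no real hardware or network involved.
assert fetch_status(flaky_transport) == "degraded"
assert fetch_status(lambda: "ok") == "ok"
print("failure path verified without touching real hardware")
```

Because the transport is just a callable passed in, the "failure" is a one-line fake rather than an unreliable physical event, which is exactly why these non-happy paths belong in unit tests.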
@BrendonParker · 3 years ago
@@-taz- Thanks for the insight. I'm familiar with the test pyramid but will revisit those Fowler and Cohn articles. Your last point is exactly where I get hung up. It would seem to me that style of test would almost certainly lead to violation of some of the points made in this video. Namely "Excessive Setup" and "Mockery".
@-taz- · 3 years ago
@@BrendonParker Astute point. "*Almost* certainly" is right. Forget TDD for a moment, and just consider languages. Imagine we have a large assembly language program in which we must try to identify all the for-loops. It's going to take a while, right? If you're an ASM whiz, you might do it efficiently, but you'd still have to do it. The language has no concept of a for-loop (nor even indentation). Anything can be anywhere, and everyone does it differently. It'll sure help if you wrote the code yourself, and therefore have the mental map already. But still. On the other hand, identify all the for-loops in a large high-level language program. Easy! Ctrl+F "for". It doesn't even matter how large the program is.

Smalltalk, which originated today's unit testing, probably made it relatively easy to inject dependencies, as does its much younger cousin JavaScript (who only dresses up like a C++). But in most of today's languages, the mechanisms required to pull off inversion of control are horrendous. In C++ there is virtual polymorphism with smart pointers (for seasoned veterans only) and/or template metaprogramming (for ascended beings of pure light). Of course there is no agreement, and no standard. My coworkers, who are essentially software cavemen, actually found a way to use the C preprocessor to inject test fakes, much like opening a can of peas by bashing it against a jagged rock for a long time.

Even those who don't understand TDD yet are well aware of their APIs: function names, parameters, and return values. What value TDD provides above that is, despite what most people say, testing our implementations. It's just that we want to test the pure business logic without any dependencies (timers, I/O, RNG, networks, threads, IPC, etc.). But most languages just don't have a clean way to denote dependencies or DI. All that mockery is, like the assembly-language for-loop, copious amounts of noisy boilerplate.
The strategy is sound, if somewhat fuzzy to most, but the tactic belongs on Dirty Jobs with Mike Rowe. According to Sun Tzu, an army can succeed with strategy alone... but just not against an army with better tactics. Most developers in most languages need better tactics. Oh, and the standard libraries, and libraries in general, are about as considerate to dependency inversion and unit testing as the other competitors in any online deathmatch.

I have created a useful technique in C++ to augment doctest. If I call a dependency in my module under test, such as the Unix function `write(buffer, size)`, I spell it as `seamed(buffer, size)`. The production module (when not compiled for unit testing) will call `write` directly, and that extra step will optimize out as a zero-marginal-cost abstraction. The unit tests, then, require me to either declare that I'm using the production implementation directly (which, BTW, I would never do with `write`), or declare a fake (where a lambda supplies a fake implementation), or declare a stub with just a value that's always returned. The call counts and argument lists can be inspected by test assertions, if desired. My test double framework will never call a production-level dependency unless I declare it in the test. Everything is clear because there's practically no boilerplate. No noise. Every character is doing the minimal thing that needs to be done. And if you know how doctest (or, similarly, Catch2) works, it has a superior structure which obviates the need for mocking in general. My production declarations, fakes, and/or stubs fit right into that structure, too, only working in the subtest scopes where they are necessary to the test outcome, and notifying me when anything is missing, like it's on tracks. As to your question, and your point, that's an option. If I fake/stub every seamed dependency that is called by the module I'm testing, it's a higher-value unit test.
More modules can be pulled in, assuming there is integrated business logic I want to integration test. But I would mark those as lower-value integration tests, or put them someplace else entirely, separate from the unit tests. All unit tests must always execute before any integration test. That's where the pyramid comes in. Integration tests would tend to issue false negatives. But using 100% pure functional integration tests, which can compile and execute so rapidly (thanks again, doctest!), is better than having to system test with actual hardware and even the tiniest amount of unreliability. Plus, it's easy to simulate (fake/stub) failures, which can't really be done with actual system tests.
@davidstocking5408 · 3 years ago
I don't disagree that these are problems when you do TDD incorrectly. However, I feel like these are bottom-of-the-barrel antipatterns; most developers would have been able to call these tests bad even before watching this video. I was hoping the conversation would include the Fragile Test Problem, since it seems to be the largest valid criticism of what most people consider TDD (one test per class). I find one test per class to be a little too fine-grained, and it tends to cement the architecture of a program in place indefinitely. Hoping there is more information on this topic in the future.
@urbaniv · 3 years ago
Check out his channel. He talks in depth about strategies to avoid that (his four-layer architecture, which abstracts the behaviour from the implementation so that the tests stay stable even if the implementation details change).
@lmb_codes · 3 years ago
(1) design the API/abstraction (2) create tests from a client POV (3) implement!
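Those three steps, sketched in Python with hypothetical names: the test in step 2 is written purely against the public API from a client's point of view, before the final implementation exists.

```python
# (1) The API/abstraction we designed: a Cart you add items to,
#     then ask for the total.

# (2) A test written exactly as a client would use that API:
def test_cart_totals_items():
    cart = Cart()
    cart.add("apple", 2)   # (name, price)
    cart.add("bread", 3)
    assert cart.total() == 5

# (3) The implementation, driven by the client-POV test:
class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

test_cart_totals_items()
```

Because the test only touches `Cart`, `add`, and `total`, the internal list representation can change freely later without breaking it, which is the point of testing from the client's POV.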
@-taz- · 3 years ago
How can one "design an abstraction"? An abstraction is derived naturally from concrete instances -- which are the use cases / tests / specs. Your #2 would need to be first.
@steveroger4570 · 3 years ago
Some companies, teams, and seniors believe TDD will slow development, to the point that they will stop anyone who dares to try to test their code. There is nothing you can do to help; it's the industry standard to write spaghetti code and spread the mess, to take revenge on management at each company they move through.
@godblessCL · 3 years ago
I don't like TDD because it uses the premise that you cannot design first, then code, then test... and I can.
@robertluong3024 · 3 years ago
I hate this video. WRITE TESTS FIRST AND IT WILL SOLVE EVERYTHING. I'm having trouble getting through this video. Sounds like a broken record and wastes my time.
@JamesSmith-cm7sg · 3 years ago
"Solve everything" is what you've assumed
@75yado · 2 years ago
I think you know where the problem is... in your head.
@rafalborowiak7235 · 2 years ago
You are giving useless advice like "design for testability" and "write tests before code". Yeah, we have all heard that, but repeating these phrases isn't helpful; it doesn't tell anyone HOW to do it.
@mx-hx · 2 years ago
The advice isn't useless if you figure out the practical details on your own. If you can't, you need a book for beginners. Don't expect a free video for a community of professionals to teach you the basics of your job, and then get mad when it doesn't.