DON'T CHASE TEST COVERAGE!

  24,339 views

Continuous Delivery

1 day ago

Comments: 104
@markovichamp
@markovichamp 1 year ago
Bill Venners (creator of the ScalaTest framework and author of Inside the Java Virtual Machine) said he chose the term Spec over Test to emphasise the idea of "specifying the behaviour of the code rather than testing it." He also noted that the term Test can be misleading because it implies that the goal of testing is to find defects, whereas the goal of specifying behaviour is to ensure that the code meets the requirements and behaves as expected.
@ContinuousDelivery
@ContinuousDelivery 1 year ago
Yes, certainly, but this idea of test-as-specification pre-dates ScalaTest and Scala 😉 BDD says the same things, and was created before 2007 (when ScalaTest appeared). Not sure if Kent Beck uses the word spec, but he certainly describes the importance of focusing on "behaviour" rather than implementation in his TDD book.
@tongobong1
@tongobong1 1 year ago
@@ContinuousDelivery I wonder what you propose for teams working on a large code base that don't do TDD and have less than 10% coverage. Should they write more unit tests, or is it better to continue without unit tests?
@ContinuousDelivery
@ContinuousDelivery 1 year ago
@@tongobong1 I have a video on that topic here: kzbin.info/www/bejne/lZjWZICgf6d4nNU Retro-fitting fine-grained TDD is risky and expensive, so you need to do it tactically to support active areas of work, while stabilising more broadly with other types of test.
@tribble1
@tribble1 1 year ago
Based programmers gave management exactly what they paid for.
@troelsirgens-mller9922
@troelsirgens-mller9922 1 year ago
Test coverage is great for proving the absence of tests. If I’m onboarded on a project with single-digit test coverage I know what I’m dealing with, and it’s not pretty.
@llothar68
@llothar68 1 year ago
Well, at least you already have the tools in place to get the reports. But I get your point. It also helps to find out if a developer is lazy and just deletes the red tests instead of fixing them 🤪
@Broniath
@Broniath 1 year ago
@@llothar68 If you did this where I work you would probably be ousted from the team or something lol.
@NathanHedglin
@NathanHedglin 1 year ago
FALSE. Best Buy has 100% code coverage... all the tests just make sure no exception is thrown.
@ApprendreSansNecessite
@ApprendreSansNecessite 1 year ago
In smaller projects like libraries, test coverage is good for detecting dead code, typically left behind after a refactor.
@michaelpastore3585
@michaelpastore3585 1 year ago
The worst code base I ever worked on probably had the most tests. They weren't GOOD tests, and they broke left and right because the code was an unmaintainable mess, but by god the test coverage metrics sure looked good.
@vanivari359
@vanivari359 1 year ago
Yeah, I joined a project as a firefighter for a customer who was super obsessed with his 80% coverage, but the code base for multiple applications was basically no longer maintainable. I have been analysing code on a regular basis for almost 20 years, but that code base almost made me cry out of desperation. The first application we looked at had zero internal structure and no consistency; you changed one thing and 10 other things broke across the code base, and when you fixed those you had 30 more compile errors. In the end, almost every class in the system had to be touched during the refactoring because it was all one big ball of spaghetti. And of course almost every single one of hundreds of test cases broke too, due to thousands of lines of copy-pasted mock code. Every single method in that mess was locked down with multiple tests, all mocking all dependencies of the large "service class" containing that method. They even changed some code (e.g. visibility) to reach another x% of coverage. And after fixing all of that mock code, everything was green, but several features didn't work after starting the application, because while the test suite tested every method in isolation, there was no test which actually exercised the functionality in combination. All the other applications looked the same, some worse. One application implemented every single business validation in the frontend; all teams suffered from "flaccid scrum", had tons of bugs, and even basic changes took longer and longer - but all of them had coverage above 80% for frontend and backend. Of course, they also relied heavily on many people (including the annoyed end users) constantly testing that mess manually, because the test suite did nothing.
Meanwhile, the project manager, who had staffed the team with 6 developers with 0-2 years of experience to build that application, had the audacity to complain that his company extended the developer career path - "why on earth should I pay a manager-grade developer, what value could he add!!!". Yeah, no idea. Our industry is really one of the worst - imagine if people built actual houses, planes or cars like this.
@jangohemmes352
@jangohemmes352 1 year ago
@vanivari359 I feel that last bit. It's pretty astounding what a lack of quality this industry gets away with, simply because outsiders have no visibility into code quality. Clients, management etc. just don't know how things are and how they could be. We get away with rampant incompetence.
@PaulFrischknecht
@PaulFrischknecht 1 year ago
"testing is not an afterthought and somebody else's job... treating it like this has been a huge mistake for the software dev. industry" so true...
@jordanpavlic9745
@jordanpavlic9745 1 year ago
Finding code paths that were missed is probably the real value of test coverage. It points you right at undiscovered bugs.
@marlonsubuyu2012
@marlonsubuyu2012 1 year ago
Isn't this about white box testing?
@jordanpavlic9745
@jordanpavlic9745 1 year ago
@marlonsubuyu2012 I've never heard it called "white box testing" before. This was just what I discovered worked well. Making the test code know nothing about the details of the code being tested means: 1) your tests align with the user experience (a user of a function doesn't care how the function works, but does care whether the behavior is right, e.g. sum(1,1)=2 vs sum(1,1)=99); 2) code only needs to change if the requirements change (the code depends only on the tests, which depend only on the requirements); and 3) every valid solution to the requirements will pass regardless of what the code looks like (so refactoring will never break a test unless you change the public API).
@PaulFrischknecht
@PaulFrischknecht 1 year ago
Very true stuff in here. Having tests that just provide coverage is still better than no tests at all, because at least you have to make all code executable in the context of tests, and you will probably put some dependency injection in place...
@thewiirocks
@thewiirocks 1 year ago
An interesting example of your argument that accidentally disagrees with some of your details was a system I developed over a decade ago. We used test cases to develop our APIs and services. However, the input/outputs were fairly complex to where we didn't understand how to create useful assertions. Rather, we found that the clockwork code was likely to throw an exception if there was a problem, so we ended up testing by failure. And when we worked in a particular area, the test was a harness for development where we would examine the inputs and outputs to ensure it was doing what we wanted. This proved to be surprisingly effective, giving us a clean and highly reliable system. The coverage levels were also accidental, but hit that nice 70-80% without issue. A few years later I had a team at another company that would bend over backwards to get 100% coverage by fuzz testing APIs and inserting fragile JMock tests. I told them to knock that off and focus on functional tests that proved the APIs. Code quality improved even though coverage dropped to 70-80%.
@gonubada
@gonubada 1 year ago
Really good point. I found several missing tests and also bugs using mutation testing. Expect the unexpected 😅
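Mutation testing, as mentioned above, flips small pieces of the code (e.g. `+` to `-`) and checks that at least one test fails; a surviving mutant reveals a gap in the suite. A toy, hand-rolled sketch of the idea (real tools such as mutmut or PIT automate the mutation step):

```python
# Original function and a hand-made "mutant" with one operator flipped.
def price_with_tax(price, rate):
    return price + price * rate

def price_with_tax_mutant(price, rate):
    return price - price * rate  # mutation: + became -

def suite(fn):
    """Run the (tiny) test suite against a given implementation."""
    return fn(100, 0.2) == 120

assert suite(price_with_tax)             # original passes
assert not suite(price_with_tax_mutant)  # a good suite kills the mutant
```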
@StreetsOfBoston
@StreetsOfBoston 1 year ago
I totally agree with your video, Dave. Test coverage is a great tool to spot *trends*. But it should never be the focus, because that takes focus away from actually writing good tests. Come to think of it, test coverage is like story points in agile development: great for spotting trends, with some predictive value, but it should never be the goal, never be the value delivered.
@SimoneLivraghi
@SimoneLivraghi 1 year ago
When I started with TDD I used coverage as a target, and sometimes I spent a lot of time trying to test a slice of code without ever hitting the coverage goal. I solved the problem by changing my point of view. Before, I asked myself "why can't I test it?". Afterwards, the question became "why do these lines of code exist?". Maybe TDD is a very zen, mindful practice, and when you give in to the temptation of keeping some untested lines of code, you can end up with untestable code - untestable because it is not useful or is unreachable.
@-Jason-L
@-Jason-L 1 year ago
If you are using TDD, how can you be trying to figure out how to test a slice of code? In TDD the code being tested doesn't exist yet. Maybe I am misunderstanding what you wrote?
@jimhumelsine9187
@jimhumelsine9187 1 year ago
I use code coverage as an indication of code that is not tested. I use it mostly with legacy code. In my mind, it's a tool, not a metric. I prefer behavior coverage over code coverage, but behavior coverage is much more of an art to achieve, and not easily measurable. If you are using behavior coverage and there's code without coverage, then either there's behavior not accounted for in the tests, or it's dead code and a potential candidate for removal.
@andrewsutton1657
@andrewsutton1657 1 year ago
I worked on a program that had many of these problems, and Dave is correct... the only way we fixed test coverage was through cultural change: encouraging the development team to understand what the testing was for, rather than gaming the metrics...
@Flamechr
@Flamechr 1 year ago
My idea of testing comes from the automation world. 1: unit-test the modules/classes that are used frequently. 2: integration tests ("write a feature test") that separate the software from the hardware. 3: system tests - we usually don't do those; there is often a dedicated system test team. We never really track code coverage; it's more a measurement of support cases and uptime of the product. When an incident happens, an integration test case is written that covers that area.
@RossOlsonDotCom
@RossOlsonDotCom 1 year ago
Please reduce the amount of background movement in the videos. I literally have to close my eyes in order to get through this.
@onursahin7970
@onursahin7970 1 year ago
I think instead of "test coverage != good tests" we can say "test coverage < good tests", meaning test coverage is not sufficient for a good test suite, but it is necessary for one.
@thought-provoker
@thought-provoker 1 year ago
Test coverage is a useful negative indicator, but not a useful positive indicator. If anything, low coverage only tells us that people most certainly didn't use TDD... A _low_ test coverage informs us that code changes will most likely be high-risk and fairly difficult. But we can derive absolutely _no information_ about areas with _high_ test coverage. Unfortunately, many decision makers and even developers don't understand this difference, so they pursue measurable targets that don't involve the use of TDD... 🤷
@tongobong1
@tongobong1 1 year ago
Exactly. Unit tests should exist to test functionality (a unit of functionality), not to test some piece of code - a method or a class - just to get coverage.
@kevink3829
@kevink3829 1 year ago
I found this interesting. I may have missed it in the video, but what measures/indicators are there to know that there is "good" test coverage? We need indicators other than bugs found in production, as that is too late. We have an innersourcing model where other teams can contribute code (contributor/maintainer model). We've used the code coverage % in the past to at least indicate that teams are writing tests to cover the code. Thanks in advance for your response.
@mrpocock
@mrpocock 1 year ago
My ideal testing framework would be one based on annotating functions and components with assertions (invariants, postconditions, etc.), and then having the compiler prove the bits it can and emit tests for the parts it cannot. At least at the low level, most things you want to test are small units of logic, like parsing a pretty-print of x giving you back x. I would be happy to drop back to manual test authorship for system-level testing. The automated approach has the benefit that it is much easier to spot when a change to a contract, but not to an API, is a breaking change. Sadly, the automated tooling "in the wild" tends to be aimed at theorem-proving nerds in computing science research teams rather than practical developers.
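The annotate-with-assertions idea can be approximated today with an executable postcondition decorator; no compiler proof, but the contract runs on every call. This is a hand-rolled sketch, not a reference to any existing framework:

```python
import functools

def postcondition(check):
    """Attach an executable postcondition to a function."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            # The contract is checked on every call, like an emitted test.
            assert check(result), f"postcondition failed for {fn.__name__}"
            return result
        return wrapper
    return decorate

@postcondition(lambda r: r >= 0)
def absolute(x):
    return x if x >= 0 else -x

assert absolute(-5) == 5  # runs the contract check too
```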
@cronnosli
@cronnosli 1 year ago
I really like working with TDD and then writing enough tests to take care of branch coverage. I think this gives a really nice, near-real picture of coverage. It can also let you discover unused code.
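Branch coverage, mentioned above, is stricter than line coverage. In the sketch below (all names invented), a single test with `is_member=True` executes every line, yet the `False` branch is never exercised; branch coverage would flag it, line coverage would not.

```python
def discounted(price, is_member):
    total = price
    if is_member:      # one test with is_member=True hits every line,
        total *= 0.9   # but branch coverage also demands the case
    return total       # where the condition is False

# Covering both branches:
assert discounted(100, True) == 90.0
assert discounted(100, False) == 100
```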
@-Jason-L
@-Jason-L 1 year ago
If you practice TDD, how is it possible NOT to have 100%? If you don't, you wrote code not driven by an identified behavior. This almost always stems from devs writing code to pass a test and then continuing to write code without writing another test. They literally outrun themselves.
@NathanHedglin
@NathanHedglin 1 year ago
Identified behavior based on the user story. Tests are just more code; they do NOT make the software any more correct.
@pixelslate9979
@pixelslate9979 1 year ago
I don't get it either. @ContinuousDelivery please elaborate!
@OllyShawlive
@OllyShawlive 1 year ago
Dave even reluctantly admits this in the summary. If you have high test coverage, you're either gaming coverage OR doing TDD well. The video concentrates too much on the gaming aspect and suggests that high coverage is a bad thing. It's like saying you can do well on an exam by cheating. Sure, of course. But someone who has studied hard and loves the subject would also do well on the exam - and that's a good thing!
@ContinuousDelivery
@ContinuousDelivery 1 year ago
My point is that achieving 100% is not the point of TDD, and if you do achieve it, that still tells you nothing (or at least very, very little). The advice for practicing TDD is to not write a line of code unless it is "demanded by a failing test". That is good advice, but this is engineering, not a religion; there are times when, pragmatically, it can make more sense to disregard it.

For example, UI code is tricky to do pure TDD for. The best approach for UIs is to design the code well, so that you maximise your ability to test the interesting parts of the system, and push the accidental complexity to the edges, minimising and generalising it. So if I am writing Space Invaders, when the bullet from my ship hits an invader, I want the invader to be destroyed. I could separate all of this, through abstraction, from the problem of painting the pixels on the screen - make rendering the model a separate and distinct part of the problem. I would certainly want to test as much of the generic rendering as I can, but there is a law of diminishing returns here.

A more technical example: concurrency is difficult to test. Using TDD is still a good idea, but there are some corner cases you can hit that may just not be worth it. My expectation is that very good TDD, practically and pragmatically, usually hits the mid 90s rather than 100. There is not necessarily anything wrong with 100, but hitting 100 for the sake of it tells you nothing useful. Aim to test everything, but don't agonise over the last few percent if it doesn't add anything practically. That is what ALL of the best TDD teams that I have seen have done. That may be a function of the kind of code they were working on - sometimes close to the edges of the system, and (this is a guess) nearly always about trade-offs around accidental, rather than essential, complexity.
@pixelslate9979
@pixelslate9979 1 year ago
​@@ContinuousDelivery What about double-entry bookkeeping? And the question of why a line of implementation code exists at all? When the existence of that line is not demanded by an existing specification, in the form of e.g. a JUnit test, then anybody can change the behaviour by changing or deleting the line and your test suite will still be green. And by the way, I'm pretty sure mutation tests, for example, would find such holes in the suite too.
@erikmeinders1711
@erikmeinders1711 1 year ago
Thank you Dave for this good explanation.
@ContinuousDelivery
@ContinuousDelivery 1 year ago
Thank you 😎
@fishzebra
@fishzebra 1 year ago
You can hit 80% coverage by using the code coverage suppression attribute - not a joke!
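That kind of gaming exists across ecosystems: the comment above refers to .NET's `[ExcludeFromCodeCoverage]` attribute, and Python's coverage.py has an analogue, the `# pragma: no cover` comment, which simply removes lines from the denominator. A sketch of how it inflates the number (the function is invented for illustration):

```python
def risky_parse(text):
    try:
        return int(text)
    except ValueError:  # pragma: no cover
        # coverage.py excludes lines marked with the pragma above,
        # so an untested error path still reports as "covered".
        return None

assert risky_parse("42") == 42
assert risky_parse("oops") is None
```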
@nikolaypopov9509
@nikolaypopov9509 1 year ago
Do you think TDD could be implemented in game dev (a one-person personal project), especially when the line between prototyping and building the final thing is pretty blurred? I find it challenging even to figure out how to test the things I care about in Unity, let alone doing TDD.
@ContinuousDelivery
@ContinuousDelivery 1 year ago
I know that it can. This is a fairly common question for me, and I am currently working, as a side project in my spare time, on writing a simple space-invaders-style game in Python with TDD. The idea is to use this to demonstrate the techniques of isolating the "pixel painting" from the "game logic" so that you can thoroughly test the game logic. I don't know when I will release it yet, though. The game with TDD is easy enough, but turning that into a video or a training course is a lot more work.
@tongobong1
@tongobong1 1 year ago
The problem with TDD is that it is not applicable to all code - e.g. code that uses a lot of random data where you need to see the final result, or graphics rendering...
@mathalphabet5645
@mathalphabet5645 1 year ago
I don't understand the part where the critique is aimed at tests that fail when something is changed. I thought unit tests were supposed to be highly coupled to the unit they test. If I test some encoding that adds dashes for spaces and change that to underscores, a test should fail. My confusion probably comes from not understanding the context of the change that shouldn't break the test, or how many tests. I would appreciate it if someone could share some insight.
@unrulyObnoxious
@unrulyObnoxious 1 year ago
One situation I can relate to from my company's codebase is that people were asserting error messages as well. This couples the test and the unit unnecessarily: if you wanted to change or improve an error message, a test would break.
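A less brittle version of the situation above asserts the error's *type*, not its human-readable message, so the wording can improve without breaking tests. A sketch (exception and function names are invented):

```python
class InsufficientFunds(Exception):
    pass

def withdraw(balance, amount):
    if amount > balance:
        # The message text is free to change without breaking any test.
        raise InsufficientFunds(f"cannot withdraw {amount} from {balance}")
    return balance - amount

# Robust test: assert only the exception type, never the message string.
caught = False
try:
    withdraw(10, 50)
except InsufficientFunds:
    caught = True
assert caught
assert withdraw(50, 10) == 40
```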
@tongobong1
@tongobong1 1 year ago
A unit test should be a black-box test that knows nothing about the implementation of the functionality it is testing. The London style of unit testing couples unit tests to the classes they test, and that style is just terrible. Use the classical style of unit testing.
@BroileR2007
@BroileR2007 1 year ago
I don't understand: how can code coverage be 80%, or anything below 100%, if TDD is consistently applied? Why do we say "over 95%" is trying too hard? Following TDD means you don't write code until you have a failing test which requires that code to be written. So where did those untested 5% come from?
@vikingPotes
@vikingPotes 1 year ago
Let's say you are coaching a team to become TDDers. How would you work with them to decide how they are doing? I find that metrics, when done well, help a team improve. What type of metrics would you recommend to a team trying to get started?
@ContinuousDelivery
@ContinuousDelivery 1 year ago
I would recommend that you use "Stability & Throughput" as more effective metrics, with a proven link to better software development performance on a broad category of measures.
Stability: change failure rate & mean time to recover from failure.
Throughput: time from commit to 'ready to release' & frequency of release.
Optimise for those. You can track coverage and discuss it, but you won't get team buy-in for testing if you enforce or reward it as a target.
@heyyrudyy404
@heyyrudyy404 1 year ago
Isn't example-driven development or REPL-driven development a better overall approach than TDD?
@ContinuousDelivery
@ContinuousDelivery 1 year ago
No, I think they are very different things.
@PaulFrischknecht
@PaulFrischknecht 1 year ago
gotta love "building worse software slower", much better than "building better software faster"
@llothar68
@llothar68 1 year ago
Some code parts are almost impossible to test and need careful thinking, design and praying. I'm talking about all the error handling code for things that could go wrong. A variant of defensive programming.
@jkf16m96
@jkf16m96 1 year ago
I usually try to avoid exception handling, to be honest, and just return a Result object from my methods. It seems to work really well for testing errors, even though you can assert for exceptions... Honestly, I don't like it when the flow jumps somewhere else or is caught inside an empty catch with no effect. Sometimes I hate programming.
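The Result style mentioned above can be sketched with a small dataclass: errors become ordinary return values that tests can assert on directly, with no try/except choreography. This is a hand-rolled sketch, not a particular library:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Result:
    ok: bool
    value: Any = None
    error: str = ""

def divide(a, b):
    # Failure is an ordinary value, not a jump in control flow.
    if b == 0:
        return Result(ok=False, error="division by zero")
    return Result(ok=True, value=a / b)

# Tests assert on plain values instead of catching exceptions.
assert divide(10, 2).value == 5.0
assert divide(1, 0).ok is False
assert divide(1, 0).error == "division by zero"
```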
@-Jason-L
@-Jason-L 1 year ago
If you can't create the failing preconditions, they are not required to be coded for. In TDD, those guard clauses would never be coded if there wasn't first a failing test. YAGNI.
@judas1337
@judas1337 1 year ago
If someone can help me understand: how do you measure test quality? In the video, Farley seems to me to be suggesting that test quality is too hard to measure, and that by practicing good TDD you don't have to measure it, because the practice will create high-quality tests. Or are we focusing on the wrong thing by looking at the quality of tests?
@NathanHedglin
@NathanHedglin 1 year ago
😂 Manual testing against the actual requirements, OR write tests for your tests...
@ContinuousDelivery
@ContinuousDelivery 1 year ago
TDD has "tests for the tests" built-in, that is why we run them and see them fail!
@ContinuousDelivery
@ContinuousDelivery 1 year ago
Yes, "test quality is too hard to measure" at least automatically, and test quantity tells you next to nothing, certainly nothing about test quality. You want lots of tests... but I would take less coverage with good tests rather than more coverage with crap tests every day of the week. How do we get "good tests"? Not by inspecting them after the fact, but by building the "good" into the way that we create them - as a function of the dev process. Test the test by running it and seeing it fail, before you write the code that makes it pass!
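The "see the test fail first" step above can even be mechanised: run the new test against a deliberately empty stub, confirm it fails, then implement. A minimal red-then-green sketch (the test and behaviour are invented for illustration):

```python
def run_test(impl):
    """The test: returns True if the implementation passes."""
    return impl("hello") == "HELLO"

# RED: a stub that does nothing must fail the test. This proves
# the test is capable of detecting the missing behaviour.
assert run_test(lambda s: s) is False

# GREEN: now write just enough code to make it pass.
assert run_test(str.upper) is True
```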
@haskellelephant
@haskellelephant 1 year ago
Tests without assertions are a problem, but they aren't technically completely useless. They check that you don't get any unanticipated exceptions, segfaults, non-termination, etc. The fewer guarantees the language provides from static checking, the more simple test coverage of any kind helps.
@ContinuousDelivery
@ContinuousDelivery 1 year ago
That's a VERY LOW BAR for quality - "it doesn't blow up" 🤣
@haskellelephant
@haskellelephant 1 year ago
​@@ContinuousDelivery Yeah, definitely a low bar, and I am certainly not advocating it as a practice. Personally, I am not much interested in the aggregate number, but rather in whether I have accidentally missed some cases or have any dead code. That being said, 80% means 1 in every 5 lines of code could just blow up every time, which is a lot to keep on top of.
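An assertion-free "smoke" test of the kind discussed above looks like this: it only proves the code path doesn't raise, which is exactly why it's such a low bar compared with an asserting test of the same path.

```python
import json

def test_smoke_no_assertions():
    # Executes the code path; passes as long as nothing raises.
    json.dumps({"order": 42, "items": ["a", "b"]})

def test_with_assertion():
    # The same path, but actually checking the behaviour.
    assert json.loads(json.dumps({"order": 42}))["order"] == 42

test_smoke_no_assertions()
test_with_assertion()
```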
@sneibarg
@sneibarg 1 year ago
Our senior executives are pushing some initiative to start unit testing Spring configuration classes with a bunch of bean declarations. Seems like busy work to me.
@judas1337
@judas1337 1 year ago
So do you then follow orders and do something harmful (to the organization and/or the users/customers), or do you speak up and refuse? Your comment got me thinking about where the line is, when just following orders, between "it's the organization's responsibility" and "it's also your responsibility". Like with the engineers in the VW emissions scandal.
@sneibarg
@sneibarg 1 year ago
@@judas1337 The reason we would have to write tests for those things is that the ask is for there not to be certain types of Sonar exclusions. I will definitely continue to suggest it's a wasteful effort.
@salvatoreshiggerino6810
@salvatoreshiggerino6810 1 year ago
I'd just chase the coverage. I recently got fired, partly for making a big fuss about getting people to test their code while not providing any improvement visible to management in terms of coverage stats. I know, you know and Dave knows that chasing coverage is pointless, but I have no credibility because I don't have an MBA.
@sneibarg
@sneibarg 1 year ago
@@salvatoreshiggerino6810 Easy enough for true microservices, but the monoliths have a looooooooot of beans.
@tongobong1
@tongobong1 1 year ago
@@salvatoreshiggerino6810 I was also fired, a long time ago, for trying to save a year of useless work. I was right - the project was a total waste of time and it was stopped after almost a year - but its purpose was getting money from the customer; it was a fake project serving as an excuse for a financial transaction. Since then I don't argue much against bad ideas from management.
@johnpricejoseca1705
@johnpricejoseca1705 8 months ago
I worked for a wise boss who understood these sorts of dysfunctions. He had a saying: "You get the animal you select for." The key is selecting for the right things. That's the tricky bit, no? :)
@honestcommenter8424
@honestcommenter8424 1 year ago
I agree that hitting 100% coverage is difficult, but I really have hit it with actual assertion tests many times; it isn't impossible.
@pierrehebert9743
@pierrehebert9743 1 year ago
What people tell me code coverage is for: ensuring that you are testing every edge case (but what if you have explicitly undefined behaviour? Should you really be testing that?) Me: oh look, a tool that's sometimes better than a stack trace at finding out why I get exceptions, but still faster than manually debugging!
@ContinuousDelivery
@ContinuousDelivery 1 year ago
How does software intentionally have "undefined behaviour"? If it does, then it is probably a bug. The idea of TDD is not to be perfect; it is to help you design the code to do what you want it to. To do that, you need to have an idea of what you would like it to do. If you can't define what you want it to do, you aren't ready to write the software, or to teach it if it is an ML system.
@pierrehebert9743
@pierrehebert9743 1 year ago
@@ContinuousDelivery If you consider a mathematical formula, for example f(x) = sin(x)/x, then there may be inputs where the value is undefined. Typically, this would be an error, but for the sake of robustness, one may choose to give it a value, but to explicitly mark it as undefined in documentation. In the above example, f(0) could be defined as 1, which is arguably just as correct as failing, therefore not a bug. In these cases, you would maybe want to avoid testing the special case, because the result could be considered incorrect (depending on the use case), and could possibly change. Additionally, if your test is your design and can serve as documentation, then not testing this path is implicitly the same as leaving it undefined. The mathematical formula example can extend to more practical scenarios as well. For example, if you have code to turn on a furnace when the temperature is below 21 degrees C (and off otherwise), sampling every 300 milliseconds in a house with .05 degree C of accuracy, the difference between < and
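The sin(x)/x example above can be written down directly: defining f(0) = 1 matches the limit of sin(x)/x as x approaches 0, so it is a documented choice rather than a bug, and whether to pin it with a test is exactly the judgment call being discussed.

```python
import math

def sinc(x):
    # f(0) is mathematically undefined; we define it as 1, the
    # limit of sin(x)/x, and document that choice explicitly.
    if x == 0:
        return 1.0
    return math.sin(x) / x

assert sinc(0) == 1.0
assert abs(sinc(1e-6) - 1.0) < 1e-9  # continuous near zero
```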
@voomastelka4346
@voomastelka4346 1 year ago
My experience has been that people don't know how to write effective (unit) tests because no one has ever taught them how, and the available examples and tutorials are pretty poor. Typically what you see is checking outputs for some specific inputs, e.g. sum(1,2,3) == 6 or whatever, and the more assertions like this you have, the better. Property-based testing (e.g. QuickCheck) is a far better way to test your code, and I have no idea why it is so seldom mentioned, even though the method has been known forever and frameworks are available for all major languages (QuickCheck itself is for Haskell, I believe).
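The property-based idea above (QuickCheck in Haskell; Hypothesis is the usual Python framework) checks invariants over many generated inputs rather than a few hand-picked examples. A framework-free sketch of the idea using only the stdlib `random` module:

```python
import random
from collections import Counter

def my_sort(xs):
    return sorted(xs)  # stand-in for the code under test

random.seed(0)  # reproducible run
for _ in range(200):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    out = my_sort(xs)
    # Property 1: output is ordered.
    assert all(a <= b for a, b in zip(out, out[1:]))
    # Property 2: output is a permutation of the input.
    assert Counter(out) == Counter(xs)
```

A real framework adds input shrinking (finding the smallest failing case), which this sketch omits.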
@MartinMadsen92
@MartinMadsen92 1 year ago
Aiming at a certain test coverage is yet another example of turning a metric into a goal. A metric should never be a goal in its own right; it should be kept separate from the goal to prevent cheating and shortcut-taking.
@hightechsystem_
@hightechsystem_ 10 months ago
Chip development, where an unchecked use case can literally cost millions in a die respin / product recall to fix, needs extremely thorough testing - more so than field-upgradeable software/FPGAs. Your key points hold. I would say if you need 100%, you need to achieve it first on leaf modules using TDD, then through constrained-random testing at large module boundaries. If the design is poor, edge cases will go undetected. You need to improve quality at the very first instance through design and good decoupling at problem boundaries. Sometimes you need to hit 100%, but you must design your code so you can achieve that without cheating.
@ContinuousDelivery
@ContinuousDelivery 10 months ago
Yes 100% is good as a side-effect and terrible as a goal.
@hightechsystem_
@hightechsystem_ 10 months ago
@@ContinuousDelivery I've been watching most of your videos. I don't see a video with guidance for IEC 61508 or similar styles, where you have requirements (possibly formally described, possibly based on safety goals) and you need to refine down towards specifications and implementation. I'm trying to understand how to integrate requirements traceability in combination with BDD and TDD.
@hiftu
@hiftu 1 year ago
Test coverage: it tells you when you did not do a good job (low coverage). It does _not_ tell you whether you did a good job.
@tldw8354
@tldw8354 1 year ago
Wait: to make an assertion you have to understand what the code does?! Does this imply that most TDD users don't care about the functionality of each line of code?
@ContinuousDelivery
@ContinuousDelivery 1 year ago
In one sense yes. The goal of TDD is not to "test every line of code", but rather to build good code that does what we want, to do that we drive the development and design from tests. In order to "build good code that does what we want" we need to be clear, in the form of a test, about what we want the code to do, we don't need to, and ideally don't want to, say anything at all about how the code does what it does. So I don't want to test every line of code. However, if I am writing "good code" every line has a purpose, and I may want to test that that purpose is met. This is a VERY important distinction that I think people often miss about TDD. You don't start from the code and then imagine how to test it, you start from what the code needs to achieve, write that down as a "test", and then figure out how to make the code do that.
@PaulSebastianM
@PaulSebastianM 1 year ago
Writing assertions is arguably a lot easier than writing mocks... the worst thing I experience when writing tests is creating the damn mocks.
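One way to ease the mocking pain described above is a hand-rolled fake: a tiny in-memory stand-in for the dependency, so the test needs no mock-framework choreography. All the names below are invented for illustration:

```python
class FakeUserStore:
    """In-memory stand-in for a database-backed repository."""
    def __init__(self):
        self._users = {}
    def save(self, user_id, name):
        self._users[user_id] = name
    def load(self, user_id):
        return self._users.get(user_id)

def rename_user(store, user_id, new_name):
    # Code under test depends only on the store's interface.
    if store.load(user_id) is None:
        raise KeyError(user_id)
    store.save(user_id, new_name)

# No mock setup or expectation recording; just state in, state out.
store = FakeUserStore()
store.save(1, "Ada")
rename_user(store, 1, "Grace")
assert store.load(1) == "Grace"
```

A fake like this is reusable across many tests, whereas per-test mock expectations tend to be copy-pasted and brittle.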
@netwolff
@netwolff 1 year ago
It's kind of funny (sad) that one of the oldest insights is still overlooked in all kinds of areas: give me a KPI and I will tell you how I will behave. Give people money for monkeying around and that is exactly what they will do - and I don't fault them. Pay good salaries and don't use achievement bonuses. Make sure people like their work and their work environment, and care about what they're doing.
@tongobong1
@tongobong1 1 year ago
Focusing on the code under test is bad when writing unit tests. We should always focus on the functionality that we want to test instead.
@chudchadanstud
@chudchadanstud 1 year ago
3:55 - 😂😂😂
@petervo224
@petervo224 Жыл бұрын
OK, kind of disagree here, so allow me to hop in. First, the title should change to: PRACTICALLY, DON'T CHASE TEST COVERAGE WHEN IT'S TOO LATE! The phenomenon described is correct only for the specific scenario where it is already too late to chase that metric. Developers at that point will merely try to improve coverage with little (if any) regard for the quality of the code and the tests. This not only wastes developers' time, but also adds more burden in maintaining the tests (not the code).
Second, test coverage is great if it is done PRACTICALLY. Here are some scenarios of practical application:
(1) Do it at the beginning, before you write your first line of code. It's like paying insurance early. When it's done too late, you are paying the technical debt for not applying it, plus the compounded interest that debt has accumulated.
(2) Do it at 100% (especially when starting from the beginning). Why set the half-hearted goal of 80% when you can do 100%? One uncovered line of code (which can be 1% or 0.0001%) is a potential technical debt whose price compounds over time, so why leave it to chance? Also, it's easier to spot a little red added to an all-green coverage report than to detect a drop from 81.11% to 81.10%.
(3) Do it with fast metrics: line coverage and branch coverage are pretty fast with most tools and frameworks. Mutation coverage is slower and less stable, so apply it with careful moderation.
(4) Do it with zones: this is how you deal with legacy code where it is too late to chase coverage. You cannot practically chase coverage for the legacy code, but you can chase it for the code that you and your TDD-practicing team are about to create on top of it. Create an isolated project that references and depends on the legacy code. This is your green zone, where you can chase test coverage early, totally (100%), and fast.
The legacy code is the legacy zone (if it is too bad, you can call it the sh*t zone), which you and your team change minimally (when needed, mostly to fix bugs that the green zone detects, or to make it more open for extension so the green zone can build on it better). Beyond legacy zones, there are also many areas where code is practically difficult to cover with tests, e.g. front-end web UI. Being able to identify and isolate these areas, keeping their frequency of change low, is the key to applying zones of coverage practically.
(5) Do it with code reviews to improve your developers: when your project (or a certain zone) has a 100% coverage goal (applied practically), code reviews make it much easier to see which developers disregard quality and merely cope with the coverage requirement (their tests will make no sense or be needlessly lengthy, and they will explain this away as maintaining the 100% cover). You can then correct and improve those developers' attitudes and skills before their contributions do any further harm to your project's source.
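The distinction the comment draws between line coverage and branch coverage can be shown with a minimal Python sketch (the function and values here are purely illustrative, not from the video):

```python
# A minimal sketch of why line coverage can be misleading compared to
# branch coverage: one test can execute every line while still skipping
# a branch entirely.
def apply_discount(price: float, is_member: bool) -> float:
    total = price
    if is_member:
        total *= 0.9  # members get 10% off
    return total

# This single call executes every line of the function, so a line-coverage
# tool reports 100%...
assert apply_discount(100.0, True) == 90.0

# ...but the False side of the `if` was never taken. A branch-coverage
# tool would flag it; this second call is needed to cover both branches.
assert apply_discount(100.0, False) == 100.0
```

This is why the comment calls both metrics "fast" but still distinct: a team chasing line coverage alone can leave untested branches behind.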
@ContinuousDelivery
@ContinuousDelivery Жыл бұрын
I don't agree I am afraid. The team that I mention in the video, started out with the "80% test coverage target", and then cheated to achieve it. I think that coverage is ALWAYS a poor target, but a good outcome.
@petervo224
@petervo224 Жыл бұрын
@@ContinuousDelivery Hmm... That makes me feel the title "Don't chase test coverage!" is similar to the usual advice at the end of a first introductory TDD class, "Don't apply TDD to your work!" (I recall that at least from Uncle Bob). Test coverage, TDD, and things of that kind get people excited, but when applied to professional work by people who do not know them well, they tend to create more harm than good. That sounds very much like the team you mentioned, which might have lacked sufficient proficiency in test coverage and been under timeline pressure, hence the worse outcome. I believe you have seen the same scenario with TDD, where a CTO comes back from a TDD talk, commands his team to start doing TDD overnight, finds it makes work slower with not much improvement, and botches the whole TDD revolution.
Test coverage, TDD, and similar disciplines are things we can obsess over in our free time and on katas; in professional work (mostly legacy code, or code we do not fully own), we use only a small part of what we know. A recent example where I used test coverage on professional work (sorry if it seems like bragging) was when I created a new authentication feature for a web product. It was out of the question to give that product's thousands of lines of code good coverage. However, I found it easy to isolate the source of the new feature I was about to create, and targeted that source for 100% coverage, which kept the feature well-behaved and simply designed (just a few hundred lines). And I could only do that after half a year of obsessing over test coverage on an off-work project; had I tried the same thing a year earlier, with no proficiency in the field, it would have taken too much time.
If that's the case, I agree with the message you intend, though I hope the title "Don't chase test coverage!" does not discourage people from becoming more aware of and exploring test coverage. I slightly disagree that "coverage is ALWAYS a poor target". MOST OF THE TIME, especially in professional work, it is a poor target, but when you get into the right scenario where it becomes a good target, the payoff is significant. For example, while test statuses act as the target for refactoring the code (keep them all passing), 100% coverage can act as the target for refactoring the tests (if coverage drops, your refactored tests have missed something). And the best payoff I find is Simple Design: even a single uncovered line of code can reveal much insight about the design.
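The "isolate the new feature and hold only it to 100% coverage" idea from this comment can be sketched as a seam between new and legacy code. The names here (AuthFeature, legacy_lookup) are hypothetical, invented for illustration:

```python
# Hypothetical sketch of the "green zone" pattern: new, fully tested code
# wrapping untested legacy code behind a narrow, injectable seam.

def legacy_lookup(user_id):
    # Stands in for legacy code that is impractical to test directly.
    raise RuntimeError("talks to a real database")

class AuthFeature:
    """New feature code: small, isolated, and held to 100% coverage."""

    def __init__(self, lookup=legacy_lookup):
        self._lookup = lookup  # seam: the legacy dependency is injectable

    def is_authenticated(self, user_id, token):
        record = self._lookup(user_id)
        return record is not None and record.get("token") == token

# In tests, the legacy seam is replaced with a stub, so coverage tools can
# be pointed at the new zone alone and realistically demand 100%.
feature = AuthFeature(lookup=lambda uid: {"token": "abc"} if uid == 1 else None)
assert feature.is_authenticated(1, "abc") is True
assert feature.is_authenticated(2, "abc") is False
```

The design choice is the injectable `lookup` parameter: it lets the coverage target apply to the few hundred new lines without ever requiring the legacy code to be exercised.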
@75yado
@75yado Жыл бұрын
First, test coverage is a number that should never reach management. Second, there is no such thing as 100% test coverage: that would mean you have covered every single path, every possible and impossible problem that could happen, even the ones that never happen, which is unrealistic.
@ContinuousDelivery
@ContinuousDelivery Жыл бұрын
100% line coverage is certainly possible if you practice TDD, but not necessarily desirable, as I say in the video. I'd consider the high 80s or low 90s as normal for TDD. Covering every single path is not the same as covering every single use, though.
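The reply's distinction between covering every line and covering every use can be made concrete with a small illustrative sketch (the `mean` function is an invented example, not from the video):

```python
# Minimal sketch: 100% line coverage is not the same as testing every use.
def mean(values):
    return sum(values) / len(values)

# This single assertion executes every line of `mean`, so a coverage tool
# reports 100% line coverage.
assert mean([2, 4]) == 3

# Yet a perfectly plausible use is untested and broken: an empty list
# divides by zero. The coverage report said 100%, but this behaviour was
# never specified by any test.
try:
    mean([])
except ZeroDivisionError:
    pass
```
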
@75yado
@75yado Жыл бұрын
@@ContinuousDelivery What is 100% test coverage? Does it mean every function has its test, or that every use case of every function is tested? And I 100% agree with your approach to TDD. I was taught that 100% is desirable but neither really achievable nor wise. That means another measurable value for management is useless, and I am afraid that at this tempo management will abolish agile entirely, as there will be no measurable values left for them to control.
@hematogen50g
@hematogen50g Жыл бұрын
If you have a lot of tests, maybe you need some tests for your tests.
@tongobong1
@tongobong1 Жыл бұрын
You don't need tests for tests, because tests should contain only trivial logic: no loops, conditions, or complex calculations. Tests for trivial logic in production code are usually useless, so you shouldn't write them.
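The "tests should contain only trivial logic" point can be sketched with a contrived example (the `add` function and test names are invented for illustration):

```python
# Sketch: a test containing its own logic (risky) versus straight-line
# tests (trivial, so they need no tests of their own).
def add(a, b):
    return a + b

# Risky: the loop and condition mean the test itself has behaviour that
# can silently be wrong (e.g. an empty case list passes trivially).
def test_add_with_logic():
    for a, b, expected in [(1, 2, 3), (2, 2, 4)]:
        if expected is not None:
            assert add(a, b) == expected

# Trivial: each case is a single straight-line assertion. There is no
# logic here that could itself contain a bug worth testing.
def test_add_small_numbers():
    assert add(1, 2) == 3

def test_add_equal_numbers():
    assert add(2, 2) == 4

test_add_with_logic()
test_add_small_numbers()
test_add_equal_numbers()
```
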
@hrtmtbrng5968
@hrtmtbrng5968 Жыл бұрын
What is this? Unit Testing for Dummies?
@NathanHedglin
@NathanHedglin Жыл бұрын
Tests are bad. More code to maintain, run and refactor.
@ContinuousDelivery
@ContinuousDelivery Жыл бұрын
Nope! If you don't test your code once you have written it, you are irresponsible. So now all we are talking about is the most efficient way to test, and TDD is it, because it dramatically reduces bug count.
@tongobong1
@tongobong1 Жыл бұрын
Only bad tests are bad; it is better not to have bad tests at all. Good tests will help you develop much faster with fewer bugs, and will give you control over the complexity of the code, so you can build much more complex logic without getting into huge trouble extending it, refactoring, redesigning, or fixing bugs.