I loved the 2013 talk. It was my TDD lightbulb moment, and just as that talk helped me, so has this one. A lot of the discussion around TDD only adds confusion, and some of it carries too much emotion; Ian has a great way of providing clarity in a calm and reasoned manner.
@pepijnkrijnsen4 · 2 years ago
I've listened to the 2013 talk several times and learned a ton, despite it being somewhat unstructured and angry. This one is an order of magnitude better. I've been making my way through TDD by Example, coding along, and the perceived "issues" with TDD touched on in this talk are such an enrichment to the book. Thanks a mil Ian!
@petermckeown8742 · 3 years ago
Valuable insights on using TDD effectively in real projects. Thanks Ian, excellent talk.
@michaelslattery3050 · 2 years ago
To paraphrase: test your controller logic (input ports), and mock your I/O (output ports, DAOs). You don't need to test the internals.
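That idea can be sketched in a few lines. This is a hypothetical example (the `ArchiveInactiveUsers` use case and its repository are invented for illustration, using Python's `unittest.mock`): the test drives the input port's logic while the output port, being I/O, is mocked.

```python
from unittest.mock import Mock

# Hypothetical input port: a use case that archives inactive users.
# The repository (output port / DAO) is I/O, so it gets mocked;
# the behaviour under test is the use case's own logic, not its internals.
class ArchiveInactiveUsers:
    def __init__(self, repo):
        self.repo = repo

    def execute(self):
        inactive = [u for u in self.repo.all_users() if not u["active"]]
        for user in inactive:
            self.repo.archive(user["id"])
        return len(inactive)

def test_archives_only_inactive_users():
    repo = Mock()
    repo.all_users.return_value = [
        {"id": 1, "active": True},
        {"id": 2, "active": False},
    ]
    assert ArchiveInactiveUsers(repo).execute() == 1
    repo.archive.assert_called_once_with(2)

test_archives_only_inactive_users()
```

The test exercises the use case only through its public entry point, so the implementation can be rewritten freely as long as the behaviour holds.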
@EmmanuelBLONVIA · 1 year ago
Nope. Test the use case rather than the controller (ref. Clean Architecture). Check what he says at 57:00, but you'd do better to start at 53:45.
@michaelslattery3050 · 1 year ago
@@EmmanuelBLONVIA I agree, but when I say it precisely as you have, people don't get it.
@michaelpesin946 · 3 years ago
Great talk. This settled a lot of issues I've encountered trying to get into TDD (and to prove its importance to my boss) and while listening to a lot of other opinions. I'm reading "TDD by Example", and this talk organized several issues I had while reading through the examples, summarizing the main points.
@SlowAside5 · 1 year ago
I agree with everything he said, except for some of his comments on BDD. I don’t have any issues with regular expressions when working with Gherkin and SpecFlow. While it’s true that the product owner doesn’t write the Gherkin (that tends to be done by the programmer and the QA), the English DSL is valuable because the product owner can review it after we’ve written it (or even as we’re writing it). The QA usually isn’t technical either so they benefit as well. I also like how the step definitions are reusable across multiple scenarios. This might be achievable with xUnit frameworks but with SpecFlow at least you get it out of the box.
@MartinRudat · 11 months ago
Hmm. If I understand it correctly, in terms of semantic versioning: every time you add new tests, you bump the minor version, because the API can now do more than it could before. If you change or remove existing tests, you bump the major version, because the new API is no longer compatible with the old one. And if you don't change any tests, you bump the patch ("teeny") version, because you improved the internals in some way but the API can't have changed.
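The rule above fits in a tiny decision function. This is only a sketch of the commenter's mapping (the function name and tuple representation are invented), not an established tool:

```python
def next_version(version, tests_added, tests_changed_or_removed):
    """Map test-suite changes to a semver bump, per the rule above:
    changed/removed tests break the public contract -> major;
    new tests mean new capability -> minor;
    untouched tests mean an internal-only change -> patch."""
    major, minor, patch = version
    if tests_changed_or_removed:
        return (major + 1, 0, 0)
    if tests_added:
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)

print(next_version((1, 2, 3), tests_added=True, tests_changed_or_removed=False))  # (1, 3, 0)
```

Note the precedence: a release that both adds and changes tests is still a major bump, since compatibility was broken.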
@bmarvinb · 2 years ago
Great talk
@testingcoder · 2 years ago
What is the value in breaking this into two different steps? I.e., why not write properly clean code in the first place?
@pepijnkrijnsen4 · 2 years ago
Kent Beck answered this 20 years ago. In short: for many problems, you don't know what the clean code should look like until you hack a quick & dirty solution first. But for the proper answer, just read his book; it's not that long. It works really well for some people (like Beck, obviously) and not at all for others. It's not a magic potion, and if you don't like this way of working, or you don't find it benefits you, just use something else.
@testingcoder · 2 years ago
@@pepijnkrijnsen4 Really appreciate your comment! I think I have that book somewhere. Unfortunately, my reading list is really huge, so I don't know when I'll get to this one. The fact that it's short may move it up a bit.
@grokitall · 5 months ago
"Just do it right the first time" is the best approach, but only when you have precise knowledge of the inputs, the outputs, the transforms you need to get from one to the other, and the other constraints that apply, like speed and memory usage. Unfortunately, while this may work for things like low-level drivers, on over 60% of projects you don't know this stuff until you try to write it and find out it was exactly what the customer did not want.

TDD and CI work by having the developer write functional unit and integration regression tests to determine whether the developer wrote what they thought was needed. Acceptance tests are typically written when the work is presented to the customer, to confirm that the developer actually understood the customer's requirements; and since the customer doesn't know those requirements on 60% of projects, that can't happen at TDD and CI time. Instead, acceptance tests are written when the customer is available and then executed as part of the deployment pipeline, which checks that the code provides both the functional and non-functional aspects the customer needs. A few of these tests can be added to the CI system, but mostly they do a different job.

TDD creates relatively clean, testable code with tests that you know can fail. CI takes these tests and highlights any code which no longer meets the executable specification the tests provide and thus no longer does what was needed. CD and acceptance tests deal with how it works: is it fast enough, does it use a small enough set of resources, anything which should stop you deploying the code. Monitoring and chaos engineering check things that can only be discovered in production, i.e. how it scales under load.
@misterbuu666 · 3 years ago
Great talk.
@jensappelmans3427 · 3 years ago
Awesome, thanks Ian!
@savbace · 2 years ago
Brilliant!
@BryonLape · 2 years ago
I can hear the Release manager now..."Your code doesn't work"...."but my tests pass"...."You are missing database updates"....
@michelmdf · 2 years ago
awesome
@testingcoder · 2 years ago
My thoughts on it:
1) I agree with the assessment of the London school of TDD. Even though it's officially TDD, there's not much of a "test-drive" element; it's essentially a slightly improved test-last. That doesn't necessarily mean it's bad, but the side effect is there: tight coupling between tests and tested code.
2) The refactoring slide (kzbin.info/www/bejne/rICyZJuroa6Wqbs) describes a possible TDD process where you, quote, "copy code from stack overflow to make test pass", and then you refactor because the code you got was rubbish. I don't get why not aim for clean code in the first place. What is the value of breaking this into two different steps?
3) "You stop doing design up front" kzbin.info/www/bejne/rICyZJuroa6Wqbs What is the value in this, though?
4) On BDD: very good point, and it also relates to one of my recent posts: kzbin.info/www/bejne/rICyZJuroa6Wqbs
5) Very nice point on Gherkin. Gherkin was a nice idea for helping customers write acceptance tests. Unfortunately it never works, so it's just another, unnecessary layer of complexity kzbin.info/www/bejne/rICyZJuroa6Wqbs
6) Ian says "writing test-last is classical testing" kzbin.info/www/bejne/rICyZJuroa6Wqbs. Unfortunately that's not correct: writing automated tests is not testing.
7) "By using test-last you are essentially saying you prefer waterfall". Hah, that's a bold statement! Waterfall isn't inherently bad, and even if it were, I don't think the statement is correct kzbin.info/www/bejne/rICyZJuroa6Wqbs
8) "The problem is the long feedback loop" - nah, test-last does not have to have a long feedback loop (though that depends on what you call long) kzbin.info/www/bejne/rICyZJuroa6Wqbs
9) "Not all code should be written by TDD". Indeed.
10) "Write a test in response to a requirement" - I don't think there are many requirements that could be "covered" or represented by one test kzbin.info/www/bejne/rICyZJuroa6Wqbs
@testingcoder · 2 years ago
Who would have expected Ian criticising the London school of TDD at NDC London? 😆
@grokitall · 5 months ago
Pure waterfall has been known to be bad since the 1970s, because it moves all the feedback to the end, where it provides the least value. The only solution to this is to write easily testable code, and the only way to ensure that is to write the functional tests first: you see each test fail, then pass, and then when you refactor you see that it does not depend on internals.

People are rubbish at writing functional tests, which makes them dislike testing when their tests, which might not even be functional and might depend on internals, fail or are fragile. Functional regression tests are the only way to do CI, and not doing it means you don't care how bad your code is, how little refactoring you do, and thus how little you care about technical debt. Doing it requires you to write testable code, which eliminates a lot of code smells and bad practices, including the need for the code to be heavily infested with mocks.

The future of coding requires you to write better software faster, which requires CI, plus some level of CD for your non-functional tests.
@jordanwalker7076 · 11 months ago
This guy never puts any code samples in his talks...
@fennecbesixdouze1794 · 2 years ago
There's really not much to say here, and it is definitely not a problem with mocks. Yeah, if you write bad code, then it's bad. Tests are code. If you write tests that are horribly coupled, then they will be horribly coupled. This is a story as old as time: someone tries to do Test Driven Development but doesn't understand that the test code is also code, and therefore needs to be designed well for all the same reasons as the production code.
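A hypothetical illustration of that coupling (the `Cart` class is invented for this sketch): the first test asserts on private state, so any harmless refactoring of the internals breaks it; the second asserts only on behaviour through the public API, so it survives.

```python
class Cart:
    def __init__(self):
        self._items = []  # internal detail; renaming this breaks the coupled test

    def add(self, price):
        self._items.append(price)

    def total(self):
        return sum(self._items)

# Coupled: reaches into private state, so it fails under harmless refactoring
# (e.g. storing a running total instead of a list).
def test_cart_coupled():
    cart = Cart()
    cart.add(5)
    assert cart._items == [5]

# Behavioural: only uses the public API, so the internals are free to change.
def test_cart_behaviour():
    cart = Cart()
    cart.add(5)
    cart.add(7)
    assert cart.total() == 12

test_cart_coupled()
test_cart_behaviour()
```

The same design pressure applies whether or not mocks are involved: a mock that mirrors internal call sequences is just `test_cart_coupled` in another costume.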