@Rope257 14 days ago
@@ContinuousDelivery How would you handle testing an application that mostly functions as a proxy to other services? I.e., it merely aggregates data from other services, so it has no domain logic to speak of. My colleague is proposing we use only integration testing, because he feels the unit tests wouldn't add anything of worth: even if we added unit tests, the difference in execution time would be small, because there isn't enough to test. Personally, I'm on the fence with this one, mostly because I can't argue for unit tests given the lack of domain logic, the small number of tests the service will need, and the relatively low execution time in both cases (no more than a few seconds).
@logiciananimal 8 days ago
@@Rope257 The aggregation process can itself be tested. As a pentester I get asked: "But that has no UI, how are you going to test it?" By simulating the system on the outside, the next hop over.
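A minimal sketch of that idea in Python: test the aggregation itself by simulating the upstream services one hop over. The aggregator and its fake fetchers are illustrative, not from the thread.

```python
# Testing the aggregation itself, with the upstream services simulated
# "one hop over". aggregate_profiles and its inputs are hypothetical.

def aggregate_profiles(fetch_user, fetch_orders):
    """Combine data from two upstream services into one response."""
    user = fetch_user()
    orders = fetch_orders()
    return {**user, "order_count": len(orders)}

def test_aggregator_merges_upstream_responses():
    # Plain functions stand in for the real upstream services.
    fake_user = lambda: {"id": 7, "name": "Ada"}
    fake_orders = lambda: [{"id": 1}, {"id": 2}]
    result = aggregate_profiles(fake_user, fake_orders)
    assert result == {"id": 7, "name": "Ada", "order_count": 2}
```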
@rothbardfreedom 14 days ago
An Engineering Room about testing with James Bach would be a blast. He is releasing a book on Rapid Software Testing this year.
@Marck1122 13 days ago
This channel doesn't want 'manual' testing 😅 - James Bach will start swearing
@timmartin325 12 days ago
Wow I'd thumb this comment up 100 times if I could. James Bach has a super interesting take on software testing, certainly worth having a read/watch if you are interested in building software.
@BrandonToone 14 days ago
Great video! Using acceptance tests to drive unit tests has worked well for me for a long time now. Could you expand more on how to effectively use approval tests, particularly in the context of avoiding the common mistake of adding tests after writing the code?
@samvarcoe 14 days ago
This was a really good video, thanks Dave. You've talked about all of these things before, but it was definitely worth going over them again; this was really well packaged and presented.
@ContinuousDelivery 13 days ago
Glad you enjoyed it!
@veenone 13 days ago
Really love your thorough explanation! This will be very good material for giving a proper understanding of test activities to both the team and management, who often don't grasp them properly.
@ContinuousDelivery 13 days ago
Glad it was helpful!
@chakala2149 12 days ago
Hi Trisha and Dave, when people discuss testing I rarely hear about ISTQB, which is a good body of knowledge on testing. I'd be glad to hear your take on it 😀
@kampanartsaardarewut422 14 days ago
Feels like you read my mind. I have been using acceptance tests without integration tests for years when developing web service APIs.
@rasmichael 13 days ago
Manual testing has its place, certainly if you are developing anything that will be put in front of humans in production. If you don't let humans test it, then the customer will do that for you. With predictable outcomes...
@rtothec1234 15 days ago
Wait, you guys are testing?!
@cloudconsultant 14 days ago
😂 testing = the happy path works most of the time
@DavidParry-Hacker 15 days ago
This rocked, and I loved how you hit on the two most important tests at the end with solid reasoning.
@oliverglier8605 14 days ago
Thank you for this nice overview and explanation. I also like health checks which perform expensive consistency tests on stored data and program behavior. A great deal of problems can be discovered before corrupted data spreads throughout the system. Secondly, I am fond of defensive programming with assertions and contracts. Contracts and assertions add so much clarity to the code. They allow errors to be discovered after release, and they allow (some) integration tests to become more like end-to-end tests. They are a stepping stone toward provable correctness, and they bring their own complexity challenges (sometimes an assertion simply cannot be afforded a proportional amount of time).
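A minimal sketch of the assertion/contract style described above, assuming Python; the Account class and its invariant are purely illustrative.

```python
# Defensive programming with assertions and contracts: preconditions,
# postconditions, and a class invariant. Account is purely illustrative.

class Account:
    def __init__(self, balance: int = 0):
        self._balance = balance
        self._check_invariant()

    def _check_invariant(self) -> None:
        # Class invariant: the balance never goes negative.
        assert self._balance >= 0, f"invariant violated: balance={self._balance}"

    def withdraw(self, amount: int) -> None:
        # Precondition: a positive amount the account can afford.
        assert 0 < amount <= self._balance, "precondition violated"
        old = self._balance
        self._balance -= amount
        # Postcondition: exactly `amount` was removed; invariant still holds.
        assert self._balance == old - amount, "postcondition violated"
        self._check_invariant()
```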
@sirskaro3581 14 days ago
Dave, this was an excellent video and very encouraging to me, because it covers very similar content to what I preach almost daily at my work. I've been labeled the automated testing expert in my department - which feels like being labeled an expert in breathing. I really like your list and plan on updating my testing-evangelist slide decks with your ideas. One thing I'd like to point out, which I often start with in my slides, is that there are many ways to define what a "type" of test is. E.g., black box vs. white box are types by methodology, Marick's quadrants are types based on function, and yours are based on purpose. Perhaps I'm off course about this, but I'm interested to hear your thoughts on the idea. I've been experimenting lately with in-memory test doubles as part of my acceptance tests. So far I've found that they give me lots more confidence, even though they don't add much code coverage and don't really exercise more of my code than if I used stubs. I'd be interested to hear more of what you think about heavier use of test doubles.
@ContinuousDelivery 13 days ago
For Acceptance Testing I tend to use "Test Doubles" at the edges of the system under test, where it breaks out and interacts with external systems: systems that are owned, developed, deployed, and usually tested by other people outside of our team. Then we can fake those interactions. I go into a lot of detail on this in my paid-for training on ATDD, courses.cd.training/pages/acceptance-testing but there is also this free video... kzbin.info/www/bejne/h3emeYZ7fcykfKc
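A minimal sketch of that approach, assuming Python and pytest; Shop, Order, and the payment gateway are hypothetical stand-ins for the system under test and its externally owned dependency.

```python
# Faking the interaction at the edge of the system under test.
# Shop, Order, and the gateway protocol are hypothetical names.
from dataclasses import dataclass

@dataclass
class Order:
    status: str

class Shop:
    """The system under test; depends on an externally owned payment service."""
    def __init__(self, payment_gateway):
        self.gateway = payment_gateway

    def place_order(self, account_id: str, total_cents: int) -> Order:
        ok = self.gateway.charge(account_id, total_cents)
        return Order(status="CONFIRMED" if ok else "REJECTED")

class FakePaymentGateway:
    """In-memory test double standing in for the external system."""
    def __init__(self):
        self.charges = []

    def charge(self, account_id: str, amount_cents: int) -> bool:
        self.charges.append((account_id, amount_cents))
        return True  # simulate the happy-path response

def test_order_confirmed_when_payment_succeeds():
    gateway = FakePaymentGateway()
    shop = Shop(payment_gateway=gateway)
    order = shop.place_order("acct-1", total_cents=1500)
    assert order.status == "CONFIRMED"
    assert gateway.charges == [("acct-1", 1500)]  # the edge interaction happened
```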
@logiciananimal 15 days ago
Pentester etc. here - I describe my work in application-layer pentesting as "acceptance testing in reverse". I guess this is sort of like how antimatter is matter, in a broad sense. Instead of looking to see that an application (or API, etc.) does what it should, I focus on whether or not it does what it shouldn't. I thus largely agree with this taxonomy. I would like, however, to see whether TDD reduces confirmation bias. Done right - without a focus on implementation, etc. - I would hazard a guess that it does. I mention this because I have long wondered whether or not unit testing (however useful) might provoke confirmation bias.
@KimballRobinson 9 days ago
I think the scope of this video is *automated* testing types. But you are right, we do a lot of other quality-related activities. Code reviews are a maintainability and understandability test, among other purposes. UX testing is not really automated, nor is penetration testing (though both use tools that amplify/assist). You might add architecture review, standards compliance, and system-level rapid software testing - often done to mitigate what you describe as confirmation bias. Nobody is immune to confirmation bias, though. Penetration testers start from a different mindset and so spot certain *types* of bias more quickly, but they probably aren't focused on things like UX and supportability. You're not likely to report "these logs didn't tell me enough to diagnose an app support issue", because you are asking whether the logs gave you information to hack the system - you might even be biased toward recommending no logs, which works against supportability, which is valuable to the business.
@logiciananimal 8 days ago
@@KimballRobinson I agree that everyone suffers from it. Those other practices can help, but, for example, I get some weird stares when I open an education session (which I also do) with a slide that says "Design for failure". People do not want to think that their design (or architecture - I agree there is no clear dividing line here) is going to have weaknesses. As for the other values, I don't look at them except insofar as they affect security. However, I have thought (and discussed with others here) about how to use semgrep for both security code analysis and accessibility, for example. There is an opportunity for common principles and ideas all the same. As for no logs, categorically not - we will always want logs for incident response, etc. However, the game is deciding what to log, where to store it, when to review it, and by whom.
@AFPinerosG 1 day ago
Hello Dave, there is a growing trend toward "isolated testing", which recognizes that having to deploy a full environment for complex systems, such as microservices, isn't ideal because it is expensive, slow, or becomes a bottleneck. This trend usually suggests avoiding these "fully set up" shared environments. How do acceptance tests fit this idea? Do acceptance tests need a full-blown production-like environment to run? For example, if I have 10 applications getting deployed to this shared environment in parallel, are all of them tested at the same time? Do I need to deploy a partial environment for each of these 10 applications so they can simulate their full-system tests? I think environment setup has become a big challenge for testing in current architectures. I'd love to hear your advice on how to properly run acceptance tests in microservice environments. There's willingness to start working on these tests, but these concerns become a deterrent when you have a system with 40 microservices.
@chat-1978 13 days ago
From my experience:
* Unit testing offers the most value later, when someone modifies the code. I've never understood or mastered TDD, because it feels like it works against agility and requires a lot of planning.
* A different/better way of understanding acceptance tests. Interesting remarks as well. Usually it's the customer who signs off, but I like this framing more.
* From my experience, the only automated tests that matter are unit and smoke tests. Smoke tests cover a lot of other tests, including performance. I also like stress tests to validate and measure the tolerances of the architecture.
* Most niche products can't do automated tests, unfortunately.
* I've always valued the idea that QA is not done by devs, certainly for manual testing. With automation it gets a bit blurry, because now the tester thinks like a programmer and brings those biases in, while QA represents the power user.
@KimballRobinson 9 days ago
As I've worked with my team on TDD practices, it's been a struggle, because we've had to learn/mentor better design skills, starting with coupling/cohesion/distance and moving up to incremental design toward patterns, restructuring, and keeping multiple designs in our heads while moving toward a target code structure. Some of my TDD attempts failed because of poor prior design that was difficult to mitigate quickly. Yet as I have worked to help my team and myself grow design/architecture skills, we've started succeeding at it more. So I would say that the degree to which your team CAN do TDD is a measure of their design-skill maturity, and that you shouldn't start by doing TDD. Instead, ask how to make modules more testable, and talk about test scope during ticket refinement and pointing poker.
@chat-1978 8 days ago
@@KimballRobinson Excellent points. Earlier in my career, I did develop reusable libraries. I wasn't mature enough then to think like that. They were usually the byproduct of our needs, e.g. making multithreading easy for juniors. My counter-argument is that if design is necessary, then that is counterintuitive to agility. What stops you from pushing the design-first concept from code to everything else?
@chakala2149 12 days ago
But Dave, aren't integration tests useful for testing integration points such as gateway classes and repositories? I always think about integration tests as testing my "understanding and usage" of external dependencies.
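A sketch of that kind of test, assuming Python: the repository is exercised against a real (in-memory SQLite) database engine rather than a mock, so it checks our understanding of the dependency. The UserRepository and its schema are illustrative.

```python
# An integration test for a repository: run it against a real SQL engine
# (in-memory SQLite) to verify our understanding of the dependency.
import sqlite3

class UserRepository:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, name):
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

def test_repository_round_trip():
    conn = sqlite3.connect(":memory:")  # a real database engine, not a mock
    repo = UserRepository(conn)
    user_id = repo.add("Ada")
    assert repo.find(user_id) == "Ada"
```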
@FlaviusAspra 12 days ago
Unit testing is best done at the boundary of the domain model, so as not to tie the tests to implementation details of the model.
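A small illustration of the point, assuming Python; Basket and its rules are made up. The test drives behaviour through the model's public boundary and never inspects internal state.

```python
# Testing through the domain model's public boundary, not its internals.
# Basket is an illustrative example.

class Basket:
    def __init__(self):
        self._items = []  # implementation detail; tests never touch this

    def add(self, sku: str, price_cents: int) -> None:
        self._items.append((sku, price_cents))

    def total_cents(self) -> int:
        return sum(price for _, price in self._items)

def test_total_reflects_added_items():
    basket = Basket()
    basket.add("book", 1500)
    basket.add("pen", 300)
    # Asserting via the boundary leaves the implementation free to change
    # (e.g. storing items in a dict) without breaking the test.
    assert basket.total_cents() == 1800
```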
@enzoscardamaglia9565 1 day ago
Thank you very much. This really helped me a lot. I have a question though, as a non-native speaker: why do you use the term "acceptance" and not the term "end to end"?
@ContinuousDelivery 1 day ago
"End to end" is a confusing term - where are the "ends"? In large enterprises people often take this to mean putting software into a global simulation of "production", including all sorts of systems, often unrelated to your own. This is a very poor way to test things, because you don't have enough control to test them very well. "Acceptance testing", as a name, focuses on the outcome rather than the mechanism: we test enough to be able to "accept" the releasability of our software, and we do whatever is necessary to achieve that.
@enzoscardamaglia9565 1 day ago
@@ContinuousDelivery Very good! Thank you once again 🙂
@travisabrahamson8864 14 days ago
One way I use integration tests is to learn about dependencies external to the app, such as a REST API. Here I can add tests to ensure that when my app calls that API, I get the expected response. It helps me isolate and quickly troubleshoot the cases where a response I'm expecting doesn't come back from that API.
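A sketch of that practice, assuming Python and the requests library; the endpoint and expected fields are hypothetical.

```python
# A narrow integration test against the real external API: assert only the
# response shape our app depends on. URL and fields are hypothetical.
import requests

def test_external_api_returns_expected_shape():
    resp = requests.get("https://api.example.com/v1/users/42", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    # Only the fields we actually consume, so the test fails fast and
    # points at the integration when the provider changes something.
    assert {"id", "name", "email"} <= body.keys()
```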
@KimballRobinson 9 days ago
Excellent video. I will share a couple of additional conclusions of my own that I've evolved/refined in my work... The words *unit* and *integration* tend to cause confusion and wasteful, circular conversations without helping design (current/future work). They do assist with referencing prior work and build stages (past work). Why are the words so confusing and so debated? Because they are intrinsically vague. If you look in a dictionary, a unit is a "component" that MAY be made up of smaller components, and definitions of "integration" amount to "combining stuff". We'd get funny looks if we said "let's combine stuff to test it". We need to talk about *what* we combine, when, and why - in other words, to talk design. Two engineers in the same room will have five different conflated ideas of what "integration" means and will waste time getting confused and clarifying (our imagined scope varies with recent conversations/code). The more I pay attention, the less I think it's possible to use the word "integration" meaningfully (or any other broad label, honestly). So the solution is: if someone says "integration tests", ask questions. I've found on my team that asking about test scope clarifies things quickly, with four aspects of scope: test handlers/capabilities, functionality targeted, how we verify, and what we mock. If I ask "what is the test scope?" I get shrugs and scoffs, but when I ask more targeted questions in refinement, *we end up designing*. And that is the main point: in design/implementation conversations, ask questions and never let the vague words pass by. The words unit and integration are useful, though, in retrospect: they often refer to "all those tests from prior work" that run locally without much setup (unit), or in CI/CD with more dependencies set up (integration).
@KimballRobinson 9 days ago
I have a blog post about this somewhere, and I've worked hard to express the ideas above concisely, with many people. But I am happy to improve the ideas, or dive into more detail to defend them. Consider: is every integration test combining all of {db, api, kafka, file store, config}? Typically I write integration tests that target only one or two container dependencies at a time, because otherwise setup gets complex and tests become fragile. The integration tests typically overlap with each other and with unit tests; this overlap helps cross-check mocks against realities (mocks can create compatibility illusions - they represent a guess/contract that may be violated or hidden). So I might have unit tests targeting modules A and B and their functions, and integration tests targeting AB + db, AB + kafka, A + API, and B + config. You can't define integration or unit by how many lines of code there are: a single line of code may contain a complex GraphQL query or just a math-library call. And even the simplest code is combined (integrated) with the runtime in order to execute.
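A sketch of that overlap, assuming Python: the same behaviour is specified once against a stub (unit scope) and once against a real dependency (integration scope), so the stub's assumed contract gets cross-checked. All names are illustrative.

```python
# The same behaviour specified at two scopes: against a stub (unit) and
# against a real dependency (integration), cross-checking the stub's
# assumed contract. The read-through cache is illustrative.
import sqlite3

class Cache:
    def __init__(self, backend_lookup):
        self._lookup = backend_lookup
        self._memo = {}

    def get(self, key):
        if key not in self._memo:
            self._memo[key] = self._lookup(key)
        return self._memo[key]

def test_cache_with_stub_backend():  # unit scope: the backend is stubbed
    calls = []
    def stub(key):
        calls.append(key)
        return key.upper()
    cache = Cache(stub)
    assert cache.get("a") == "A" and cache.get("a") == "A"
    assert calls == ["a"]  # the backend was hit exactly once

def test_cache_with_real_backend():  # integration scope: real SQLite
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
    conn.execute("INSERT INTO kv VALUES ('a', 'A')")
    lookup = lambda k: conn.execute(
        "SELECT v FROM kv WHERE k = ?", (k,)
    ).fetchone()[0]
    assert Cache(lookup).get("a") == "A"  # reality agrees with the stub
```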
@aurinator 11 days ago
Types of testing:
* Unit Tests 2:12
* Acceptance Tests 5:50 - how they can be considered synonymous-ish with Integration Tests from the Testing Pyramid 12:58
* Approval Tests 13:44
* Manual Tests 15:37
@riesigerriese666 15 days ago
Thank you, Dave. As usual, very dry and straight to the point. I admire your patience, explaining the same things again and again and again, tirelessly. I'm so fed up with explaining to my teammates that unit tests are way better when written first. They say: yeah, I'll do it later. And then they complain that my tickets take too long - because I've got to write all the missing tests first, before I can make my change. 😮
@alienbacterium8518 15 days ago
Thanks!
@timmartin325 12 days ago
TRIMS is a good mnemonic for value in test automation: Targeted at risk, Reliable, Informative, Maintainable, and Speedy. It's very easy and common to create automated tests without much value that don't fulfill these criteria. The risk is especially high when creating automated UI tests, as UIs are notoriously slow and flaky in this context.
@subbamaggus1 13 days ago
Hey, I recently started a new job; I've followed you for quite a while now. The tool they use is completely new to me, and the setup is quite different from anything I have seen before. It is a rich-client application with a database in the back. The core is developed in Smalltalk and is basically a black box. The customizable part is done in a "Basic" derivative (the vendor's own implementation), and the code is stored in the database. Since the tool will be used in regulated environments, every change is tracked in the DB, and if you want to run the code it needs to be published. The IDE and debugger are integrated into the application (and are not fun to work with). Source-code management doesn't really exist; neither do auto-completion or code formatting. To get some SCM features, the team exports the routines stored in the DB and checks them into a git repo. Test framework? Non-existent. Deployment to production? Half manual. So I would like to start a test framework, small steps... would you have any recommendations on where to start?
@ContinuousDelivery 13 days ago
I think my top priority would be to get some kind of version control in place. Assuming you are back-filling tests, I'd start with "Automated Acceptance Testing", as it is easier to retrofit to pre-existing code. I have a video that talks about where to start with legacy systems... kzbin.info/www/bejne/lZjWZICgf6d4nNU
@subbamaggus1 13 days ago
@@ContinuousDelivery Thank you for your response! What is in place: an export of the one table that holds all the "Basic" routines, with each row exported to a separate file. So for each "function" there is an export, which is automatically committed to a git repo. With that there is a kind of version control... unfortunately this is not done for all parts, and there are even parts that are not easy to handle - the cost/benefit isn't worth it for management. I will rewatch the video in that context! Thank you for the suggestion about automated acceptance testing; I will see what is possible with the tool. You would be surprised how legacy that tool is ;-) given that they claim to be the market leader ;-)
@subbamaggus1 13 days ago
My first thought was to implement a "header" to generate doxygen-like documentation; this was declined since it would affect all routines (and that was only in the comments!).
@MismaSilfver 14 days ago
Farley here presents his "pyramid" of unit tests, acceptance tests, and tactical tests. As an example of tactical tests he gives approval tests, which compare each subsequent run's results to the first run; visual testing would be part of that. But I somehow didn't fully pick up what Farley means by tactical tests. Could someone elaborate on that for me?
@samvarcoe 14 days ago
I believe the idea is that there are supplementary tests you might want to run in addition to your automated acceptance tests, to help catch problems earlier and improve the overall efficiency of your process. As an example, imagine a service in your test environment that is unstable and regularly makes your acceptance-test results unusable. It might be a good idea to test that the service is up, or that your system is properly integrated with it, before running your full test suite, even though the integration is already fully covered by your acceptance tests. That way you can alert on the problem early and save the expense of running the full suite. These tests are 'tactical' because they don't give you additional confidence in your own work or the user-facing behaviour, but they do provide value by saving the team time/effort.
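A sketch of such a tactical pre-flight check, assuming Python; the health endpoint and URL are hypothetical.

```python
# A tactical pre-flight check: verify the flaky dependency is up before
# spending time on the full acceptance run. URL and endpoint are hypothetical.
import sys
import urllib.request

SERVICE_URL = "https://test-env.example.com/health"

def service_is_healthy(url=SERVICE_URL, timeout=3.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    if not service_is_healthy():
        print("Dependency is down: alert early, skip the expensive suite.")
        sys.exit(1)  # fail fast, before the full acceptance run starts
```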
@MismaSilfver 13 days ago
@@samvarcoe Ah, thank you for taking the time to chime in! Makes sense!
@SpecialK845 14 days ago
In theory, having no manual regression testing is possible and would be great. In reality, businesses rarely invest properly in automated testing, meaning the test automation goes into a backlog rather than being developed before the features are implemented, as suggested with acceptance tests. Also, manual regression testing is repeatable: that's why manual test cases are created, and the important ones are placed into a regression suite once tested in a sprint.
@oliverglier8605 14 days ago
Hi, I think you are right, but wouldn't you agree there should be professional standards which should not need to be discussed with other stakeholders? One of the biggest problems of our industry, I think, is programmers failing to speak up for themselves.
@SpecialK845 14 days ago
@@oliverglier8605 Agree 100%. Good communication and dev skills are quite the combo in today's tech industry!
@ContinuousDelivery 13 days ago
They aren't as repeatable as automated tests, not even close. Humans have off days, forget things, and get distracted all the time. One of the most expensive software failures of all time, the Knight Capital disaster, where they mis-traded $2.5 billion in shares in 45 minutes and went bankrupt, was the result of the poor repeatability of humans.
@SpecialK845 13 days ago
@@ContinuousDelivery I agree with you that manual testing isn't a fail-safe method - but I'd say it depends on the software. I see where you're coming from for a trading application; agreed that manual testing doesn't make sense there because of the amount of variability a human can't cover. My point is that a 100% automated regression-test solution doesn't apply to every piece of software. For example, for a web app it won't be sustainable or reliable due to ongoing changes in the UI. Another recent example is CrowdStrike: if their release process had involved one minute of manual testing (booting up a Windows machine with the new patch), they would have caught that issue. That decision cost ~$5.4B overall across all businesses (according to the Guardian).
@KimballRobinson 9 days ago
If you are asking for permission to write tests, I suggest you're going to fail every time. Instead, quietly write some automation - just a bit with every ticket. Then go back and start showing how the tests are saving time. Years ago I read blogs where people talked about how relaxed they were at release time once they started automating tests, while the rest of the team was playing whack-a-mole. I tried it, and the effect was immediate: I had one bug come back on that project, fixed it quickly, and was bored looking for stuff to do. So stop asking for permission to do things that make the team more efficient next week.
@petropzqi 15 days ago
Well put together, thx
@d3stinYwOw 15 days ago
What do you think about simulation testing, one of the more novel approaches to testing, spearheaded by TigerBeetle and Turso?
@weftw1se 14 days ago
Fuzzing has been around for decades in the hardware and embedded systems world. At best they introduced the technique to databases.
@d3stinYwOw 14 days ago
@@weftw1se It's not exactly the same as fuzzing, because here your environment is strictly controlled and deterministic.
@weftw1se 14 days ago
@@d3stinYwOw You are still just describing fuzzing. Maybe a terrible implementation of fuzzing would not allow you to specify a seed value, but every fuzzer I have worked with has allowed this.
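A minimal illustration of seeded, reproducible fuzzing, assuming Python; the toy system under test and reference model are made up.

```python
# Seeded, reproducible fuzzing: a fixed seed replays the exact same inputs,
# so any failure can be reproduced deterministically. The toy key-value
# store and its reference model are illustrative.
import random

def fuzz_store(seed, operations=1000):
    rng = random.Random(seed)   # fixed seed => deterministic input sequence
    store = {}                  # "system under test" (trivial here)
    model = {}                  # reference model to compare against
    for _ in range(operations):
        key = rng.choice("abcde")
        if rng.random() < 0.5:
            value = rng.randint(0, 100)
            store[key] = value
            model[key] = value
        else:
            assert store.get(key) == model.get(key), f"divergence, seed={seed}"

if __name__ == "__main__":
    fuzz_store(seed=42)  # rerun with the same seed to replay a failure exactly
```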
@ContinuousDelivery 13 days ago
I think all good testing is a form of "simulation testing" 😉
@d3stinYwOw 13 days ago
@@ContinuousDelivery In a sense, yes, but does it put your system under a simulation of real-world workloads? I highly recommend checking out the TigerBeetle simulation page, great stuff :)
@simonk1844 5 days ago
I wish you had provided definitions of the tests you are talking about. For example, "integration test" can mean different things in different organisations. Missing these definitions, I sadly found this presentation not useful, which is a shame, as I'm really struggling right now with the testing approach in my latest customer's environment.
@JayBazuzi 14 days ago
Static Analysis is an important category that I want to include in your list of test types.
@ruslanfadeev3113 3 days ago
It's worth mentioning, but in this model I think it falls under unit testing
@JorgeFlores-v4i 8 days ago
There are many things not covered here, and that leads the project into chaos.
@kriz5652 11 days ago
Testing is for bad programmers!! Hey Dave, show us your code - I think you have never written a complete working piece of software on your own.
@ytano5782 14 days ago
Testing is only a fancy word for compiling.
@ContinuousDelivery 13 days ago
I suppose that depends on what you mean. If you mean testing should be as widely used as 'compiling', then I agree; but if you mean 'compiling is enough', then clearly not. I can easily write code that does the wrong things but still compiles.