In chip development I use: (a) each module has assertions; (b) each leaf module has unit testing designed to also check that the assertions are working; (c) each module composed of two sub-modules has unit testing; (d) well-defined buses have bus monitors / assertion checks, similar to contract testing; (e) randomised testing occurs at various subsystem / product levels to check for edge cases. In this way you have high control of and visibility into each leaf module and each subsystem, while randomised testing tries to provoke strange faults in the interactions between modules. The randomised testing frameworks themselves are tested to check that they report correctly.
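A software analogue of that randomised layer can be sketched with ordinary property-style tests. Everything here is a hypothetical stand-in for a real leaf module: the `alu_add` model, its 8-bit width, and the invariants checked; the last function mirrors the point about testing the test framework itself by seeding a deliberate fault.

```python
import random

def alu_add(a, b):
    """Hypothetical leaf-module model: an 8-bit wrapping adder."""
    return (a + b) & 0xFF

def test_random_edge_cases(trials=10_000, seed=42):
    """Randomised testing: throw many random inputs at the model
    and check invariants, instead of enumerating cases by hand."""
    rng = random.Random(seed)
    for _ in range(trials):
        a = rng.randrange(256)
        b = rng.randrange(256)
        result = alu_add(a, b)
        # Assertion-style checks: result stays in range and
        # matches the reference behaviour modulo 2**8.
        assert 0 <= result < 256
        assert result == (a + b) % 256

def test_harness_reports_failures():
    """Check the harness itself fires on a seeded bug (a broken adder),
    i.e. verify the testing framework reports correctly."""
    broken = lambda a, b: (a + b + 1) & 0xFF  # off-by-one fault
    try:
        assert broken(1, 1) == (1 + 1) % 256
    except AssertionError:
        return  # the check fired, as it should
    raise RuntimeError("harness failed to detect a seeded fault")

test_random_edge_cases()
test_harness_reports_failures()
```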
@RenaudRwemalika · 11 months ago
The video implies that this is an either/or situation. We want to establish a test pyramid: a lot of very local unit tests; integration tests with a larger scope but a mocked environment (what was shown in the video); and finally, once all that is passing, e2e tests. Typically you see them in user acceptance testing, to ensure that the flow of data and the behaviour of all the components act as we expected. These interactions cannot be captured by the other layers (e.g. we did not realize we needed a circuit breaker in the architecture). The higher we go in the pyramid, though, the fewer tests we should have. What I really don't agree with in the video is the assumption that all the requirements are perfectly defined, exhaustive, accounting for all cases, and perfectly communicated to every team. Think of the crash of Ariane 5.
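The lower two layers of that pyramid can be sketched in a few lines; all names here (`apply_discount`, `checkout`, the payment gateway) are illustrative, and `unittest.mock` stands in for the mocked environment.

```python
from unittest.mock import Mock

def apply_discount(price, rate):
    """Pure logic: the target of many cheap, local unit tests."""
    return round(price * (1 - rate), 2)

def checkout(cart, payment_gateway):
    """Wider-scope logic: composes units, talks to the environment."""
    total = sum(apply_discount(price, rate) for price, rate in cart)
    return payment_gateway.charge(total)

# Unit test: fast, no collaborators involved.
assert apply_discount(100.0, 0.1) == 90.0

# Integration test: real composition logic, mocked environment.
gateway = Mock()
gateway.charge.return_value = "ok"
assert checkout([(100.0, 0.1), (50.0, 0.0)], gateway) == "ok"
gateway.charge.assert_called_once_with(140.0)
```

The e2e layer on top would exercise the same flow against deployed components, which is exactly why it should hold far fewer tests.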
@animanaut · 11 months ago
The "beauty" of e2e tests is how they allow for another funnel for change requests *cough* I meant bug fixes, very late in the process, with appropriate time pressure to allow for rushed hacks *cough* I meant pragmatic and creative solutions... It's what dysfunctional projects then start calling "being agile", and they usually present it around quite proudly while the underlying tech stack slowly rots away from ever-increasing tech debt. If you hear some managers praise their team for the valiant effort to meet the deadline, you know what's up: that team produced tech debt during after-hours. But hey, at least that disgusting pizza was free.
@johnfranklin8147 · 10 months ago
I think you’ve missed the one big reason for E2E testing. In the “modern” object-oriented paradigm, the vast majority of developers (all?) have no clue what their bit of code actually does in the big picture. Sorry, but that’s true. They know what this method will do when called, but they have no picture of the circumstances under which it is called. The concept of “typical flow”, or the call stack as we used to call it, is just alien to them. Therefore, literally the first time they’ve ever seen their bit of code execute in typical flow is in E2E testing. Now, under all circumstances, the code *will* be E2E-tested. By the end user, in the production environment. The only question is whether the devs have done so prior to that. When the customer says “hey, it’s taking fourteen seconds for the communication beam to Hand Over, that’s crazy”, what do you say to that as a dev: “That’s interesting, it’s news to me”? Is that the best you can do? Seriously, you can’t even use the hoary old chestnut “yes, it’s supposed to do that”, because you have literally no idea. It could have been 1400 seconds and you would be none the wiser. That’s why you do E2E testing.
@D4no00 · 10 months ago
I think it was clearly stated at the beginning that such teams/organizations are dysfunctional and cannot deliver software reliably and fast, so them wasting more time on writing these hard-to-maintain tests doesn't matter much at that point. I totally agree with this, and I have personally seen budgets sunk into QA because developers were encouraged not to write tests; the case with E2E is not that much different.
@johnfranklin8147 · 10 months ago
@@D4no00 And yet… read through all the other responses on this video. I see dozens of responses talking about interfaces, edge-case testing, classic bug sources (e.g. memory safety, thread safety, etc.). And not a single response has *any interest at all* in what we used to call validation (does this software do what it needs to, rather than what an individual thought it should, i.e. unknown system-level requirements), or in performance. There are loads of people talking about how “they” (whoever that is) failed at requirements capture, and the naughty, stupid customer who imposed all the change requests. And not a single person taking responsibility for how the project team could fix this, to produce an economically useful project. By your measure, every single dev on this topic just happens to work for a dysfunctional team. I disagree. Not a dysfunctional team, but a dysfunctional methodology. E2E testing is part of the checks and balances that help mitigate the disaster.
@D4no00 · 10 months ago
@@johnfranklin8147 The classic bug sources you mention are the result of using a badly designed language, or abstractions that are not at the right level for your business requirements. I hear stories even to this day about people using bare-bones threads in their Java projects and implementing their own caveman concurrency on top of them. I would not take to heart what the other comments on this video say, as I am more than sure that 95% of the people who commented here have never done TDD/BDD in their life. As for the cases where you invoke E2E testing, this is the same approach as defensive programming, but in testing, and it actually reflects poor software design. How to fix such projects is beyond this discussion, as it is a strictly organizational issue. One of the realities is that nobody nowadays cares; people do the strict minimum to stay afloat and collect the paycheck at the end of the month.
@arnoldhau1 · 11 months ago
Isn't this exactly what an e2e test is? Of course you mock all external systems. Of course you set a defined state. All e2e tests I have ever written do of course mock all external systems. Usually that is quite simple, as you only require very specific interfaces and cases (so you can hard-code a request/response; even an SAP system may become a very simple, almost hard-coded REST or SOAP mock). Of course we would be better off not e2e testing at all, just testing each module by itself. But that requires a very high maturity I have yet to see in a real-life project. I am sure such projects exist; I just have not seen one yet. To most, releasing without manual tests is still basically black magic or craziness. I am very happy if I can establish a way for us to at least test everything automatically.
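A hard-coded external-system mock of the kind described really can be tiny. This is a minimal sketch using only the standard library; the endpoint path and the canned JSON payload are made up for illustration, standing in for whatever the real SAP/REST system would return.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeBackend(BaseHTTPRequestHandler):
    """Stand-in for an external system: one canned response, no logic."""
    def do_GET(self):
        body = json.dumps({"order": "42", "status": "BOOKED"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 asks the OS for any free port, so tests never collide.
server = HTTPServer(("127.0.0.1", 0), FakeBackend)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The system under test would be pointed at this URL instead of
# the real backend; here we just query the mock directly.
url = f"http://127.0.0.1:{server.server_port}/orders/42"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

assert data["status"] == "BOOKED"
server.shutdown()
```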
@allanwind295 · 11 months ago
If system A can't send garbage to the system under test B, then should you test for it? "Should" as in: is it a waste of time and money? The only valid answer is derived from the specification (document), and as you probably don't have one, ask the stakeholder; or, if you have sufficient domain knowledge, decide whether to assume valid input or to check that particular precondition, of which there can be a great many. If you expect a floating point, say, a temperature, do you want to check for NaN? What about
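The NaN question is worth pinning down with an example, because NaN silently fails every ordinary range comparison. This sketch is purely illustrative: the function name, the Celsius unit, and the domain limits are assumptions, and whether to reject at all remains the spec/stakeholder decision described above.

```python
import math

def accept_temperature(celsius):
    """Illustrative precondition check on a float input.
    Whether (and how strictly) to validate is a spec decision."""
    if math.isnan(celsius):
        raise ValueError("temperature is NaN")
    if math.isinf(celsius):
        raise ValueError("temperature is infinite")
    if not (-273.15 <= celsius <= 1000.0):  # assumed domain limits
        raise ValueError(f"temperature out of range: {celsius}")
    return celsius

assert accept_temperature(21.5) == 21.5

# Why the explicit NaN check matters: NaN compares False against
# everything, so it would fall through to the range error anyway,
# but with a misleading message ("out of range: nan").
try:
    accept_temperature(float("nan"))
except ValueError as e:
    assert "NaN" in str(e)
```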
@l_combo · 11 months ago
This gets even worse if releases are batched up into quarterly releases. Many financial-services back-end organizations work like this, e.g. between the front office and the financial ledgers.
@logiciananimal · 11 months ago
For businesses that produce software as a "way to do something else", I have begun to suspect there is a problem related to two notions of production. The business owners want to align production of the business product with that of the internally built (or acquired) software. Yet this is likely a problem; they do have to align to some degree, but if they are strongly coupled, one gets the weirdness of "we are not releasing product A again for X months, therefore the application for A is code-frozen for about that period, therefore we cannot work on it." (And then, in my case, as an application security specialist: "we can't fix the security problems you found until then.")
@GDScriptDude · 11 months ago
If the interfaces and communication protocols between the systems are fully specified, then (tested?) mocks of these connection points should eliminate the need for a chain of E2E testing, I think.
@Sande24 · 11 months ago
What if one of the systems changes? How can you know that changes in the specifications will be backward compatible? Or if they aren't, how do you roll out the new version without serious issues in production? Code is constantly changing, and static mocks are always getting out of date.
@TalkingBit · 11 months ago
@@Sande24 That's the whole point of contract testing. You need to test systems against a contract (interfaces and communication protocols). If a system needs a new contract (such as a new version), you need to add the new contract into testing for all involved systems.
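The idea can be sketched in a few lines of plain Python; this is not a real contract-testing tool like Pact, and the contract document, the `/users` endpoint, and the `provider_create_user` function are all hypothetical.

```python
# A shared "contract" document: the consumer records the shape it
# relies on, and the provider is verified against the same document.
CONTRACT_V1 = {
    "endpoint": "/users",
    "response_fields": {"id": int, "name": str},
}

def provider_create_user(request):
    """Hypothetical provider implementation being verified."""
    return {"id": 1, "name": request["name"]}

def verify_provider(provider, contract):
    """Provider-side check: the response shape must match the contract."""
    response = provider({"name": "alice"})
    for field, expected_type in contract["response_fields"].items():
        assert field in response, f"missing field: {field}"
        assert isinstance(response[field], expected_type), f"bad type: {field}"

verify_provider(provider_create_user, CONTRACT_V1)
```

When a v2 of the contract appears, both sides add the new document to their suites and keep verifying v1 until every consumer has migrated.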
@Sande24 · 11 months ago
@@TalkingBit Yeah, contract testing is a nice thing to have. The previous post just mentioned basic mocking and claimed there would be no need for e2e tests at all. Even with contract testing, your individual contract tests might not catch all the kinds of mistakes that an e2e test running through 3-4 different services might: outputs from one service turning into inputs of another, side-effect events happening along the way, timing issues between those, etc. Unit tests passing does not mean that integration tests will pass. Similarly, integration tests passing does not mean that e2e tests will pass.
@D4no00 · 10 months ago
@@Sande24 What you are describing is a symptom of a poorly designed system, using E2E tests as a last resort. Contract testing is more than enough if done correctly: if your local tests pass but E2E tests don't, then the contract you were testing against is clearly invalid. Contract updates that are not backward compatible should be announced and ideally versioned, so that you can explicitly migrate to the new contract without downtime on the production system. Failure points should also be taken into consideration (timeouts, network errors) and tested locally, especially if your system is not capable of recovering from errors.
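Testing those failure points locally does not require a real network outage; a sketch of the idea, with an invented client function and the assumption that nothing listens on the target port, so the error path is exercised deterministically:

```python
import socket

def fetch_with_timeout(host, port, timeout=0.5):
    """Illustrative client: callers must cope with slow or dead peers."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(1024)
    except OSError:
        # Covers refused connections and timeouts (socket.timeout is
        # an OSError subclass): degrade gracefully instead of hanging.
        return None

# Local failure injection: port 1 on loopback has no listener, so the
# connection attempt fails fast and we exercise the recovery path.
assert fetch_with_timeout("127.0.0.1", 1) is None
```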
@johnfranklin8147 · 10 months ago
@@D4no00 So, on the one hand, the dev team doesn’t have any responsibilities for planning, task estimation, schedules, project documentation, or code comments, because they are all genius creatives in agile flow, for whom “the code is the documentation”. But also, any bugs that aren’t caught by looking at a single module are “poor design by Somebody Else”. Or the “Stupid Customer Specification”.