I was under the impression Nick already had a video about this and spent an hour looking for it about a month ago! Typically I'd maybe agree with the other comments - it's a bit of overkill. But when you face a situation where you need it - it's amazing.
@rimailias · 8 days ago
I remember using something similar a while back when we had a massive number of APIs to migrate from a legacy version to a "modern" new version. We used this to ensure that our migration didn't break any APIs.
@greghanson7047 · 8 days ago
Seems like a solution, one that makes the test less readable, to a problem that doesn't exist.
@x0rld159 · 7 days ago
For golden-master testing, for example, it's really useful
@szymonmaczak7602 · 7 days ago
And it ties the test more closely to the implementation, which makes refactoring problematic..
@simonwood2448 · 7 days ago
Agreed, absolute nonsense. You still have to declare your expected output, except now it's hidden in a text file and you can't see what it's supposed to be within the body of the test itself. Great
@gileee · 6 days ago
@@simonwood2448 It's so you can change it without rebuilding the app.
@tymurgubayev4840 · 5 days ago
if your expected output is "42", don't use Verify. If it's a text file with at least a few lines (or can be represented as such), give it a try. Maybe it is the right tool for the job, maybe it isn't.
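To make the "multi-line text file" case concrete, here is a minimal, language-agnostic sketch of the verified/received mechanism that tools like Verify implement (Python for a self-contained illustration; the helper names are hypothetical, though the `.verified`/`.received` file-pair convention mirrors Verify's):

```python
from pathlib import Path


def verify_snapshot(name: str, received: str, snapshot_dir: Path) -> None:
    """Compare `received` text against the stored, approved snapshot.

    First run: no .verified file exists yet, so a .received file is
    written and the test fails, prompting a human review.
    Later runs: pass only if the output matches the approved snapshot.
    """
    snapshot_dir.mkdir(parents=True, exist_ok=True)
    verified = snapshot_dir / f"{name}.verified.txt"
    received_file = snapshot_dir / f"{name}.received.txt"

    if verified.exists() and verified.read_text() == received:
        received_file.unlink(missing_ok=True)  # clean up any stale diff
        return

    received_file.write_text(received)  # left on disk for a diff tool
    raise AssertionError(f"snapshot mismatch for '{name}', review {received_file}")


def approve(name: str, snapshot_dir: Path) -> None:
    """Accept the last received output as the new approved snapshot."""
    (snapshot_dir / f"{name}.received.txt").rename(
        snapshot_dir / f"{name}.verified.txt")
```

On the first run the test fails and leaves a `.received.txt` file behind; a human reviews it (typically in a diff tool), approves it, and from then on the test guards against any change.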
@marikselazemaj3428 · 9 days ago
Snapshot tests are great for APIs to ensure no breaking changes.
@ruslan_yefimov · 9 days ago
These comments are actually useful, damn
@tarquin161234 · 8 days ago
OMG, that just gave me a rush of excitement at how useful that would be
@qj0n · 8 days ago
@@marikselazemaj3428 tbh, in the case of an API I would consider contract testing instead
@andreaslassak2111 · 8 days ago
No. You should just write normal API tests with full coverage.
@SinaSoltani-tf8zo · 8 days ago
How is this different from normal tests in code?
@michaelwinick · 9 days ago
Years ago I built a "golden ticket" concept in my testing, with a JSON snapshot of a successfully generated/saved record. Keeping track of the golden tickets became an accounting headache. I love that this package handles that dirty work for me. Great demonstration, Nick! Downloading now.
@logank.70 · 9 days ago
I feel like I'm missing something about this approach. The feeling I'm getting is that, at a high level, what this package does is just move what you're asserting out of source code and into a text file. This package feels like something I should find useful but, after watching the video a couple of times, I'm not seeing the problem it solves. What could I be missing?
@roeib · 8 days ago
You don't need to validate the entire output model in code; instead you just compare it to the last snapshot. The result is time saved on the initial test code and on the updates that follow any change to the output model, plus automatic verification that the model has not changed. Another advantage is the companion packages, such as those for comparing PDF or image files.
8 days ago
It speeds up writing tests and also (in some scenarios) forces you to test the complete output. The HTTP test is the perfect example: you are probably not testing for the correct Accept header, but you probably should.
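The Accept-header point is the key property of snapshotting a whole response: everything serialized gets asserted, including fields no one thought to check. A rough sketch of rendering a response into one canonical, diff-friendly string (Python; `render_response` is a hypothetical stand-in for what snapshot HTTP extensions do):

```python
def render_response(status: int, headers: dict, body: str) -> str:
    """Render an HTTP response as a canonical, diff-friendly string.

    Sorting the headers makes the output deterministic, so any new,
    removed, or changed header shows up as a one-line diff in the
    snapshot, even if no assertion ever mentioned it."""
    lines = [f"Status: {status}"]
    lines += [f"{name}: {headers[name]}" for name in sorted(headers)]
    lines += ["", body]  # blank line separates headers from the body
    return "\n".join(lines)
```

Snapshotting the output of such a renderer means an unexpected `Cache-Control` or a missing `Accept` header fails the test without anyone having written an assertion for it.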
@pawegruba4719 · 8 days ago
Imagine your API returns a huge, complex model with 60 properties, and you have a couple of tests for the insert, update, and get scenarios - instead of creating 'expected' models and asserting thousands of properties, this lib does it for you. What's more, imagine you've added a new property to your result model: again, instead of updating multiple 'expected' models, this tool automatically detects the new value in the result and warns you about the change (via git diff)
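The "detect new values via git diff" behaviour falls out of deterministic serialization: a newly added property appears as a single added line in the snapshot. A small illustration (Python stand-ins; the model and field names are invented):

```python
import difflib
import json


def snapshot(model: dict) -> str:
    # Deterministic serialization: stable key order, fixed indentation.
    return json.dumps(model, indent=2, sort_keys=True)


old = {"id": 1, "name": "Ada", "country": "UK"}
new = {"id": 1, "name": "Ada", "country": "UK", "email": "ada@example.com"}

diff = list(difflib.unified_diff(
    snapshot(old).splitlines(), snapshot(new).splitlines(), lineterm=""))

# Only the genuinely new property survives as an added line.
added = [line for line in diff
         if line.startswith("+") and not line.startswith("+++")]
```

With 60 properties the mechanism is the same: the snapshot grows by one line per new property, and the diff pinpoints exactly what changed.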
@xcastor3 · 5 days ago
I run into another issue sometimes. Let's say you have the 'result' and want to compare it with the 'expected'. It can happen that when you change a property, the test still passes, because the property changed in both the 'result' and the 'expected' value. The 'result' object is incorrect, but it still equals the 'expected' one, so the test passes. 'Verify' is super useful in this case because (especially for API endpoints) you can be 100% sure that nothing changed.
@fredrikjosefsson3373 · 5 days ago
@@pawegruba4719 I wrote a test with a ShouldSatisfyAllConditions, and the conditions were over 100 lines long. With Verify (which we didn't use then but do now) it would have been one line. My only issue with Verify is that it's hard to catch tests that will fail because you forgot to sort a list, so sometimes a test fails after it has already been merged into the project branch and someone else runs into it later. Another issue I've had is when testing a DTO which also has JSON in it: on Azure DevOps, datetimes use the American format, while on my local machine they use the rest-of-the-world format, so the JSON comparison fails. We still want to support different formats, so the only solution I've found is to just ignore that data
@haydensprogramming6766 · 9 days ago
I find snapshot testing to be incredible for integration tests, as you probably want broader coverage of what has changed. For unit tests, I find that snapshot tests lose context compared to traditional assertions, as those assertions can give greater meaning to why the logic is the way it is (e.g. countryCode.Should().NotBe("X") vs "countryCode": "y").
@marcg1515 · 9 days ago
Your unit tests should be explicit, not implicit. Which means that if it's this important that the country code is not a specific value, there should be a test for it. The name of the test is more important
@ruslan_yefimov · 9 days ago
Exactly! It makes sense to save a lot of time with integration tests, but unit tests are only gonna get less strict and readable..
@DemoBytom · 9 days ago
This package is also absolutely amazing for testing source generators, as well as EF Core generated queries. The first is self-explanatory, I think, but the latter can also have big value - namely, you can keep updating EF Core and not be afraid of unexpected query changes that might need closer retesting. Or you can easily get a generated query reviewed and accepted by a DBA. Personally, I wouldn't ditch all "traditional" unit tests in favor of snapshot testing, nor all assertions in favor of Verify, but there is definitely a big space for this type of testing within the framework.
@sergeyborisov3437 · 8 days ago
Super helpful for verifying images. You just need to add a threshold to pass on insignificant differences like timestamps, etc.
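Passing over insignificant differences is what Verify calls scrubbing: volatile values are normalized to stable placeholders before comparison. A rough sketch of the idea (Python; the patterns and placeholder names are illustrative, not Verify's actual built-ins):

```python
import re

# Volatile patterns replaced by stable placeholders before comparison,
# so snapshots don't churn on values that differ on every run.
SCRUBBERS = [
    (re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?"), "{Timestamp}"),
    (re.compile(r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
                r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"), "{Guid}"),
]


def scrub(text: str) -> str:
    """Replace volatile values so the snapshot stays stable across runs."""
    for pattern, placeholder in SCRUBBERS:
        text = pattern.sub(placeholder, text)
    return text
```

Run over the serialized output just before the snapshot comparison, this keeps timestamps and GUIDs from producing false diffs while everything else is still checked exactly.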
@gneaknet · 9 days ago
In my opinion this conflicts with TDD when used for unit testing. In TDD the test is written before the implementation, so when using this library, the text file with the correct content has to be written from scratch, which is not that convenient.
@gileee · 9 days ago
TDD is bad anyway lol
@nerfzinet · 8 days ago
Stop thinking so inside the box. Verify *enhances* TDD; it doesn't conflict with it. There are at least two great approaches to TDD that use Verify:
1. Just write a simple Verify test, and every time you make a change, run the test and see that your change did what you wanted. Now you're spending all your time working on the problem and the test just evolves with your work automatically; it's perfect.
2. Write a simple Verify test and run it. Your code won't do what you need, so change something in the expected result and then write the code to make that thing happen. Now your code passes the test. Add a new thing to the expected result, write the code for it, and the code passes again. Repeat.
Both of these approaches have all the benefits of TDD, but they're both faster and easier. 1 seems faster, easier, and better than 2 to me, but maybe they both have merit.
@DryBones111 · 8 days ago
You shouldn't use snapshots for unit tests. They're best used for integration or end-to-end tests.
@simoncropp · 6 days ago
@@DryBones111 snapshotting is essentially a different approach to assertion. it can be used with any type of test where you are currently using asserts
@DryBones111 · 8 days ago
Snapshot tests are great, but they have a specific use case. They are great for testing the boundary interfaces of your system: think API responses, stdout on console apps, etc. They help to both ensure that breaking changes are recorded in the commit diff, and that the system is functionally identical from the user's perspective. They should be used for integration or end-to-end tests.
@M3lodicDeathmetal · 7 days ago
Well obviously, who puts file I/O in unit tests?
@SinaSoltani-tf8zo · 9 days ago
To me it feels like you've only moved the expected result from code to a file. Are there any other differences/advantages?
@ivanomatrisciano3828 · 9 days ago
The expected result is generated by the library and you don't have to write code to assert each property
@SinaSoltani-tf8zo · 9 days ago
@@ivanomatrisciano3828 Which means there isn't anything "INSANE" about it. It only moved the expected result from the code to a file. By the way, you still have to accept every single generated JSON, and if any values of those JSONs need editing in the custom editor, that adds another extra step.
@krajekdev · 9 days ago
@@SinaSoltani-tf8zo The diff tool opens up automatically; it's easier than you assume.
@lldadb664 · 9 days ago
@@SinaSoltani-tf8zo I was kind of wondering the same. For characterization tests, I currently write a test with an empty expectation, expecting it to fail. I then capture the output of the failure and put that into the expectation. I do want to investigate this more though, especially for things like scrubbing.
@nerfzinet · 8 days ago
The advantage is that it simplifies a process that many of us do over and over and over. And it's not a small improvement, it's huge. Nick just wrote multiple fairly complex and thorough tests in a few minutes on screen.
@mkwpaul · 9 days ago
Another very underrated way of testing is property-based testing. Basically, instead of testing that a given input produces a given result, you check that regardless of input, the result conforms to general expectations. For example, if I wrote a sorting algorithm, instead of doing this (simplified):
var input = [8, 0, -20, 400]
var result = sort(input)
Assert.Equals(result, [-20, 0, 8, 400])
I could test that the sorted result has the same number of elements, or that every element of the input is in the result, or that the elements are actually in ascending order - which are all necessary properties of a working sorting algorithm. It's basically identifying the properties of whatever you're writing and then writing those expectations down as executable checks. Then you can generate as much random input as you want and verify that those things hold. The advantages are that it's much less code and it more clearly communicates intent. It's also more resilient to implementation changes. This approach does have some requirements, like your input needing to be pure data (so it can be generated automatically and procedurally) and the functionality being pure. However, I would want my code to look like that anyway, regardless of whether I used this testing approach.
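The sorting example above can be turned into an executable property check with nothing but a seeded random generator, no framework required (Python sketch; `check_sort_properties` and the trial counts are made up for illustration):

```python
import random
from collections import Counter


def check_sort_properties(sort, trials: int = 200, seed: int = 0) -> None:
    """Check properties every correct sort must satisfy, on random input."""
    rng = random.Random(seed)  # seeded: any failure is reproducible
    for _ in range(trials):
        data = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
        result = sort(list(data))  # pass a copy, in case sort mutates

        # Same elements, same multiplicities: nothing lost or invented.
        assert Counter(result) == Counter(data)
        # Every adjacent pair is in ascending order.
        assert all(a <= b for a, b in zip(result, result[1:]))
```

`check_sort_properties(sorted)` passes, while a "sort" that returns its input unchanged, or one that drops duplicates, fails within a few trials.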
@nickchapsas · 9 days ago
A video on it is coming :D
@mkwpaul · 9 days ago
@@nickchapsas :D
@MagicNumberArg · 9 days ago
Did you pick it up from Clojure, by any chance? There are VERY few scenarios where this makes sense, and most existing ones are for drivers.
@marikselazemaj3428 · 9 days ago
To me, property-based tests are like auto-generated unit tests, whereas simulation tests are like property-based tests but for integration and e2e scenarios
@marikselazemaj3428 · 9 days ago
Property-based tests are great for encoder-decoder pairs, or any function that has an inverse. You only have to guarantee that one of the functions is correct, and the other is then covered by the property-based tests.
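The encoder-decoder case is the classic roundtrip property: `decode(encode(x)) == x` for arbitrary `x`. A minimal sketch (Python; the hex codec is just a stand-in pair so the example is self-contained):

```python
import random
import string


def check_roundtrip(encode, decode, make_input, trials: int = 100,
                    seed: int = 1) -> None:
    """Property: decoding an encoded value returns the original value.

    Only one of the pair needs to be trusted independently; the
    roundtrip property then pins down its inverse."""
    rng = random.Random(seed)
    for _ in range(trials):
        value = make_input(rng)
        assert decode(encode(value)) == value


# Example pair: text -> hex string and back.
def to_hex(s: str) -> str:
    return s.encode("utf-8").hex()


def from_hex(h: str) -> str:
    return bytes.fromhex(h).decode("utf-8")


def random_text(rng: random.Random) -> str:
    return "".join(rng.choice(string.printable)
                   for _ in range(rng.randint(0, 40)))
```

The same `check_roundtrip` helper works unchanged for any serializer/deserializer pair whose inputs can be generated procedurally.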
@nickwong9649 · 9 days ago
There are certainly specific good use cases for this type of test, especially for verifying message bodies, object properties, generated output, etc. But I do not agree that this is the only type of test we need; that's just an over-exaggerated title. Also, I'm not sure whether there is something that makes your tests that slow (even beyond the first run), or whether it's the lib serializing the object to the file that makes it that slow.
@impero101 · 5 days ago
I once created a PR for TypeScript, with some signature changes to the type definitions of certain array methods, and that part relied heavily on snapshot testing. It made me experiment with snapshot testing in other situations. While I wouldn't use it for unit testing, I've used it for integration testing with success, as well as for regression-testing database schema migrations. I even created my own little tool for testing SQL Server migrations: it would retrieve the schema information from the test database, store the schema in the "verified" file (one for each migration step), and on each test of an up- or down-migration it would compare the retrieved schema with the verified one, thus ensuring no unverified changes had snuck into the database migration scripts.
@MajeureX · 9 days ago
This would be very useful for writing characterization tests, since you often don't know what the output will be until after you run the test for the first time.
@gabrielheming · 8 days ago
I see the advantages of using snapshot testing; however, your examples were replacing existing expected test results with snapshots, and the proposed scenarios create additional issues. If you lack an expected result, manually validating the data returned to the snapshot is error-prone. Additionally, if you first verify the output against an expected result and then replace your expectation with the snapshot, you are introducing an unnecessary and irrelevant step in the test iteration for the sake of a smaller test. On the other hand, snapshots are great where you don't have existing tests and the code being tested is fairly hard to predict (e.g. the code was written by another developer; it is outdated/messy; etc.). For instance, refactoring an existing code/function/service/endpoint/etc. This is one of the scenarios where snapshots surpass just about any other test methodology. Finally, adding snapshot tests solely for their own sake does not ensure that the code works as intended, only that the original behaviour does not change.
@tarquin161234 · 8 days ago
This is the best new thing I have come across in quite some time. Thanks Nick.
@CharlesBurnsPrime · 7 days ago
I get a lot of value from these videos about excellent Nuget packages.
@nantang2976 · 6 days ago
I don't think I will use this for my unit tests - maybe for certain integration tests? I do agree that using Verify will make the tests harder to read and debug if something fails. Oftentimes we use unit tests as living documentation to demonstrate how a piece of functionality is supposed to work. By using Verify, it will be harder for people to see, for a given input, what the correct output would be.
@GBUKMilo · 7 days ago
I think this is a really useful utility. How can you use these in automated testing?
PROs: great for backfilling unit tests on legacy projects. No one has time to code all of those.
CONs: 1. You have to be 100% confident, at the point of accepting the first response, that it worked; otherwise you get a false positive. 2. Developers who unit-test first will not like this.
A question: why would you not want the responses in Git? You could then have a baseline pass for your team, and you can include these in automated testing.
@qj0n · 8 days ago
I personally prefer calling them 'approval tests', as the results are manually approved. "Snapshots" in the context of tests sounds more like snapshots of the system under test. Anyway, I use it in some cases; Verify is a good tool to know, but it's not a go-to way of testing everything
@kewqie · 8 days ago
Is there no way to set the _verifySettings globally or per test project? Feels busy having to pass it on every call.
@svetoslav.hadzhiivanov · 6 days ago
It seems useful only when you're upgrading a lot of NuGet packages and want to ensure you get the same output. It saves a bit of time, since you don't have to write out assertions manually - all you care about is getting the same output, not what the output is.
@lordmetzgermeister · 7 days ago
For many dev teams this opens up further options for integration testing, as you don't have to dedicate resources to writing the validation infrastructure. I'll definitely take a look.
@DoNotTouchJustLook1 · 4 days ago
This is not a test. It's just a notification that "something changed". The only use case I've found for them is when you're refactoring a part of the code that's not covered by tests. Then you:
1. Create a snapshot at the level you'll refactor
2. Run it to generate the baseline
3. Do your refactor
4. Verify you haven't broken anything by checking against the snapshot
5. Commit the change
6. Delete the snapshot
@krccmsitp2884 · 5 days ago
We currently use Verify in two programs to test the results of a calculation pipeline, which otherwise would be tedious.
@troncek · 9 days ago
Yeah, we are using this and it's nice. But there are way too many txt files after a while. Becomes kinda ugly.
@simoncropp · 6 days ago
in the IDE, the snapshot files actually get nested under the test file that generated them, and they collapse by default, so they don't clutter your dev experience
@Hidacal · 9 days ago
This looks awful. For a few saved lines and a bit of writing effort you're throwing away an absolute ton of extremely valuable context. It's nice and all if you're working at a startup writing something new and hopping jobs twice a year, but in any system that needs to run for a long time, where other people will come along and won't know how the test result should look, this is nothing but bad in the long run.
@krajekdev · 9 days ago
It is just a tool among others. In practice I find it very effective, but I use it in around 5-10% of tests.
@gileee · 9 days ago
If you have new people working on a project and they have no idea what's going on, even though the patterns are very clear, I think you have other issues there.
@qj0n · 8 days ago
@@Hidacal To be honest, Nick didn't describe the benefits well. There are good applications for these tests, and their point is not just to save a couple of lines.
Firstly, these tests are easy to add to existing code which we assume to be correct, as nobody has complained for years. Every bigger project has untested black spots which are hard to refactor - adding these tests enables safe refactoring, as every behaviour change becomes visible.
Secondly, in code which can generate multiple different outputs that would all be considered valid, proper checking can be non-trivial, so this approach makes it much easier, though it couples your tests to one particular result, which you usually want to avoid. Thanks to the automated diff tooling, it can be acceptable.
Both of these cases are things I've mostly seen in older, bigger projects. Startups are usually fine with traditional tests (but of course can use approval tests as well)
@jr.ziegler · 8 days ago
This is really great content!!! How do you deal with it in your CI pipelines?
@modernkennnern · 8 days ago
I've only semi-recently started testing (yes, testing in general), and while I do use Verify, I only use it for a single test: verifying my OpenAPI document. It's amazing for that. Conversely, I have issues creating, maintaining, and especially debugging my integration test assertions; "this giga object is different from this other giga object... good luck figuring out which parts differ". I don't know why I haven't put 2 and 2 together and used Verify on my integration tests. Gotta try that out tomorrow..
@alizia7114 · 8 days ago
This is good if you trust all your teammates to be true to the verified cases, and that you at code review time can spot check to find any faults. In reality it’s just adding more review work, at least that’s how I see it 😅. I do like how it simplifies assertions though
@mannyb4265 · 6 days ago
Looks like it could be useful for creating characterization tests for legacy systems
@majorax19 · 7 days ago
This probably does not work well if your input changes per run. For example, if you have a complex model, it can be difficult to create it manually, so some developers build the model with random data, which can change the output on every test run and make the snapshot differ each time.
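The usual fix for randomized builders is to seed the generator: the data still looks varied, but every run produces byte-identical output, so the snapshot stays stable. A language-agnostic sketch (Python; the `make_person` builder and its value pools are invented):

```python
import random

FIRST = ["Ada", "Grace", "Alan", "Edsger"]
LAST = ["Lovelace", "Hopper", "Turing", "Dijkstra"]


def make_person(rng: random.Random) -> dict:
    """Build 'random' test data from an injected, seedable RNG."""
    return {
        "first": rng.choice(FIRST),
        "last": rng.choice(LAST),
        "age": rng.randint(18, 90),
    }


def build_people(seed: int = 42, count: int = 3) -> list:
    # Same seed -> same sequence -> same people -> stable snapshot.
    rng = random.Random(seed)
    return [make_person(rng) for _ in range(count)]
```

The key design choice is injecting the RNG rather than using a global one: the test controls the seed, so the "random" data is deterministic, which is the same idea as seeding Bogus in .NET.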
@ВалентинТ-х6ц · 5 days ago
The worst case is when a bulk of snapshots is invalidated by some change, like adding an extra header to a response
@phreakadelle · 9 days ago
Looks amazing! I will definitely try that! Star is on its way!
@W1ese1 · 9 days ago
The performance implications have already been mentioned, so I want to add another potential downside of snapshot testing: you're not able to do TDD effectively. Of course you can still do TDD with it, but it will slow you down massively if you require multiple runs until the code returns what you actually expect. Still, I like the concept of snapshot testing, as others have already mentioned, for integration tests. For example, we use it for code that creates a printout which should look the same each time. So thanks for bringing this concept to the community once again!
@mkwpaul · 9 days ago
TDD is only really useful if you have very specific, very clearly delineated requirements - like when you're implementing a known solution to a problem, IMO.
@qj0n · 8 days ago
@ I don't agree; TDD is great when developing less-specified code as well. In such cases, you often write one class and then change it every time you add more code, to make everything fit. In TDD, you make decisions about the code design before you write it, so you can write down all the decisions about class structure and method contracts before implementation. It's just a matter of making the required decisions before writing the implementation.
But of course, approval tests don't work well with TDD; I use them only in rare cases
@nerfzinet · 8 days ago
You can use Verify for TDD; it makes TDD much easier and faster. Write a test, run it, see that it's insufficient. Add code, run the test to see that what you did made sense. Keep adding code until the output looks the way you want it to, then save the output, and now you have a solid test that covers everything you've done - and you spent probably 98% of your time actually solving the problem instead of 50% of it writing tests. Or, if you want the test to turn green when you've done something, you can incrementally add stuff to the expected file and make it green, add more, make it green again, etc. All Verify does to the TDD flow is remove or shorten the step where you stop working on the problem and write more tests. Everything else is the same. To me it seems like a huge productivity boost.
@DryBones111 · 8 days ago
Snapshot tests aren't particularly suited to unit tests so I'd recommend sticking to other approaches there. Use snapshots to test external outputs/interfaces.
@alessandro-gargiulo · 8 days ago
Extremely powerful, especially when you want to start refactoring some code without breaking the logic.
@MayronWoW · 5 days ago
It always bugs me when people use long method names for tests when `[Fact(Name = "")]` exists (it might be called DisplayName, but I'm away from my computer atm). It lets you write test names that include spaces, and they show up much nicer in the IDE and in CI/CD output. Also, because xUnit uses the word "fact", you shouldn't include "should" in your test names. Write the name of the test like a fact, such as "X does Y" instead of "X should do Y".
@dumax97 · 9 days ago
We actually used a similar approach for testing a code generator when implementing the controls library.
@gabrielgm244 · 6 days ago
Interesting package, but I'm not sure how we would implement it. How would we go about using it with Faker or other random-data generators in builders? We use them extensively to differentiate data in lists easily (without having to explicitly define everything). I don't really want to just move the assertion logic into a Verify configuration.
@ozsvartkaroly · 9 days ago
Thank you for the video, it was interesting to learn this concept. However, I'm very much against its use in unit tests because (if I understand it correctly, please correct me if I'm wrong) every single unit test execution would involve reading one (or maybe more?) file from disk, which 1) hurts performance, 2) wears out disk hardware unnecessarily, and 3) violates "unit" testing, since now every unit test reads a file from the file system. Even without this, there are many performance pitfalls in unit tests in general, and developers have to be careful not to introduce performance issues, since this can scale up to many thousands of unit test cases. What do you think about this? For tests that are already slower by nature (e.g. integration, API, acceptance tests), this is a great option to consider, because there the performance hit can be negligible compared to those tests' existing run time.
@garylee867 · 9 days ago
I don't think the performance drop would be that significant. Modern SSDs are extremely fast. Say you have 5000 test cases (and not many projects have that many), each spending an extra 1 ms (which is crazy slow, as far as I know); that adds up to an extra 5 seconds. Moreover, your tests run in parallel rather than sequentially, so in reality those milliseconds don't add up like that.
@ozsvartkaroly · 9 days ago
@@garylee867 In my opinion and experience, good unit tests do (and should) run at the two-to-three-digit microsecond level (~10-100 μs). Adding a 1 ms delay to that is a performance nightmare.
@adambickford8720 · 9 days ago
Wearing out the disk? This is some *serious* straw grasping
@garylee867 · 9 days ago
@@ozsvartkaroly Performance nightmare? How bad is it to have an extra 5 seconds for a test suite with 5000 tests? And that is assuming you run your tests one by one, which no one really does. Not everything - actually, maybe 99% of the things we do - needs extreme performance; those extra few seconds pretty much never matter in real life.
@ozsvartkaroly · 9 days ago
@@garylee867 Maybe I exaggerated with the "performance nightmare" expression. Let's agree that everyone should benchmark their own use case and, based on that, decide whether it is worth it (many may say it is). An important point of view here is that just because developer PCs may (or may not) have fast NVMe SSDs, that might not be true for the CI servers where unit tests also run frequently. By the way, this debate got me curious about the exact performance overhead of this NuGet package. My main and most important point on this topic: everything is a tradeoff. By measuring things well and doing benchmarks, make sure you understand the implications of your decision before making it. It has code-maintainability benefits but performance (and thus cost) drawbacks.
@mattymattffs · 8 days ago
We use snapshots for regression testing, both to catch errors and performance regressions. The same dataset getting slower shouldn't happen
@inzyster · 8 days ago
I had no idea they had a MassTransit package, sweet!
@RobertGray · 7 days ago
Snapshots are great if your system needs to output a file (e.g. csv or json for ingestion by another system). Otherwise you lose the ability to inspect the expected results in a PR.
@simoncropp · 6 days ago
the changes to the snapshots are included in the PR. it actually makes PR reviews simpler
@simoncropp · 6 days ago
snapshots are, usually, a serialization of the result of a method. that method does not need to output a file to be compatible with snapshot testing
@xcastor3 · 5 days ago
You can see the file in the PR...
@tonyhenrich6766 · 6 days ago
This should not replace many types of unit tests. It has its use cases. It's just another way to test.
@tempusmagia486 · 7 days ago
I don't know about adding files to a project, especially if we end up with 100 unit tests in it. It becomes easier to understand the test with fluent assertions than by having to open the txt file
@cdarrigo · 9 days ago
If I create an acceptance test using snapshot testing, and at some point an additional property is added to the response payload, even though this won't break my app, it will cause the test to fail. Do these tests become brittle over time, raising a lot of false positives when new properties are added to existing payloads?
@qj0n · 8 days ago
@@cdarrigo Well, this is exactly the point of approval tests (aka snapshot tests, though I don't like that name). It's good if you want to manually review every externally visible change. If you don't care about that, you can traditionally check just the properties you want. That's why those tests are often written before a refactor - to ensure that behaviour is unchanged.
Also, those tests are good when you don't expect the result to change often and it's not trivial to check automatically whether it's correct. Like if you write your own serialiser for some format, where several different results might be correct.
@DryBones111 · 8 days ago
Then you check in the change to the snapshot of the added property. Any code reviewer can see immediately in the review diff what was changed and if the task was "add this property" then they'll approve.
@berndr945 · 3 days ago
@nickchapsas Could you do a video on how we can rapidly build test coverage using the Verify package, the InlineData attribute (or similar), plus some seeded Bogus generators?
@frankquednau889 · 8 days ago
Nick on mapping: I'm fine manually mapping stuff onto each other, nice and explicit, hundreds of LoC is no problem. Nick on testing: Hey, I don't want to manually compare stuff, let me automate that part and make it more opaque by turning compile-time errors into runtime errors.
@frankquednau889 · 8 days ago
But granted, we've been using snapshot testing in the frontend for half a decade; it does have its place.
@simoncropp · 6 days ago
the difference is that mapping is production functionality that is fragile and difficult to debug. Verify is primarily a simple serializer, and you rarely need to debug into that serialization
@CarrigansGuitarClub · 5 days ago
Not sure about this package... I can see many developers accepting the expected result without verifying the actual results themselves!
@thomasleberre1119 · 8 days ago
Nick, do you think there's a way to mix it with TUnit?
@BasyrasCZ · 9 days ago
Nah, I'll pass on this one
@MarvijoSoftware · 9 days ago
Same, but I'll definitely consider it for niche integration tests
@qj0n · 8 days ago
It's a good technique when the set of acceptable results is bigger and checking the properties of the result is complex. In that case you couple your test to one particular result, which is usually a bad thing, but the whole automation of the diff review makes it OK
@matthewjames1648 · 8 days ago
Yeah, I will try this one
@seniors-vg2kd · 8 days ago
It can be useful to cover old, untested functionality... like "I don't know how or why it works, but no one has complained for years, so I believe it does what it's supposed to do" 😅 ... For code I wrote myself - NO! I don't trust myself to write correct logic, and often edge cases pop up that lead to serious requirement adjustments, because no one thought about them
@qj0n · 8 days ago
@@seniors-vg2kd It's OK for your own new code when, e.g., you want to test PDF generation - you just look at the result, and if it's fine you approve it, and you have a test detecting that something has changed. The same for app screenshots. Or, from my own recent experience, a JSON schema generator - there are hundreds of possible correct results, but I just review the result and approve
@majmicky · 6 days ago
What about using Bogus with Verify? This looks very nice even for validating contract changes
@nickchapsas · 6 days ago
I am using Bogus in this video so it works fine as long as you use a seed
@xcastor3 · 5 days ago
@@nickchapsas What about AutoFixture? Very often it generates values like 'Property', which makes it impossible to be 'Verified'. Any suggestion besides setting those properties manually?
@Jonathoncu29t9 күн бұрын
I don't feel it is necessary in the context of unit tests. It simply creates the expected output for me. I would rather hand-roll it to make sure I know what I am doing. Remember the previous video about AutoMapper? Is it really necessary to have a third-party library to do such a simple thing?
@diadetediotedio69189 күн бұрын
I think if he uses it extensively on his site, he thinks it is worth it, and that's why he's showing it. Not sure what you want exactly.
@diadetediotedio69189 күн бұрын
After watching the video in full, I feel your comment is even more misguided. He literally said the question is not just about unit tests, and it is not a "simple thing" because it involves a lot of little headaches that the package solves for you. It is very popular and has maintainers willing to make it better over time (which your own solution will probably lack, because most of your time is more productively spent on your own use case).
@Jonathoncu29t8 күн бұрын
So the video mentioned that it was "not just about unit tests", and my entire comment was "in the context of unit tests". Does that mean we both agree, or was my comment "even more misguided"?
@Jonathoncu29t8 күн бұрын
An open discussion of my opinion about the library in a particular context. I am pretty sure I would find this library useful in various use cases. I never said it was "not worth it" entirely.
@puntoycoma478 күн бұрын
I've been doing this for ages; I didn't know it had a name
@AEF23C208 күн бұрын
None of these tests solve the problem of performance testing, which means it's all useless
@carlitoz4502 күн бұрын
It would be fair to mention that all of this is a successor of ApprovalTests, which was developed many years ago by Llewellyn Falco...
@dmstrat9 күн бұрын
Yeah, this is what I call "drawing the target around the arrow after it's shot" - not my preferred approach, as in my experience it's usually used to write tests after the code instead of before it.
@SuperLabeled9 күн бұрын
At the end you kind of gloss over a concept I'm intrigued by: taking a screenshot of my current application and verifying it. Do you know of any tools that help automate the creation of those screenshots?
@jfpinero4 күн бұрын
We have some repos with thousands of tests - ain't no way we're storing thousands of verify files
@nonamezz208 күн бұрын
I wonder if I should do this manual action for the 3000 tests the suite is currently running. It's a no from me.
@xcastor35 күн бұрын
It doesn't have to be all or nothing. You can start doing it for new tests. Then, when you have to change something in an existing test, you can also replace the assertion if you want.
@homeropata9 күн бұрын
Is it okay to use random input in integration tests? I feel that it misses some verification. I like to see that the correct value is saved in the proper place, that there's no mapping error along the way, etc.
@billy65bob8 күн бұрын
Presumably this is why 2 identical guids were replaced with Guid_1 in the example output. The exact value doesn't matter, but them being consistent very much does.
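For context, Verify scrubs volatile values like GUIDs and dates by default, replacing repeated occurrences consistently (Guid_1, Guid_2, ...), and custom scrubbers can be added. A sketch, assuming the Verify.Xunit package (the order shape is illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Xunit;
using static VerifyXunit.Verifier;

public class OrderTests
{
    [Fact]
    public Task VolatileValues_AreScrubbed()
    {
        var id = Guid.NewGuid();
        var order = new
        {
            Id = id,                    // becomes Guid_1 in the snapshot
            ParentId = id,              // same value => also Guid_1, so consistency is still asserted
            CreatedAt = DateTime.UtcNow // becomes DateTime_1
        };

        return Verify(order)
            // example of a custom scrub for other volatile text
            .ScrubLinesContaining("Hostname");
    }
}
```

Because identical GUIDs map to the same placeholder, the snapshot still proves the two fields are equal even though the exact value is scrubbed away.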
@Robert-yw5ms8 күн бұрын
I briefly flirted with snapshot testing in React years ago. It was a disaster, because every change we made also changed the HTML we rendered. Maybe it was better suited to the backend all along.
@robadobdob8 күн бұрын
I don't know what snapshot library you used, but some allow for small variations before they fail - e.g. if a snapshot adds a line break, it can be ignored. But even then, for UI testing, seeing how the markup changes is exactly what you want - especially if it changes and you didn't expect it to.
@qj0n8 күн бұрын
@@Robert-yw5ms In the case of the UI layer, approval tests (aka snapshots) are quite good if you want pixel-perfect precision. We had such a tool when testing installers translated into 25 languages - translators manually reviewed all screenshots to check that no invalid line break or other issue occurred, and we stored the screenshots to compare automatically. Of course, every change required another review process, so we did our best to never change a single pixel. But that was the price of the quality bar expected by the business (although the company was Intel, and lately it's not exactly known for high quality ;) )
@Robert-yw5ms8 күн бұрын
@@robadobdob We decided to focus on what actually matters in our tests: asserting that specific elements appear, that inputs have the correct values and are enabled or disabled at the correct moments, that the header and footer are actually there, that the back arrow is present, that clicking the back arrow goes to the expected page, etc. Asserting that the HTML is exactly the same as last time is just a brittle test.
@DryBones1118 күн бұрын
I'd agree that snapshots are ill-suited to HTML. HTML is parsed by a computer but ultimately consumed by a human. Two very different HTML snippets can be interpreted, or even rendered, identically for a user. Snapshots are better suited to programmatic APIs like HTTP responses or stdout.
@robadobdob8 күн бұрын
@ Snapshot testing HTML components is not about checking how it looks; rather, it verifies that the markup is what you expect under different test cases.
@GuidoSmeets3858 күн бұрын
Does it work well with gRPC?
@x0rld1597 күн бұрын
Have you tried the new auto-run tests feature in Rider?
@PeriMCS7 күн бұрын
Back in the day there was something called ApprovalTests. It worked exactly the same way. Is this a rewrite? BTW, terrible test naming convention ;)
@terencetcf9 күн бұрын
Looks like snapshot testing from Jest. It simplifies things, but there are always pros and cons. I have used similar snapshots with Jest for Node.js API projects - not all projects though, because not everyone agrees with it, and to use things like that everyone on the team needs to be very careful not to abuse it, etc...
@weluvmusicz9 күн бұрын
I don't get it. What's the benefit over a normal unit test?
@mojizze9 күн бұрын
Lol. Looks kinda stressful
@nickchapsas9 күн бұрын
Simplifies the setup and assertion significantly
@parlor31159 күн бұрын
You first run the test and decide if the output is good or not. If it's good, then it's saved and becomes the "expected" for future runs. If it's not, then you adjust the SUT and try again.
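That approve-then-compare loop looks roughly like this in code (a sketch assuming the Verify.Xunit package; the invoice shape is illustrative):

```csharp
using System.Threading.Tasks;
using Xunit;
using static VerifyXunit.Verifier;

public class InvoiceTests
{
    [Fact]
    public Task Totals_MatchApprovedSnapshot()
    {
        var invoice = new { Subtotal = 100m, Tax = 20m, Total = 120m };

        // First run: Verify writes a *.received.txt file and fails the test.
        // You review the output; approving it promotes the file to *.verified.txt.
        // Subsequent runs diff the new output against the verified file.
        return Verify(invoice);
    }
}
```

The `*.verified.txt` file is checked into source control, so reviewers see snapshot changes as ordinary diffs in a PR.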
@andreaslassak21118 күн бұрын
@@parlor3115 Why not just write the correct assertion :D?
@parlor31158 күн бұрын
@@andreaslassak2111 That's the appeal of it, to some people at least: you write less code.
@mt89vein8 күн бұрын
I use snapshot testing for document parse results, because I have thousands of them, including versioning. It is super easy to add new test cases for the parser - I just add one more document to the folder and that's it. This technique can also help with API testing to verify responses, or with testing the EntityFrameworkCore SQL generator, so you will know when EF decides to emit a different version of a query, especially after updating it.
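The folder-driven pattern described above can be sketched with an xUnit theory (assuming Verify.Xunit; `DocumentParser` and the `TestDocuments` folder are hypothetical stand-ins):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Xunit;
using static VerifyXunit.Verifier;

public class ParserSnapshotTests
{
    public static IEnumerable<object[]> Documents()
    {
        // Every file dropped into the folder becomes a test case automatically.
        foreach (var path in Directory.EnumerateFiles("TestDocuments"))
            yield return new object[] { path };
    }

    [Theory]
    [MemberData(nameof(Documents))]
    public Task ParsesDocument(string path)
    {
        var result = DocumentParser.Parse(File.ReadAllText(path)); // hypothetical parser

        // Give each case its own snapshot file, keyed by document name.
        return Verify(result)
            .UseTextForParameters(Path.GetFileNameWithoutExtension(path));
    }
}
```

Adding a test case is then literally "add one more document to the folder" and approve the snapshot it produces.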
@AlexParaVoz5 күн бұрын
Good for integration tests, but probably bad for unit tests. Unit tests should be independent and expressive. Imagine how many unit tests will fail with this approach after adding a new field to a model - probably all of them.
@ninjis9 күн бұрын
Does Dometrain offer any sort of certification or proof of progress upon completion of its courses?
@nickchapsas9 күн бұрын
Yes
@nanvlad9 күн бұрын
I guess it only works with static input/output data. For data generated by Bogus we can't use Verify(). P.S. Waiting for the property-based testing video
@RafixxGaming8 күн бұрын
If you set the seed for the Bogus generators, you will get the same values every time you run your tests
@BreakerGandalfStyle8 күн бұрын
@@RafixxGaming ...which would make using Bogus kinda redundant…
@volsand9 күн бұрын
Maybe this would be nicer if the expected result was in generated code? (And why does it need to keep the received results in files? Since everything is already in memory, it could just load the expected result and compare them.)
@mkwpaul9 күн бұрын
Having the expected result in code means you need to recompile when you want to test with updated data. Loading data from an assembly is also loading it from disk, so it's not any faster than loading it from a text file. And finally, text files are really, really simple: it's just the data and nothing else. Imagine using it long term and having a detailed, clear history of snapshots of what the results of your software looked like. Having that in code would just add noise when reading, plus technical complications.
@francisgroleau29 күн бұрын
Could you pass multiple different snapshots to the same test, like with an NUnit Theory?
@sergeyborisov34378 күн бұрын
Works nicely. You may need to customize the generated file name.
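Verify exposes settings for this; a sketch assuming the Verify.Xunit package (`UseDirectory` and `UseFileName` relocate and shorten the generated snapshot paths, which also helps with the Windows path-length issue mentioned below):

```csharp
using System.Threading.Tasks;
using VerifyTests;
using Xunit;
using static VerifyXunit.Verifier;

public class ReportTests
{
    [Fact]
    public Task Report_Snapshot_WithShortFileName()
    {
        var settings = new VerifySettings();
        settings.UseDirectory("Snapshots"); // keep snapshots in a dedicated folder
        settings.UseFileName("report");     // short, stable name instead of the default Class.Method

        return Verify(new { Title = "Monthly report", Rows = 3 }, settings);
    }
}
```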
@arkord769 күн бұрын
Indeed, this is a great lib. But on Windows I ran into the issue that the length of the Verify .txt file names can lead to a path-length limit error.
@gileee9 күн бұрын
Don't Git, VS Code, and other apps override the Windows path limit when you install them?
@arkord769 күн бұрын
@@gileee Don't know. I had to use the `git config --global core.longpaths true` command instead. But this was some years ago.
@gileee9 күн бұрын
@@arkord76 The installer for Git has a checkbox to do that immediately now. Don't know if it's checked by default.
@arkord768 күн бұрын
@@gileee Good to know! Thanks for the tip.
@izobrr9 күн бұрын
This is DDT - development driven testing
@kristofbe12 күн бұрын
Won't this slow down the tests, because it needs to read the files? I can understand this would be handy for integration tests, but for unit tests you ideally want everything to be in-memory.
@Robert-yw5ms8 күн бұрын
Safe to say Nick doesn't practice TDD
@mmmmkkk3 күн бұрын
It does sound like an optimization for some scenarios, but one big disadvantage of this approach is that it almost completely removes the documentation aspect of unit/integration tests. Normally you can just look at the asserts and see what the expectation is; now the snapshot will contain tons of stuff and no info on how it's all bound together. Anyway, definitely NOT THE ONLY TYPE OF TESTING YOU NEED :) @nick, any chance you give up the clickbait titles at some point?
@delu2 күн бұрын
The verified file is named the same as the test, and Visual Studio groups them together
@mmmmkkk2 күн бұрын
@@delu And how does that change anything in this respect? :)
@delu2 күн бұрын
@@mmmmkkk I'm just saying that with a little support from the IDE you get easy linking between the test and the verified file to check.
@delu2 күн бұрын
Anyway, I'd also like to point out that 99% of all-caps titles are clickbait
@mmmmkkk2 күн бұрын
@@delu OK, but that wasn't my point: if you just persist a full-blown JSON response, it doesn't have the same documentation value as explicitly saying, 'I'm providing this input and this is my expectation for the output, with these references between them.'
@ruslan_yefimov9 күн бұрын
This is the first library you've really sold me on, huh... Sounds like it can save a LOT of developer time when writing integration tests
@nickchapsas9 күн бұрын
It has saved me such an insane amount of time it’s crazy
@andreaslassak21118 күн бұрын
"The expected result is generated by the library" - this scares me a lot. You're putting all the responsibility for correct assertions onto a third-party library. Another risk is hiding the expected results away in an external folder. Huge risk from my QA perspective.
@wolfinpain7 күн бұрын
Those damn timestamps though
@kakskdjfnakodndn8 күн бұрын
Looks great for a small, personal project. Looks like a nightmare in an enterprise setting.
@BreakerGandalfStyle8 күн бұрын
I was thinking the exact same thing. I can't see that working for bigger teams and/or a lazy PR culture. Validating JSON data when reviewing tests sounds like a nightmare, to be honest. I can only see it working with teammates you really trust, but on the other hand, just waving every test through in a PR review is an anti-pattern lol. Another thing is that this won't work with random data.
@simoncropp6 күн бұрын
@@BreakerGandalfStyle You don't validate the whole JSON snapshot when reviewing a PR; you only review the differences.
@Sefriol9 күн бұрын
While there seems to be a lot of usability in this kind of library, I think it requires a very specific and complex use case to really be worth it. Many times, Copilot can automatically complete most of the test cases once you learn how to code with it. I am not sold that this would make more than a marginal difference to development speed. It seems to have drawbacks as well. In simple cases snapshotting seems nice, but when you suddenly have to configure settings for it and modify the snapshot result, it's pretty much the same as what you would do in any other test framework. When you add mocking (which I try to avoid, but with microservices you kind of have to) and other testing requirements on top of this, it makes the testing environment even more complex than it needs to be. And you lose the at-a-glance value of the tests' contents: now you need to look at two files to know what a specific test does, and you also have more files in git. Not sure how this affects test refactoring either. Of course, every project can choose which tools to use. Many still use libraries like AutoMapper (and part of the AutoMapper criticism could apply to this library too). In some very complex test cases this may provide value, but I would advise against using this library by default.
@TheBekker_9 күн бұрын
Looks awesome for integration tests! Not sure I'm sold for unit testing :)
@another_random_video_channel9 күн бұрын
Shut up and take my money
@mnavarrocarter9 күн бұрын
Please no. Don't do this. You are making your tests extremely easy to break. Ideally, a test should fail only for invalid changes, not for any change. Otherwise, you will spend all of your time fixing tests for every change to your code.
@gileee9 күн бұрын
I disagree. This tests the response from a system. The whole response should be part of your contract, and any change to that contract should be expected, tested, validated, and communicated to others on the team who depend on it - like the frontend, because it might require a change on their end, and after their change your service is required to uphold that contract. But even in general, changes to data and behavior will require you to change your tests anyway, because their requests might be missing fields, or the asserts expect fields that are now missing, or there are additional fields they didn't expect. It's more dangerous to let your system gloss over those changes if someone forgets to update the tests, because tests basically replicate the real scenarios users depend on. Changes should be handled with care.
@mnavarrocarter8 күн бұрын
@gileee You are not testing the contract here; you are testing a value. Any change in the value fails the test, even when it is a valid change. This is analogous to testing JSON endpoints by comparing two JSON strings versus asserting that the response validates against a JSON schema. Which test is more robust and resilient: a string comparison that can fail for any small detail that changes, or a proper contract definition like a JSON schema?
@gileee8 күн бұрын
@mnavarrocarter The only way to know whether your API is respecting a contract is to test the data it produces. I don't see how this approach makes incorrect assumptions about the data, like you implied. I mean, you're basically using a JSON schema, which you point out as the correct option. It doesn't lock anything down: Nick even showed how you can scrub data from the Verify snapshots to make them agnostic to specifics at the level you choose. This library is just a slightly different way to write regular tests - you just don't have to write the asserts yourself. No one's stopping you from only validating the response for a 401 Unauthorized status, for example, and only asserting that in some test, or whatever else you need to do. It's just a nice, strict, and flexible replacement for something like FluentAssertions' "BeEquivalentTo" that validates everything at once and also doesn't require recompiling the app to change the test data.
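For example, when only part of the payload matters, Verify can ignore the rest rather than assert the full value (a sketch assuming Verify.Xunit; the response shape is illustrative):

```csharp
using System.Threading.Tasks;
using Xunit;
using static VerifyXunit.Verifier;

public class AuthTests
{
    [Fact]
    public Task Unauthorized_Response_Shape()
    {
        var response = new { Status = 401, Title = "Unauthorized", TraceId = "0af7651916cd43dd" };

        // Snapshot only what the contract promises; TraceId is per-request noise.
        return Verify(response)
            .IgnoreMember("TraceId");
    }
}
```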
@ilia-tsvetkov6 күн бұрын
We use this library for almost all tests in our solution, even for checking generated SQL. The library is excellent and has elevated the quality of our tests to a new level.
@IronDoctorChris8 күн бұрын
I'm a big fan of snapshot tests in appropriate places, but using them as a default is a terrible idea and smacks of laziness imo. You lose all the context that assertions give you about which behaviour the dev writing the test is trying to protect from change. It's by definition the most brittle you can make a test (except for asserting against internally used mocks).
@temp508 күн бұрын
Call me old-fashioned, but I don't like this.
@GeorgeGeorge-u5k6 күн бұрын
Dear YouTubers, please stop using clickbait thumbnails or you will end up on my block list.
@TheXaronv7 күн бұрын
Hello. I've tried it for Blazor using Verify.Blazor, but when I try to write Verify(component), it says it doesn't recognize the method Verify... What did I do wrong?