Should We Write a Unit Test or an End-to-End Test?




This question most often appears as a philosophical conversation along the lines of: if we can only write one test for this feature, should it be a unit test or an end-to-end test? In other words, time and resources are limited, so which kind of test would be most effective?

End-to-end tests answer the question: does the system ultimately deliver what it needs to deliver? I have also encountered scenarios, however, where people are hostile to unit testing and view it as a waste of time. They advocate using end-to-end tests exclusively, viewing unit tests as an obstacle to evolving the system, as requiring too much time and effort to refactor, or as redundant, given that the overall behaviours of the system are already confirmed by end-to-end tests.

In this article, I'll give my view on this question. My perspective is shaped by my experience: I've worked on hosted services as well as infrastructure software that is installed on-premises and operated by the customer. For somebody who has worked in a different domain, where measuring and analyzing the entire software process is simpler, or where the operational environment is much more forgiving of error, I can understand how their experience might lead to a different answer.

These systems are composed of many distinct components, must perform reliably and consistently, and must meet crosscutting requirements for security, scalability, and performance. They need to evolve to include new functionality and bug fixes without introducing regressions in existing behaviour. Testing these systems end-to-end is always a challenge, as there are many dependencies that must be in place in order to test even a small portion of the overall system. Reproducing the diversity of issues encountered in operational settings can also be difficult.

End-to-End Tests

An end-to-end test is normally either a functional test, which confirms that a specific feature of the system behaves as expected, or an acceptance test, which not only verifies that the feature functions correctly, but also confirms that the wider system continues to meet requirements for performance, scalability, security, maintainability, and so on.

End-to-end tests usually require deploying the system and its dependencies, which can consume a significant amount of time and resources. The tests can take a very long time to run and are subject to variability, given the number of moving parts.
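To make the shape of such a test concrete, here is a minimal, hedged sketch: it "deploys" a tiny hypothetical HTTP service in-process and then verifies behaviour through its public interface, the way an end-to-end test would against a real deployment. The service, the /health endpoint, and its response are invented for illustration.

```python
# Sketch of an end-to-end-style test: stand up a (toy) service, then
# exercise it only through its public HTTP interface.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    # Hypothetical endpoint standing in for a real deployed service.
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 lets the OS pick a free port, so the "deployment" is self-contained.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The test itself: drive the system from the outside, as a client would.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    status = resp.status
    payload = resp.read()

server.shutdown()
```

In a real end-to-end test, the "deployment" step alone might involve provisioning machines, databases, and network configuration, which is exactly where the time and variability come from.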

Despite this cost, I wouldn't want to live without end-to-end tests. An end-to-end test not only verifies that a feature works, but verifies that it works under normal operational conditions. This kind of test is indispensable. I want to know that the system I'm delivering meets its requirements, and that it will continue to meet them as the system evolves over time.

Another important consideration with regard to end-to-end tests is that the most complex and subtle bugs in software systems can't be seen in isolation; they are only encountered when the components are exercised as part of an integrated system. I have written previously about a set of challenging bugs I encountered that couldn't be detected with unit tests, or even with a functional test exercising one feature in isolation. End-to-end tests give you the chance to examine the application under the conditions in which these bugs arise, and they are invaluable for building robust and reliable software systems.

Unit Tests

A unit test typically has no dependencies, can be executed in milliseconds, and is therefore extremely reliable.
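As a concrete sketch of what is meant here, the following unit test has no external dependencies, runs in milliseconds, and pins any failure to one small unit of code. The parse_version function is a hypothetical example, not from the original article.

```python
# A unit test in the classic sense: no deployment, no dependencies,
# instant feedback. parse_version is a hypothetical unit under test.
import unittest

def parse_version(text):
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    major, minor, patch = text.split(".")
    return (int(major), int(minor), int(patch))

class ParseVersionTest(unittest.TestCase):
    def test_parses_three_components(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_rejects_malformed_input(self):
        # Two components unpack into three names: ValueError.
        with self.assertRaises(ValueError):
            parse_version("1.2")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

When a test like this fails, there is exactly one function to look at, which is what makes the failure isolation so cheap compared to an end-to-end failure.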

End-to-end tests serve a vital purpose, but when a system is tested mostly or exclusively with end-to-end tests, problems arise. The Google Testing Blog post Just Say No to More End-to-End Tests does a fantastic job of characterizing the problems of relying too heavily on end-to-end tests. Given the inherent complexity of end-to-end tests, it can take hours or days to get feedback.

When an end-to-end test fails, it is often quite difficult to identify the component that caused the failure. Even then, the failure may simply be a consequence of the variability introduced by the test infrastructure itself. By comparison, unit tests are fast and reliable, and they isolate failures to a specific unit of code. The Google article suggests that tests are most effectively organized as a pyramid, in which the largest number of tests are unit tests, followed by a moderate number of integration tests, and a small number of end-to-end tests.

I worked for a number of years on a software system that was tested almost exclusively with end-to-end tests. It could take a day or more to run these tests. There were regular test failures caused by the test infrastructure itself. When a test failed, it was often very difficult to determine why. The team spent a great deal of time characterizing test failures, a costly activity that was accepted as part of the daily routine. Rinse and repeat.

I believe unit tests serve a distinct purpose and are complementary to end-to-end tests. Unit tests are more about developer productivity and creativity than about verifying that the system as a whole functions correctly. For a developer, the feedback cycle provided by end-to-end tests is far too long. If the activation energy required for experimentation and feedback becomes too high, it becomes a barrier to exploration, learning, and making progress. Unit tests provide almost instant feedback. They support experimentation, since there is no need to deploy the system to run the tests. Unit tests can easily be run on the developer's workstation and integrate seamlessly with IDEs.

Unit tests, unlike end-to-end tests, can easily be run when code is committed to source control. If a unit test fails, the commit can be rejected. This means that the code under source control can be kept in a state where it always works. This has great benefits, particularly when dependencies are shared across several teams or projects.
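The commit-gating idea can be sketched as follows: run the unit test suite programmatically and allow the commit only when every test passes. In practice this logic would live in a pre-commit hook or a CI check; the suite and names here are hypothetical.

```python
# Hedged sketch of a commit gate: the commit proceeds only if the fast
# unit test suite is green. FastUnitTests stands in for a real suite.
import unittest

class FastUnitTests(unittest.TestCase):
    def test_string_join(self):
        self.assertEqual("-".join(["a", "b"]), "a-b")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(FastUnitTests)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)

# A real hook would exit non-zero here to reject the commit.
commit_allowed = outcome.wasSuccessful()
```

Because the suite runs in milliseconds, this check is cheap enough to run on every commit, which is precisely what makes "the code always works" achievable; gating commits on an hours-long end-to-end suite would not be.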

Another complementary aspect of unit tests is that they encourage thinking about how the code is factored. This is not something end-to-end tests generally promote. Test-Driven Development (TDD) has long been a popular practice for helping developers write better code. While TDD does not necessarily prescribe unit tests, they are generally essential to it.

I'm not a purist when it comes to TDD. I seldom write formal unit tests upfront. As I'm working, I generally write tests to support experimentation, preferring to formalize my tests once I have settled on a path forward. Since I started unit testing, however, it has certainly helped me build better interfaces and simpler implementations than I would have otherwise.

In the introduction, I mentioned two specific concerns regarding writing unit tests alongside end-to-end tests. The first was the overlap between unit tests and end-to-end tests. There will not necessarily be a one-to-one relationship between a unit test and an end-to-end test, but there will always be some duplication. I think it's valuable to test the same operation under different conditions, and I embrace this duplication.

I'd rather something be tested twice than not at all, and I believe that examining the same behaviour from different perspectives, at different times, by different people, ends up improving the overall system. The value of this investment is not always apparent. Often it is not the test artifacts themselves, but the act of testing, that ends up improving the overall system and the skills of the people who work on it.

The second concern was that unit tests make the system difficult to evolve. I find that simple unit tests, which focus on testing a single thing per test method, are rarely difficult to refactor, if refactoring is even necessary. I have never found unit tests to be a burden in terms of evolving a system.

In fact, I think a system tested largely with end-to-end tests ultimately becomes harder to evolve, as people become fearful of making changes when they cannot readily characterize the consequences. I can understand how, when unit tests involve a great deal of mocking, one might feel it's nearly impossible to evolve the system.

With a great deal of mocking, however, one inevitably ends up tying the tests directly to the implementation. When a test requires mocking, I usually reconsider the design and modify my approach so that the mocking becomes unnecessary, or I re-evaluate whether the feature would be more effectively tested with just an end-to-end test.
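One common way to make mocking unnecessary is to pass the collaborator in as a parameter instead of reaching for it inside the function. The sketch below is a hypothetical illustration of that idea, not code from the article: the test supplies a plain dict where production code would supply a real lookup, so no mock library and no knowledge of internals is needed.

```python
# Sketch: accept the dependency as a parameter rather than mocking it.
# greeting_for and lookup_name are hypothetical names for illustration.

def greeting_for(user_id, lookup_name):
    """Build a greeting; lookup_name maps a user id to a display name."""
    name = lookup_name(user_id)
    return f"Hello, {name}!" if name else "Hello, guest!"

# In production, lookup_name might wrap a real database query.
# In a unit test, a dict's .get method is a perfectly good stand-in:
users = {42: "Ada"}
assert greeting_for(42, users.get) == "Hello, Ada!"
assert greeting_for(7, users.get) == "Hello, guest!"
```

Because the test never patches anything inside greeting_for, the implementation is free to change, say, which database client it uses, without any test breaking.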

My Answer?

Whenever possible, I write both a unit test and an end-to-end test. I see unit tests as complementary to end-to-end tests. End-to-end tests verify the behaviour of the system as a whole, while unit tests encourage developer creativity and productivity. I embrace the duplication of testing the same functionality from multiple perspectives. I like how unit tests inform software design and organization, and how they keep the code base healthy when they must pass in order to commit code.
