Yes, and no. It all depends on how you do it, AND on what you count as “development time”.

First of all, good automated tests can “massage” your code every day, pretty much as often as desired. They help the team find unintentional mistakes and flawed thinking much earlier than without automation (most non-automated testing is done late in the development process). This allows much earlier correction (before problems accumulate) and also reduces the overall number of bugs. So from this aspect, it can reduce the overall duration of the project very significantly.

If we write automated tests (unit, acceptance, integration, etc.) after the code is written, a lot of the manual work of coding is already done (design, coding, manual debugging, manual developer testing, etc.), so in this case the design and writing of the tests is an additional effort, and yes, it increases the “development time” of that piece of functionality. However, if done well, it can still reduce overall project effort and duration, as mentioned earlier.

If we use the design and incremental creation of tests to _drive_ the coding and development effort, an argument can be made that writing tests actually speeds up development. In this case, the design of the tests (which requires clarity of requirements) also contributes to the design of the code, and much duplication of effort can be avoided. It also pushes toward testable designs, which makes writing the tests easier. People competent in this approach spend less time writing the tests plus the code than they would spend coding, testing, and debugging manually. This test-first approach applies to (almost) all levels of testing.
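To make the test-first flow concrete, here is a minimal sketch using Python’s built-in unittest (the `slugify` function and its behaviour are invented purely for illustration): the tests are written first and run failing, then the simplest code that makes them pass is added.

```python
import unittest

# Step 1 (test-first): describe the desired behaviour before the code exists.
# "slugify" is a made-up example function, not from any real library.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

# Step 2: write the simplest implementation that makes the tests pass.
# (Running the suite before this function existed would fail, which is
# the "red" step that drives the design.)
def slugify(title):
    return title.strip().lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main(exit=False)  # exit=False keeps the process alive after the run
```

Note how writing the assertions first forces you to decide what the function’s interface and behaviour should be before any implementation exists.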

Plus, of course, a robust test suite prevents most of the bugs that would otherwise have been created. The overall development effort can be reduced significantly through the elimination of the “long tail” of manual testing and bug correction, and maintenance of the code is likely to be easier as well.

There is an additional benefit to test-driven approaches that most people are not even aware of.

Let’s say that I need to add a small feature “the traditional way”. Before making the change, I will have to do a potentially significant amount of preliminary study to understand the context of the code and how to safely change it. Then I will make the modification and manually test it. I will also have to do manual testing beyond the immediate change to try to detect whether I’ve caused a regression in some other part of the code. And even then, I can’t be sure I got it all right (and the number of bugs produced by an approach like this is pretty strong evidence that I normally don’t).

In the steps before and after the actual change, I will be spending a significant amount of effort because of fear. I’m afraid that I will mess things up, in some potentially really bad way. So I will take enough time to build the confidence to change the code. That might account for 90% or even more of the overall time to make the change.

Consider, instead, if I had a robust, reliable automated test suite that would, prior to my change, pass 100% and demonstrate (with very high confidence) that the system works. I could probably skip much of the preliminary study and simply implement the most obvious solution (and add tests for it). I would then run the tests, and if they all pass, I would have very high confidence that I haven’t broken anything, and I can check in the code and even deploy it to production.
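A tiny sketch of what that safety net looks like in practice (all names and the flat 20% tax rate are invented for illustration): the existing suite is green before the change, the new feature arrives with its own test, and one run of the whole suite confirms nothing regressed.

```python
import unittest

# An existing, passing test: the "100% green" baseline before my change.
class ExistingBehaviourTest(unittest.TestCase):
    def test_price_includes_tax(self):
        self.assertAlmostEqual(price_with_tax(100), 120.0)

# The new test, added alongside the small feature itself.
class NewFeatureTest(unittest.TestCase):
    def test_discount_is_applied_before_tax(self):
        self.assertAlmostEqual(price_with_tax(100, discount=0.5), 60.0)

def price_with_tax(amount, discount=0.0):
    # The "most obvious solution": apply the discount, then a flat
    # (assumed) 20% tax. Made-up business rule for the example.
    return amount * (1 - discount) * 1.2

if __name__ == "__main__":
    # All green here means: check in with confidence.
    unittest.main(exit=False)
```

The point is that one command replaces both the preliminary study (“what might this break?”) and the manual regression hunt afterwards.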

How much project time could an approach like this save in a project, if everyone used this approach?