A ballpark estimate of the testing effort is relatively simple to obtain: take one third of the effort required for the complete development of the application. Sometimes, however, that total effort is not available, for example when the development and test vendors are different or when the development and testing practices are not integrated. In that situation, the testing team needs to size the application independently, using a technique like Function Points, and then compute the number of test cases from that size.

Two of the best-known techniques come from Capers Jones and David Longstreet. According to Capers Jones,

**Total** number of test cases = (Function Point Count) raised to the power of 1.2.

David Longstreet provides a similar formula for computing the number of UAT test cases:

Total number of **UAT** test cases = 1.2 x (Function Point Count).
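Both rules of thumb are simple enough to compute directly. A minimal sketch (the 500-FP count is just an illustrative input, not a figure from the original):

```python
def jones_total_test_cases(fp):
    """Capers Jones rule of thumb: total test cases = FP ** 1.2."""
    return fp ** 1.2

def longstreet_uat_test_cases(fp):
    """David Longstreet rule of thumb: UAT test cases = 1.2 * FP."""
    return 1.2 * fp

fp = 500  # hypothetical function point count
print(round(jones_total_test_cases(fp)))   # ~1,733 total test cases
print(round(longstreet_uat_test_cases(fp)))  # 600 UAT test cases
```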

As with every empirical model, these should be used as a guide and evaluated against real data. In my own experience with web-based applications (on the Microsoft .NET platform), the numbers are slightly different, and the following approximations fit better:

Total number of test cases = (Function Point Count) raised to the power of 1.05

Total number of UAT test cases = 1.35 x (Function Point Count)
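The adjusted exponents can be sketched the same way; the 1,000-FP input below is illustrative, chosen only to show how the two fits diverge:

```python
def web_total_test_cases(fp):
    """Observed fit for web-based .NET projects: FP ** 1.05."""
    return fp ** 1.05

def web_uat_test_cases(fp):
    """Observed fit for web-based .NET projects: 1.35 * FP."""
    return 1.35 * fp

fp = 1000  # hypothetical function point count
print(round(web_total_test_cases(fp)))  # ~1,413; FP ** 1.2 would give ~3,981
print(round(web_uat_test_cases(fp)))    # 1,350 UAT test cases
```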

Once the count of the test cases has been found, apply the productivity factors for test case authoring and test case execution to arrive at the total testing effort.

The following is the data from a 3,000 function point project, against which the above formulas were tweaked.

| Test Phase  | Test Cases |
|-------------|-----------:|
| Unit        | 13,000     |
| Integration | 5,000      |
| System      | 13,000     |
| UAT         | 4,100      |
| **Total**   | **35,100** |

References:

- *Estimating Test Cases and Defects* by David Longstreet
- *Software Estimation Rules of Thumb* by Capers Jones (PDF)