If, for reasons of cost, space or time, an implementation is not generated with a certified code generator, there is a gap in the formal chain from the model to the actually implemented system. Testing is a way to bridge that gap, but it is time-consuming, costly and incomplete. Here, too, formal verification significantly improves the testing process by complementing any set of test vectors that might have been set up by, e.g., heuristic approaches.
Technically, this Automatic Test Vector Generation (ATG) first computes the coverage achieved by the existing test vector set. Remaining test goals, such as uncovered code, states, conditions or output signals, are derived from this analysis and automatically specified as properties for the model checker. The model checker then iteratively searches for, and finds, a test vector for each remaining test goal.
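The goal-derivation step can be sketched as follows. The goal representation and the `covers` predicate are illustrative assumptions; in a real tool the predicate would be answered by the coverage analysis itself.

```python
def remaining_goals(all_goals, test_vectors, covers):
    """Return the test goals not yet covered by any existing test vector.

    `covers(vector, goal)` is a hypothetical predicate stating whether a
    vector exercises a goal; in practice this is supplied by the coverage
    analysis of the model or generated code.
    """
    return [g for g in all_goals
            if not any(covers(v, g) for v in test_vectors)]
```

Each goal returned here would then be turned into a reachability property for the model checker.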
Since each new test vector may, by chance, cover further test goals as well, the coverage analysis recomputes the set of uncovered test goals after each iteration, so that no test goal is handed to the model checker twice. Because the model checker always finds a test vector for a test goal if one exists, the procedure is guaranteed to make progress until the targeted test vector coverage is reached, even if that target is the entire model.
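The overall iterative loop can be sketched like this. `covers` and `model_check` are hypothetical interfaces (the latter standing in for a model checker that returns a witness trace as a test vector, or nothing if the goal is unreachable), not the API of any particular tool.

```python
def generate_test_vectors(all_goals, existing_vectors, covers, model_check):
    """Iterative ATG loop (sketch, with hypothetical interfaces).

    `covers(vector, goal)` reports whether a vector exercises a goal;
    `model_check(goal)` returns a test vector reaching `goal`, or None
    if the goal is unreachable in the model.
    """
    vectors = list(existing_vectors)
    # Initial coverage analysis: keep only goals no existing vector covers.
    open_goals = [g for g in all_goals
                  if not any(covers(v, g) for v in vectors)]
    while open_goals:
        goal = open_goals.pop(0)
        vector = model_check(goal)
        if vector is None:
            continue  # goal is unreachable in the model; drop it
        vectors.append(vector)
        # Re-run the coverage analysis: the new vector may incidentally
        # cover further goals, so no goal is queried twice.
        open_goals = [g for g in open_goals if not covers(vector, g)]
    return vectors
```

Note that the loop terminates: every iteration removes at least the queried goal from the open set, whether or not a vector is found.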