
Weighted Optimization

Each data set being optimized can be assigned a different weight. For example, suppose data sets A and B are being optimized simultaneously, and A has a weight of 1 while B has a weight of 2. Data set B is treated as twice as important as A, so the optimizer gives B's errors twice the influence of A's when fitting.

In weighted optimization, the optimizer's error vector is weighted directly, so weighting can be used with either absolute-error or percent-error optimization. Because the error vector is used to calculate both the Jacobian (sensitivities) and the terminating conditions, both are affected by the weights as well.
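
The sketch below illustrates this mechanism for the A/B example above; it is not the optimizer's actual implementation, and the residual values and Jacobian are placeholders. Each entry of the stacked error vector is multiplied by its data set's weight, and because each Jacobian row corresponds to one residual, the same scaling carries through to the sensitivities.

```python
import numpy as np

# Placeholder residuals (e.g., percent error) for data sets A and B,
# stacked into a single error vector as a least-squares optimizer sees them.
residual_A = np.array([0.8, -1.2, 0.5])
residual_B = np.array([1.1, -0.4, 0.9])

# Per-point weights: 1 for every point in A, 2 for every point in B.
weights = np.concatenate([
    np.full(residual_A.size, 1.0),
    np.full(residual_B.size, 2.0),
])
error = np.concatenate([residual_A, residual_B])

# The weight multiplies the error vector directly, so B's residuals
# contribute more heavily to the sum-of-squares objective...
weighted_error = weights * error
objective = np.sum(weighted_error ** 2)

# ...and the same scaling propagates into the Jacobian, since each row
# of J belongs to one weighted residual. (J here is a random placeholder
# standing in for the real sensitivities of 2 parameters.)
J = np.random.default_rng(0).normal(size=(error.size, 2))
weighted_J = weights[:, None] * J

print(objective)
```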

During optimization, the weights are normalized so that the largest weight is 1.00; weights of 1.00 and 2.00 therefore have the same effect as 0.50 and 1.00. As a result, data with a normalized weight of less than 1.00 has lower sensitivity to the parameters being optimized and a looser terminating condition. For example, if the specified terminating condition is a one percent error, data with a normalized weight of 0.50 only needs to fall below two percent error for the optimization to terminate.
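A few lines suffice to show the arithmetic; this is a minimal sketch of the normalization and the resulting per-data-set termination thresholds, assuming normalization by the largest weight as described above.

```python
import numpy as np

weights = np.array([1.0, 2.0])          # weights for data sets A and B
normalized = weights / weights.max()    # -> [0.5, 1.0]; same effect as 1.0 and 2.0

termination_error = 1.0                 # specified terminating condition, in percent

# A data set's error only needs to drop below the threshold divided by its
# normalized weight, so lower-weighted data gets a looser criterion.
effective_threshold = termination_error / normalized
print(effective_threshold)              # [2.0, 1.0]: A may stop at 2%, B at 1%
```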
