defaultNumericalPrecision

The default numerical precision: an estimate of the precision expected of a general-purpose numerical computation. For example, two numbers x and y may be considered equal if the relative difference between them is smaller than the default numerical precision. This value is defined as the square root of the machine precision; for IEEE 754 double precision, the machine precision is 2^-52 (about 2.22e-16), so the default numerical precision is about 1.49e-8.
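
The sketch below illustrates how such a tolerance can drive an approximate-equality test, assuming IEEE 754 doubles. The class and method names (Precision, approxEqual, MACHINE_EPSILON, DEFAULT_NUMERICAL_PRECISION) are illustrative, not part of the documented API:

    public final class Precision {
        // Machine precision for an IEEE 754 double: the gap between 1.0
        // and the next representable double, 2^-52 (about 2.22e-16).
        public static final double MACHINE_EPSILON = Math.ulp(1.0);

        // Default numerical precision: the square root of the machine
        // precision (about 1.49e-8).
        public static final double DEFAULT_NUMERICAL_PRECISION =
                Math.sqrt(MACHINE_EPSILON);

        // Two numbers are considered equal when the relative difference
        // between them is smaller than the default numerical precision.
        public static boolean approxEqual(double x, double y) {
            double scale = Math.max(Math.abs(x), Math.abs(y));
            return Math.abs(x - y) <= DEFAULT_NUMERICAL_PRECISION * scale;
        }
    }

With these definitions, approxEqual(1.0, 1.0 + 1e-10) returns true (relative difference 1e-10 is below the tolerance), while approxEqual(1.0, 1.0001) returns false.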