In floating-point formats, numbers are represented with a sign s, a significand (also called a fraction) f, and an exponent e. E.g., with binary floating-point, the value represented by s, f, and e is (−1)^s • f • 2^e.
f is restricted to a certain number of digits and, in a floating-point format with base two, is typically required to be at least one and less than two. The smallest change that can be made in the number (with certain exceptions discussed below) is to modify the last digit of f by 1. For example, if f is restricted to six binary digits, then it has values from 1.00000 to 1.11111, and the smallest change that can be made in it is 0.00001. Given the exponent e, a change of 0.00001 in f changes the value represented by 0.00001 • 2^e. This is the unit of least precision (ULP).
Note that the ULP varies depending on the exponent.
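For concreteness, here is a minimal sketch in Python (assuming version 3.9+, where `math.ulp` is available) showing how the ULP of an IEEE-754 binary64 value grows with the exponent:

```python
import math

# math.ulp gives the ULP of a float: the value of the lowest digit of
# its significand, which scales with the exponent.
for x in [1.0, 2.0, 3.0, 1024.0, 1e-300]:
    print(x, math.ulp(x))

# 2.0 and 3.0 share an exponent, so they share an ULP, which is twice
# the ULP of 1.0.
print(math.ulp(2.0) == math.ulp(3.0) == 2 * math.ulp(1.0))  # True
```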
The exceptions I mentioned occur at the largest representable finite value (where the number can only be increased by producing infinity), at the smallest (most negative) representable finite value, at zero and subnormal numbers (where special things happen with the fraction and the exponent), and at boundaries where the exponent changes. When you step down across one of those boundaries, the exponent decreases, so the value of the least significant digit of f decreases, and the step is actually ½ of the old ULP.
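A sketch of that boundary behavior, again in Python 3.9+ (`math.nextafter` steps to the adjacent representable value):

```python
import math

# Stepping down from 2.0 crosses an exponent boundary: the gap to the
# next value below 2.0 is half the gap to the next value above it.
up   = math.nextafter(2.0, math.inf) - 2.0  # one ULP of 2.0 (2**-51)
down = 2.0 - math.nextafter(2.0, 0.0)       # step below the boundary (2**-52)
print(down == up / 2)  # True
```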
When a single operation is limited only by which numbers the floating-point system can represent in its finite range (rather than exceeding that range), the maximum error in a result is ½ of an ULP. This is because, if you were further than ½ of an ULP from the mathematically exact result, you could alter the calculated result by 1 ULP so that its error decreases in magnitude. (E.g., if an exact result is 3.75, changing the computed result from 3 to 4 changes the error from .75 to .25.)
Elementary arithmetic operations, such as addition, multiplication, and division, should provide results rounded to the nearest representable result, so their errors are at most ½ of an ULP. Square root should also be implemented that way. It is a goal for math library functions (such as cosine and logarithm) to provide correctly rounded results, but correct rounding is hard to achieve, so commercial libraries generally do not guarantee it.
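The ½ ULP bound for a correctly rounded operation can be checked with exact rational arithmetic; a sketch in Python, using `fractions.Fraction` to form the exact sum of two doubles:

```python
import math
from fractions import Fraction

# If + rounds to nearest, the computed sum lies within half an ULP of
# the exact (rational) sum of the two operands.
x, y = 0.1, 0.2
computed = x + y
exact = Fraction(x) + Fraction(y)        # exact sum of the two doubles
error = abs(Fraction(computed) - exact)
print(error <= Fraction(math.ulp(computed)) / 2)  # True
```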
Conversions from decimal numerals (e.g., in ASCII text) to an internal floating-point format ought to be correctly rounded as well, but not all software libraries or language implementations do this correctly.
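A quick check of one such conversion, assuming CPython (whose float constructor does round correctly):

```python
import math
from fractions import Fraction

# If the conversion is correctly rounded, the stored double is within
# half an ULP of the true decimal value 1/10.
stored = float("0.1")
error = abs(Fraction(stored) - Fraction("0.1"))  # Fraction("0.1") is exactly 1/10
print(error <= Fraction(math.ulp(stored)) / 2)   # True
```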
Compound operations, such as subroutines that perform many calculations to produce a result, accumulate many rounding errors and generally will not return a result that is within ½ of an ULP of the mathematically exact result.
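For example, summing ten copies of the double nearest 0.1 rounds after each addition, and the final result lands several half-ULPs away from the exact sum; a sketch:

```python
import math
from fractions import Fraction

total = 0.0
for _ in range(10):
    total += 0.1                      # each += may round

exact = 10 * Fraction(0.1)            # exact sum of the ten doubles
error = abs(Fraction(total) - exact)
half_ulp = Fraction(math.ulp(total)) / 2
print(total)                          # 0.9999999999999999
print(error / half_ulp)               # 3: the result is three half-ULPs off
```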
Note that it is not technically correct to say the size of the error when representing fractions is proportional to the size of the number stored. The bound on the error is roughly proportional: ½ of an ULP is a bound on the error, and an ULP is roughly proportional to the number. It is only roughly proportional because it varies by a factor of two (when using binary) as the fraction ranges from one to two. E.g., 1 and 1.9375 have the same ULP because they use the same exponent, but that ULP is a larger proportion of 1 than it is of 1.9375.
And only the bound on the error is roughly proportional. The actual error depends on the numbers involved. E.g., if we add 1 and 1, we get 2 with no error.
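Both points are visible directly; a small sketch comparing the relative ULPs of 1 and 1.9375 and showing an addition with no error at all:

```python
import math

# Same exponent, same ULP, but a different proportion of each number.
print(math.ulp(1.0) == math.ulp(1.9375))  # True
print(math.ulp(1.0) / 1.0)                # larger relative ULP
print(math.ulp(1.9375) / 1.9375)          # smaller relative ULP

# The actual error depends on the operands: 1 + 1 is exactly 2.
print(1.0 + 1.0 == 2.0)                   # True, no rounding error
```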