I've been trying to find you a nice link where this is explained. I'll keep trying, but in the meantime here is a short answer.
This is an artefact of representing decimal fractions in binary.
In any base system, not all fractions can be expressed exactly in a finite number of digits. Think of 1/3 in base 10. You need an infinite series of 3s after the decimal point to express this fraction exactly. Any finite expression is approximate.
That is the case because the decimal digits in base 10 express inverse powers of 10 (a/10^1 + b/10^2 + c/10^3 + ...), and there is no finite sum of inverse powers of 10 that equals exactly 1/3.
On the other hand, 1/2 can be expressed exactly in decimal digits, because 5/10^1 is exactly 1/2.
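If it helps to see that concretely, here is a quick sketch in Python (my choice of tool, nothing to do with your system) using exact rational arithmetic:

```python
from fractions import Fraction

third = Fraction(1, 3)

# Partial sums 3/10 + 3/100 + 3/1000 + ... creep toward 1/3 but never reach it
approx = Fraction(0)
for k in range(1, 6):
    approx += Fraction(3, 10 ** k)
    print(approx, approx == third)   # 3/10 False, 33/100 False, 333/1000 False, ...

# One decimal digit is enough for 1/2, because 5/10 is exactly 1/2
print(Fraction(5, 10) == Fraction(1, 2))   # True
```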
The same thing is true in binary, but for inverse powers of 2. 1/2 can be expressed exactly, because 1/2^1 is (trivially) 1/2. But 1/10 or 2/10 cannot, because there is no finite series a/2^1 + b/2^2 + c/2^3 + ... which sums to exactly 1/10 or 2/10. Those two numbers can be expressed exactly in a finite expansion in base 10, but not in base 2. The same is true for many other fractions.
(This can be expressed better in terms of primes: a fraction in lowest terms has a finite expansion in base b exactly when every prime factor of its denominator divides b. Since 10 = 2 × 5, base 10 can finish off any denominator built from 2s and 5s; base 2 can only finish off denominators that are powers of 2.)
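Here is that test written out as a small Python helper (the function name is mine, just for illustration):

```python
from fractions import Fraction
from math import gcd

def terminates_in_base(frac: Fraction, base: int) -> bool:
    """True when frac has a finite expansion in the given base, i.e. when
    every prime factor of the reduced denominator also divides the base."""
    d = frac.denominator          # Fraction is always kept in lowest terms
    g = gcd(d, base)
    while g > 1:                  # strip out every factor the base can absorb
        d //= g
        g = gcd(d, base)
    return d == 1

print(terminates_in_base(Fraction(1, 2), 2))      # True:  1/2 is exact in binary
print(terminates_in_base(Fraction(1, 10), 2))     # False: 1/10 is not
print(terminates_in_base(Fraction(1, 10), 10))    # True:  but it is exact in decimal
print(terminates_in_base(Fraction(251, 100), 2))  # False: 2.51 has no exact binary form
```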
Back to your example, 2.51 (in base 10). Can it be expressed exactly in base 2? No. 2.51 is 251/100, and the denominator 100 = 2^2 × 5^2 contains the prime 5, so no finite string of binary digits represents it exactly--not in FLOAT32, and not in FLOAT64 either.
When you type decimal 2.51 and it is stored as FLOAT32, it becomes the nearest binary approximation available in that type, which is not exact. Conversion back to decimal shows a slightly different number than what was entered.
When you type decimal 2.51 and it is stored as FLOAT64, the same thing happens, only with about 53 significant bits instead of 24. The stored value is still not exactly 2.51, but the error is so small (around the sixteenth significant digit) that when it is converted back to decimal and printed with the usual number of digits, it comes back as 2.51 and the loss of precision is invisible.
You don't have to take my word for it: the actual stored values are easy to inspect.
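Here is a quick check in Python, using NumPy for the 32-bit type (that part is an assumption on my side; it should match your FLOAT32/FLOAT64 if they are the usual IEEE 754 single- and double-precision types):

```python
import numpy as np
from decimal import Decimal

x32 = np.float32(2.51)   # nearest FLOAT32 (binary32) value to 2.51
x64 = np.float64(2.51)   # nearest FLOAT64 (binary64) value to 2.51

# Decimal(float) shows the exact binary value that is stored, with no extra rounding
print(Decimal(float(x32)))   # close to 2.51, but off around the 8th significant digit
print(Decimal(float(x64)))   # much closer, but still not exactly 2.51

# Converting back to decimal text
print(float(x32))   # the float32 error is large enough to show up here
print(float(x64))   # prints 2.51: the float64 error is below what gets displayed
```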