Performance of Floating-Point Data Types - Test

This section describes the results of a performance comparison of the floating-point data types 'float', 'double', and 'decimal'.

If you run the tutorial example presented in the previous section, you will get this output:

Float: time = 00:00:00.8412264
Input = 0.3333333
Output = 0.3333334
Double: time = 00:00:01.0014600
Input = 0.333333333333333
Output = 0.333333333333333
Decimal: time = 00:00:27.5301354
Input = 0.3333333333333333333333333333
Output = 0.3333333333158765326639153057
Accuracy:
Float = 1.788139E-07
Double = -9.99200722162641E-16
Decimal = -5.23704020082540828E-11
Performance:
Float = 0.84
Double = 1
Decimal = 27.49
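
The code itself lives in the previous section. For readers who want a self-contained picture of what such a benchmark can look like, here is a rough sketch of my own; the class name, the iteration count, the multiply-and-divide loop body, and the accuracy formula are all assumptions on my part, and the original example may differ in any of them.

using System;
using System.Diagnostics;

class FloatingPointBenchmark
{
    // Assumed iteration count; the original example may use a different one.
    const int Iterations = 10000000;

    static void Main()
    {
        // Start from 1/3 in each type, then repeatedly multiply and divide
        // by 3 so that, in exact arithmetic, the value would never change.
        float fInput = 1f / 3f;
        double dInput = 1.0 / 3.0;
        decimal mInput = 1m / 3m;

        Stopwatch sw = Stopwatch.StartNew();
        float f = fInput;
        for (int i = 0; i < Iterations; i++)
            f = f * 3f / 3f;
        sw.Stop();
        Console.WriteLine("Float: time = {0}", sw.Elapsed);
        Console.WriteLine("Input = {0}", fInput);
        Console.WriteLine("Output = {0}", f);

        sw = Stopwatch.StartNew();
        double d = dInput;
        for (int i = 0; i < Iterations; i++)
            d = d * 3.0 / 3.0;
        sw.Stop();
        Console.WriteLine("Double: time = {0}", sw.Elapsed);
        Console.WriteLine("Input = {0}", dInput);
        Console.WriteLine("Output = {0}", d);

        sw = Stopwatch.StartNew();
        decimal m = mInput;
        for (int i = 0; i < Iterations; i++)
            m = m * 3m / 3m;
        sw.Stop();
        Console.WriteLine("Decimal: time = {0}", sw.Elapsed);
        Console.WriteLine("Input = {0}", mInput);
        Console.WriteLine("Output = {0}", m);

        // Relative drift from the starting value, corresponding to the
        // "Accuracy" figures in the output above.
        Console.WriteLine("Accuracy:");
        Console.WriteLine("Float = {0}", (f - fInput) / fInput);
        Console.WriteLine("Double = {0}", (d - dInput) / dInput);
        Console.WriteLine("Decimal = {0}", (m - mInput) / mInput);
    }
}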

Question 1: Why does the float type take almost the same amount of time as the double type?

This is expected. My guess is that today's processors are designed around the double type by default, so an arithmetic operation on a double probably takes the same number of processing cycles at the machine level as the same operation on a float.

Question 2: Why does the decimal data type take about 27 times longer than the double data type?

I don't have a complete answer. My suspicion is that decimal is a 128-bit type whose arithmetic is carried out by software routines in the runtime, while float and double arithmetic is executed directly by the processor's floating-point hardware.
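
As a small, self-contained illustration of that point (my own, not part of the tutorial), decimal.GetBits exposes the 128-bit layout that every decimal operation has to manipulate in software:

using System;

class DecimalLayout
{
    static void Main()
    {
        // A decimal value is a 96-bit integer coefficient plus a sign bit
        // and a decimal scale factor, packed into four 32-bit integers.
        int[] parts = decimal.GetBits(1m / 3m);
        Console.WriteLine("low   = {0}", parts[0]);
        Console.WriteLine("mid   = {0}", parts[1]);
        Console.WriteLine("high  = {0}", parts[2]);
        Console.WriteLine("flags = 0x{0:X8}", parts[3]); // sign and scale (scale = 28 here)
    }
}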

Question 3: Why is the result of the decimal data type much less accurate than that of the double data type?

This is a big surprise to me. Can anyone answer this?
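
One mechanism that could contribute, shown in the sketch below (my own illustration, not from the tutorial), is that rounding errors behave differently in the two types when a value is round-tripped: in binary double arithmetic, multiplying the rounded 1/3 back by 3 happens to round to exactly 1.0, while in decimal the digits lost when 1/3 was rounded to 28 places are simply gone. Whether this fully accounts for the figures above depends on what the loop in the previous section actually does.

using System;

class RoundTripComparison
{
    static void Main()
    {
        // double: 1/3 rounds to the nearest binary fraction, and multiplying
        // by 3 rounds the result back to exactly 1.0.
        double d = 1.0 / 3.0;
        Console.WriteLine(d * 3.0 == 1.0);   // True

        // decimal: 1/3 is rounded to 28 decimal digits, and the lost digits
        // never come back; the error stays visible and can accumulate.
        decimal m = 1m / 3m;
        Console.WriteLine(m * 3m);           // 0.9999999999999999999999999999
        Console.WriteLine(m * 3m == 1m);     // False
    }
}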

If you are a designer, wait for a good answer to this question before allowing any developer to use "decimal".