Accuracy of "decimal" Data Type

This section provides a tutorial example on how to compare the accuracy of 'decimal' data type operations with 'double' data type operations.

How much more accurate is "decimal" than "double"? Hopefully, the following program will help us answer this question:

// Accuracy.cs
// Copyright (c) 2006 HerongYang.com. All Rights Reserved.

using System;
class Accuracy {
   static void Main() {
      long m = 1000;   // number of tests
      long n = 85;     // number of multiplications (and divisions) per test
      long i;
      double ds, df, da, dmin, dave, dmax;
      decimal ms, mf, ma, mmin, mave, mmax;
      dave = 0.0d;
      mave = 0.0m;
      dmin = double.MaxValue;
      mmin = decimal.MaxValue;
      dmax = 0.0d;
      mmax = 0.0m;
      Random r = new Random();
      for (i=0; i<m; i++) {
         ds = 0.5d + r.NextDouble()/2;   // random start value in [0.5, 1.0)
         df = 0.5d + r.NextDouble()/2;   // random factor in [0.5, 1.0)
         ms = (decimal)ds;
         mf = (decimal)df;
         da = LoopDouble(n, ref ds, ref df);
         ma = LoopDecimal(n, ref ms, ref mf);
         dave += da;
         mave += ma;
         if (da<dmin) dmin = da;
         if (ma<mmin) mmin = ma;
         if (da>dmax) dmax = da;
         if (ma>mmax) mmax = ma;
      }
      dave = dave/m;
      mave = mave/m;
      Console.WriteLine("# of tests: {0}", m);
      Console.WriteLine("# of operations: {0}", n);
      Console.WriteLine("Accuracy with double:");
      Console.WriteLine(" Average: {0}", dave);
      Console.WriteLine(" Minimum: {0}", dmin);
      Console.WriteLine(" Maximum: {0}", dmax);
      Console.WriteLine("Accuracy with decimal:");
      Console.WriteLine(" Average: {0}", mave);
      Console.WriteLine(" Minimum: {0}", mmin);
      Console.WriteLine(" Maximum: {0}", mmax);
   }
   // Multiplies s by f n times, then divides by f n times, and returns
   // the relative error between the final value and the original value.
   private static double LoopDouble(long n, ref double s, ref double f) {
      long j;
      double o = s;
      for (j=1; j<=n; j++) {
         s = s*f;
      }
      for (j=1; j<=n; j++) {
         s = s/f;
      }
      return Math.Abs((s-o)/s);
   }
   // Same test as LoopDouble, but with "decimal" values and operations.
   private static decimal LoopDecimal(long n, ref decimal s, ref decimal f) {
      long j;
      decimal o = s;
      for (j=1; j<=n; j++) {
         s = s*f;
      }
      for (j=1; j<=n; j++) {
         s = s/f;
      }
      return Math.Abs((s-o)/s);
   }
}

Output:

# of tests: 1000
# of operations: 85
Accuracy with double:
 Average: 4.37517929465509E-16
 Minimum: 0
 Maximum: 1.97608339459022E-15
Accuracy with decimal:
 Average: 0.0000257496045277392923596642
 Minimum: 0
 Maximum: 0.0032269411262819481637932532

Well, the result is very surprising and disappointing. A value between 0.5 and 1.0 is multiplied by another value in the same range 85 times, then divided by that second value 85 times. If all the operations were 100% accurate, the resulting value should be the same as the first value. However, this program shows that the "double" values and operations did pretty well, with relative errors on the order of 1.0E-16, compared to 1.0E-5 for "decimal" values and operations.
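
A hint at what is happening can be seen by watching how small the intermediate products become. The following sketch is not part of the test program above; it simply multiplies 1.0 by 0.5, the smallest factor the test can draw, 85 times with both types:

// ShrinkProbe.cs - illustrative sketch, not from the test program above
using System;
class ShrinkProbe {
   static void Main() {
      double d = 1.0d;
      decimal m = 1.0m;
      for (int j = 1; j <= 85; j++) {
         d = d*0.5d;   // "double" keeps about 16 significant digits at any magnitude
         m = m*0.5m;   // "decimal" allows at most 28 digits after the decimal point
      }
      Console.WriteLine("double  0.5^85: {0}", d);
      Console.WriteLine("decimal 0.5^85: {0}", m);
   }
}

The "double" result stays near 2.58E-26 with its full relative precision, while the "decimal" result is forced onto a multiple of 1.0E-28, decimal's smallest step, so only a few significant digits survive the 85 multiplications.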

So, what's going on here? What's the benefit of "decimal"?
