[ODE] deterministic simulations

Jon Watte (ODE) hplus-ode at mindcontrol.org
Thu Jun 22 01:27:02 MST 2006



J. Eric Coleman wrote:
> How do you account for variances in the internal floating point 
> accuracy of processors?  I thought Intel processors did 32-bit floating 
> point math internally at 36 bits (or something slightly larger than 
> 32).  And I'm pretty sure this is different from the way AMD performs 
> floating point math.
>   

The Intel FPU can do floating point at "32" bits, "64" bits, or "80" 
bits. The AMD FPU can do it at "32" or "64" bits. (I'm putting the bits 
in quotes because it's really the size of the mantissa that counts: 24, 
53, and 64 bits, respectively.)

By default, the Intel FPU starts up in 80-bit mode, and the AMD in 
64-bit mode. You can change this mode with various user-mode CPU 
instructions, which some compilers' C runtime libraries also wrap 
(_controlfp(), for example). Also, some libraries are known to change 
these modes out from under a running program (DirectX is a common 
culprit).

Thus, your best bet is to explicitly set the CPU precision and rounding 
mode to whatever you need right before you start calling dCollide() or 
dWorldQuickStep() each frame. Or, to save some stalls, test whether 
they're already what you want, and set them only if they're not.
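
Something along these lines (again a minimal, MSVC-specific sketch, and 
x87-only -- it has no effect on SSE code; step_world() and its 
arguments are placeholder names, with dSpaceCollide()/dWorldQuickStep() 
standing in for whatever per-frame ODE calls you make):

#include <float.h>
#include <ode/ode.h>

/* Force 24-bit precision and round-to-nearest, but only write the
   control word when it differs from what we want; a redundant
   FLDCW is exactly the kind of stall we're trying to avoid. */
static void ensure_fpu_mode(void)
{
    const unsigned int want = _PC_24 | _RC_NEAR;
    const unsigned int mask = _MCW_PC | _MCW_RC;
    if ((_controlfp(0, 0) & mask) != want)
        _controlfp(want, mask);
}

/* Call once per frame, before the physics work. */
void step_world(dSpaceID space, dWorldID world, dReal step_size,
                void *data, dNearCallback *near_callback)
{
    ensure_fpu_mode();   /* undo anyone else's mode changes */
    dSpaceCollide(space, data, near_callback);
    dWorldQuickStep(world, step_size);
}

Note that _PC_24 matches single-precision (32-bit) floats; if you build 
ODE with dDOUBLE, you'd want _PC_53 instead.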

However, this still doesn't solve the problem that the PPC FPU is a 
different implementation from the x87, and may get different results 
for the same input data, especially for things like trig and square 
roots.

Cheers,

             / h+


