Francisco Leon projectileman at yahoo.com
Mon Nov 27 08:02:46 MST 2006

ODE makes heavy use of macros for vector operations, so it
can take advantage of SIMD instructions implicitly.

Modern compilers can translate those macros into SIMD
instructions automatically if you configure the project to
generate MMX/Pentium 3 code.

For example, you can configure the MinGW compiler to
generate machine code for Pentium 3 or Athlon platforms,
which can produce fast code, since the compiler decides
where to place the SIMD instructions in the generated code.

In my own benchmarking I've found that a vector library
based on C++ templates (like Blitz++) is often faster than
libraries that use hand-written SIMD assembly (like the
NVEC or SIMDx86 libraries).

So I think it would be better to leave the low-level
optimizations to the compiler. The Microsoft people are
wise enough to put all the necessary gadgets into their
C++ compiler. Just trust your compiler, it is more
intelligent than you! ;)

One way to optimize ODE is to reduce the use of linked
lists and pack structures into contiguous arrays. That
helps the CPU because it reduces cache misses.

We can also optimize ODE by using alternative
implementations of sqrt and the trigonometric functions.
Precomputing sine and cosine tables and using the GIMPACT
sqrt function would be helpful.

--- Hristo Hristov <hhristov at delin.org> wrote:

> Well, I'm afraid I'm not too deep in the ODE source at all, and I'll
> probably fail if I try to do that by myself. It would be good to make a
> plan first, i.e. decide exactly what is needed: SIMD support in the ODE
> math only, or a deeper change. For changes to the collision libraries, I
> think it is better to ask the authors of GIMPACT and OPCODE, or
> developers who know exactly where we can speed things up.
> Hristo.

"Technocracy Rules with Supremacy"
Visit http://gimpact.sourceforge.net

