[ODE] Ideas for threading ODE...

Matthew Harmon matt at ev-interactive.com
Mon Aug 22 12:45:51 MST 2005


> If that FIFO gets full (i.e., you push more data at the card than it can
> handle) then the graphics driver will likely spin-wait for the queue to
> drain enough to put in the next command.

Yup...  and you wouldn't even want to get any further ahead in rendering
anyway, since it adds input lag for those of us with cat-like reflexes.
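
To make that concrete, here is a rough sketch (C++, with placeholder functions
I made up; not ODE and not any real graphics API) of capping how far ahead the
CPU may run.  The render loop only proceeds when the GPU signals that a queued
frame has drained, so input is sampled at most kMaxFramesInFlight frames
before it reaches the screen:

#include <semaphore>   // C++20

constexpr int kMaxFramesInFlight = 2;        // assumption: allow at most 2 queued frames
std::counting_semaphore<kMaxFramesInFlight> frame_slots(kMaxFramesInFlight);

void read_input() {}                         // placeholder: sample controller/mouse state
void build_and_submit_frame() {}             // placeholder: push draw commands at the driver

// Hook this up to whatever completion mechanism the API offers (fence, callback, query).
void on_gpu_frame_complete() { frame_slots.release(); }

void render_loop()
{
    for (;;) {
        frame_slots.acquire();               // block if already kMaxFramesInFlight ahead
        read_input();                        // sample input as late as possible
        build_and_submit_frame();            // then hand the frame to the driver's FIFO
    }
}

The exact mechanism (fences, queries, swap-chain depth) depends on the API;
the point is just that the bound is small and explicit.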

> Why wouldn't the driver use an event/semaphore like a good process is
> supposed to? Because the scheduling jitter for events/semaphores is
> astronomical compared to typical graphics benchmark frame rates.

Yeah... but I don't think they do this just for benchmarks; it's really the
philosophy.  It's helpful to think of the graphics card as a separate computer
on a 100% reliable network.  For the best frame rate, you wouldn't wait for an
ack from the graphics card before you rendered the next frame... you'd just
keep telling it what to do as fast as you could.

And in the end it's really all about BALANCE: keeping all your "computers"
working whenever possible or needed.  That includes your CPUs, video card,
hard drives and network hardware.  (It's amazing how many people think they
need one thread per network connection when the hardware DMAs packets into
RAM without involving the CPU anyway!)
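
On that thread-per-connection point, a single thread with poll() will happily
service any number of sockets; by the time poll() returns, the NIC has already
DMA'd the data into RAM.  A minimal sketch (POSIX sockets, C++;
handle_readable() is a made-up placeholder):

#include <poll.h>
#include <unistd.h>
#include <vector>

void handle_readable(int fd)                 // placeholder: parse whatever arrived
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof buf);   // the data is already sitting in RAM
    (void)n;                                 // real code would dispatch the message here
}

void service_connections(const std::vector<int>& fds)
{
    std::vector<pollfd> pfds;
    for (int fd : fds)
        pfds.push_back(pollfd{fd, POLLIN, 0});

    for (;;) {
        poll(pfds.data(), pfds.size(), -1);  // one thread sleeps until any socket is ready
        for (const pollfd& p : pfds)
            if (p.revents & POLLIN)
                handle_readable(p.fd);
    }
}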

Worse still, all of this is complicated by the mishmash of hardware in an end
user's computer.  For some games, the CPU practically doesn't matter if you
have a fast graphics card.  Throw physics into the mix, possibly on a
dedicated card, and our jobs get much more complex!



-----Original Message-----
From: ode-bounces at q12.org [mailto:ode-bounces at q12.org] On Behalf Of Jon
Watte (ODE)
Sent: Monday, August 22, 2005 11:51 AM
To: Jaroslav Sinecky
Cc: ode
Subject: Re: [ODE] Ideas for threading ODE...


> Has someone else tried this? Is my assumption about the CPU possibly being
> idle during rendering wrong?

Yes. "rendering" just means queuing data in a big FIFO memory area for 
the graphics card to pick up. If that FIFO gets full (i e, you push more 
data at the card than it can handle) then the graphics driver will 
likely spin-wait for the queue to drain enough to put in the next command.
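
For illustration only, the kind of single-producer/single-consumer command
ring being described might look like the sketch below; the names, size and
memory-ordering choices are made up, not taken from any real driver.  Note how
submit() simply spins when the ring is full:

#include <atomic>
#include <cstddef>
#include <cstdint>

struct Command { uint32_t opcode; uint32_t payload; };   // stand-in for real command packets

constexpr size_t kRingSize = 1024;                       // assumption: power-of-two ring
Command             ring[kRingSize];
std::atomic<size_t> head{0};     // next slot the producer (application) writes
std::atomic<size_t> tail{0};     // next slot the consumer (the "card") reads

void submit(const Command& cmd)              // producer side: the render thread
{
    size_t h = head.load(std::memory_order_relaxed);
    while (h - tail.load(std::memory_order_acquire) == kRingSize)
        ;                                    // FIFO full: busy-wait until a slot drains
    ring[h % kRingSize] = cmd;
    head.store(h + 1, std::memory_order_release);
}

bool consume(Command& out)                   // consumer side: drains the FIFO
{
    size_t t = tail.load(std::memory_order_relaxed);
    if (head.load(std::memory_order_acquire) == t)
        return false;                        // nothing queued
    out = ring[t % kRingSize];
    tail.store(t + 1, std::memory_order_release);
    return true;
}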

Why wouldn't the driver use an event/semaphore like a good process is 
supposed to? Because the scheduling jitter for events/semaphores is 
astronomical compared to typical graphics benchmark frame rates. They 
get better frame rates (and thus reviewer scores) by spinning. In fact, 
if they were using kernel primitives to suspend the thread, the 
resulting jerkiness in frame rate might actually be noticeable to mere 
mortals (not just benchmarking programs), so I can understand their decision.
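
For comparison, the "well-behaved" version would park the producer on a kernel
primitive instead of spinning.  A sketch with std::condition_variable (names
made up, same caveats as above); every wait/notify here goes through the
scheduler, and that wakeup latency is exactly the jitter being described:

#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

struct Command { unsigned opcode; unsigned payload; };

constexpr size_t kMaxQueued = 1024;
std::deque<Command>     commands;
std::mutex              m;
std::condition_variable not_full;
std::condition_variable not_empty;

void submit(const Command& cmd)              // producer: render thread
{
    std::unique_lock<std::mutex> lock(m);
    not_full.wait(lock, [] { return commands.size() < kMaxQueued; });  // sleep in the kernel
    commands.push_back(cmd);
    not_empty.notify_one();
}

Command consume()                            // consumer: the "driver" side
{
    std::unique_lock<std::mutex> lock(m);
    not_empty.wait(lock, [] { return !commands.empty(); });
    Command cmd = commands.front();
    commands.pop_front();
    not_full.notify_one();                   // the producer wakes up... eventually
    return cmd;
}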

Cheers,

			/ h+

_______________________________________________
ODE mailing list
ODE at q12.org
http://q12.org/mailman/listinfo/ode