[ODE] LCP solution methods

David Black dblack at fastmail.fm
Wed Jan 5 21:17:59 MST 2005


Hi,

So what you are looking for is basically the holy grail of rigid body 
physics simulation? :-)

IIRC ODE uses either a Dantzig-based algorithm (step.cpp) or one based on 
successive over-relaxation (quickstep.cpp?).
You can find a description in Murty's book on LCPs, which is 
available online somewhere (I could hunt out a link if you don't have it).
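
For what it's worth, the core of a projected Gauss-Seidel/SOR iteration 
(the kind of thing the SOR stepper is built around, and which you describe 
below) fits in a few lines. A minimal dense sketch - the names and the 
dense storage are mine for brevity, not ODE's:

#include <vector>
#include <algorithm>
#include <cstddef>

// Projected Gauss-Seidel/SOR sweeps for the LCP
//   w = A x + b,  x >= 0,  w >= 0,  x^T w = 0
// A is stored dense and row-major purely for brevity; a real solver
// would iterate over sparse Jacobian rows instead.
void projected_sor(const std::vector<double>& A,  // n*n matrix, row-major
                   const std::vector<double>& b,  // n
                   std::vector<double>& x,        // n, initial guess / warm start
                   std::size_t n,
                   int sweeps = 30,
                   double omega = 1.0)            // 1.0 == plain Gauss-Seidel
{
    for (int s = 0; s < sweeps; ++s) {
        for (std::size_t i = 0; i < n; ++i) {
            double r = b[i];                      // r = b_i + sum_{j != i} A_ij x_j
            for (std::size_t j = 0; j < n; ++j)
                if (j != i) r += A[i*n + j] * x[j];
            double xi = -r / A[i*n + i];          // unconstrained Gauss-Seidel update
            xi = x[i] + omega * (xi - x[i]);      // SOR relaxation
            x[i] = std::max(0.0, xi);             // projection onto x >= 0
        }
    }
}

A real implementation would sweep over the sparse Jacobian rows directly 
and clamp against friction bounds as well, but the structure stays this 
simple.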

Anyway, 10,000 constraints is an awful lot of tightly coupled objects; 
the best examples I can think of are big heaps of things (e.g. sand in an 
hourglass). If this is what you need, then maybe some sort of particle 
system, finite element or flexible body approach would be more appropriate?

Otherwise I would suggest looking at ways to reduce the problem size. 
For instance, ODE processes one island at a time, I think, i.e. it only 
forms the Jacobian matrices for groups of, say, 50 bodies which are 
directly connected by joints.
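
Roughly, island splitting just means grouping bodies that are directly or 
indirectly connected by joints/contacts and building a separate LCP for 
each group. A sketch using union-find (the types and names here are 
illustrative, not ODE's actual API):

#include <vector>
#include <numeric>

struct Joint { int bodyA, bodyB; };   // indices into the body array

// Union-find with path compression: bodies joined by any chain of
// joints end up with the same representative.
struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int i) { return parent[i] == i ? i : parent[i] = find(parent[i]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// Returns an island id per body; bodies sharing an id form one
// coupled problem that can be solved independently of the rest.
std::vector<int> build_islands(int numBodies, const std::vector<Joint>& joints)
{
    UnionFind uf(numBodies);
    for (const Joint& j : joints)
        uf.unite(j.bodyA, j.bodyB);
    std::vector<int> island(numBodies);
    for (int i = 0; i < numBodies; ++i)
        island[i] = uf.find(i);
    return island;
}

(One gotcha: don't let static geometry merge everything into one island - 
skip joints whose other body is the world.)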

Also, if you can't take advantage of temporal coherence (warm starting), 
then things must be moving and bouncing around at a fair old whack. 
Maybe an impulse-based contact approach would be better?

Are most of your joints equality (e.g. ball and socket) or 
inequality (contact) constraints? You may like to try warm starting your 
LCP solver with the results of a solver geared towards solving the 
equality constraints (and/or vice versa)...
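
One way to read that suggestion, as a sketch (this assumes the 
projected_sor() routine above and a per-row flag marking the contact 
rows - both names are mine): solve the system once treating every 
constraint as an equality, clamp the contact multipliers to be 
non-negative, and use the result as the starting point for the projected 
solver.

#include <vector>
#include <cstddef>

void warm_start_from_equality(const std::vector<double>& A,   // n*n, row-major
                              const std::vector<double>& b,   // n
                              const std::vector<bool>& isContactRow,
                              std::vector<double>& x,         // n, in/out
                              std::size_t n)
{
    // A handful of *unprojected* Gauss-Seidel sweeps approximates the
    // equality solution A x = -b without forming any factorization.
    for (int s = 0; s < 10; ++s)
        for (std::size_t i = 0; i < n; ++i) {
            double r = b[i];
            for (std::size_t j = 0; j < n; ++j)
                if (j != i) r += A[i*n + j] * x[j];
            x[i] = -r / A[i*n + i];
        }
    // Contact multipliers must be non-negative; bilateral rows are left alone.
    for (std::size_t i = 0; i < n; ++i)
        if (isContactRow[i] && x[i] < 0.0) x[i] = 0.0;
    // x is now the initial guess for the projected solver:
    //   projected_sor(A, b, x, n);
}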

I think the biggest performance gains come from:

* splitting the problem
* exploiting temporal coherence
* building a hybrid/iterative solver
* reducing the number of bodies or joints (well, duh, but see whether you 
can achieve the same effect with fewer)
* sleeping/de-activating bodies which are not interacting (one possible 
policy is sketched just after this list)
* designing the other parts to allow you to take much larger steps while 
maintaining the illusion of interactivity (who says every group of 
objects has to be simulated at the same rate? :-)
* a robust/optimal collision system (3 contacts good, 20 bad...)
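
On the sleeping point, the usual test is just "has this body been nearly 
still for long enough?". A minimal sketch of one possible policy (the 
thresholds and behaviour are made up here, not necessarily what ODE's 
auto-disable does):

struct BodyState {
    double vx, vy, vz;     // linear velocity
    double wx, wy, wz;     // angular velocity
    int    idleSteps = 0;
    bool   sleeping  = false;
};

void update_sleep_state(BodyState& b,
                        double linThreshold = 0.01,   // m/s
                        double angThreshold = 0.01,   // rad/s
                        int    stepsToSleep = 60)
{
    double lin2 = b.vx*b.vx + b.vy*b.vy + b.vz*b.vz;
    double ang2 = b.wx*b.wx + b.wy*b.wy + b.wz*b.wz;
    if (lin2 < linThreshold*linThreshold && ang2 < angThreshold*angThreshold) {
        if (++b.idleSteps >= stepsToSleep)
            b.sleeping = true;        // drop the body from the solve
    } else {
        b.idleSteps = 0;              // moving again
        b.sleeping  = false;          // (a contact from an awake body should
    }                                 //  also wake a sleeping one)
}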

David

Sergiy Migdalskiy wrote:

> Hello:
>
> I'm working on an LCP solver, using the JM^-1J^T method in Baraff's 
> terminology (the LCP w = Mx + q, w >= 0, x >= 0, w^T x = 0, with the 
> matrix M non-singular and strictly positive definite). I have made some 
> pretty good progress, but I'm still far from a high-quality physics 
> package. I wanted to ask for suggestions on how to implement an 
> iterative solver with O(n) expected running time.
>
> I'm looking at running a problem with up to 10,000 contacts in real time 
> on a modern high-end PC with a low memory footprint (don't ask me why :)). 
> I'm pretty successful with this goal so far for frictionless balls :) 
> but I hope to make it work for everything. Collision detection is 
> not my focus now, as with this number of constraints the biggest 
> problem is collision response (that is, when I'm using something more 
> complicated than balls).
>
> The LCP matrix is pretty sparse, so it's relatively easy to represent. 
> But still, I have about 64,000 float entries in the matrix itself, plus 
> control structures. In essence, that means I'm limited to 50-100 sweeps 
> through the matrix (matrix-vector multiplications). I tried 
> constructing no-fill Cholesky factors for the matrix, but that doesn't 
> seem to work well. Obviously I can't use any significant fill-in due to 
> the prohibitive size of the problem.
>
> For some simpler configurations (a pile of 1000 balls, tightly 
> packed), the matrix is "almost diagonally dominant" in some sense, 
> which means simple Gauss-Seidel iteration works just fine; I only 
> need 15-30 matrix-vector multiplications, and it runs beautifully fast.
>
> For more complicated situations, however, there are big off-diagonal 
> elements, which bring the Gauss-Seidel relaxation scheme I'm using to 
> solve the LCP to its knees, just as the theory suggests. I tried block 
> Gauss-Seidel, but it has essentially the same problems with a 
> non-diagonally-dominant matrix.
>
> Can anyone suggest a way to precondition the JM^-1J^T-type matrix so 
> that it becomes diagonally dominant? I really want to stick to a very 
> simple iterative method, since it works so fast and well for simple 
> configurations. Does anyone know any projected iteration methods, 
> besides projected SOR/Gauss-Seidel, which work for positive-definite 
> LCPs? I'd kill for a projective analog of Conjugate Gradient for LCP! 
> CG solves the analogous linear problem (just find x such that Mx + q = 0) 
> in 30 steps even without preconditioning - and my budget is 50-100 steps.
>
> Has anyone tried using multigrid/multilevel methods to solve for contact 
> forces? The literature on multigrid for LCP is sparse, to say the least; 
> any references would be very much appreciated.
>
> Another option is to use some simplex method. I'm not a specialist in 
> this, but to the best of my knowledge simplex methods (Lemke, 
> criss-cross) don't work on sparse matrices, because the matrix gets more 
> and more filled in during pivoting. And memory is an issue for me. Am I 
> missing something here? Has anyone tried simplex methods for solving 
> contact forces in big systems (i.e. where it's impossible to fit the 
> whole dense LCP in memory)?
>
> The Dantzig method requires keeping and updating a factor of variable 
> size (depending on the active basis). I think I heard ODE uses or used 
> this? Anyway, there's no way for me to keep a dense factor of a 
> submatrix of a 10000x10000 JM^-1J^T matrix within a budget of 50-100 
> matrix-vector multiplications. Again, am I missing something? I have 
> never programmed updating a factor by adding/deleting a row/column, 
> but I did program a few sparse (incomplete) factors for this problem, 
> and even that was a huge pain, since the quality of the preconditioner 
> falls dramatically as I try to shrink memory usage by raising drop 
> tolerances or implementing some moderate fill-in heuristics. In any 
> case, I suspect the Dantzig method requires too many steps for a big 
> (10,000-variable) LCP. Am I wrong?
>
> Another way to solve the LCP is to use interior point methods. I 
> experimented with them using alternative formulations (QP with an 
> objective such as energy minimization). It works pretty well, but again, 
> I need at least 10 outer-loop steps in bad cases, each step consisting 
> of forming a preconditioner and solving at least one or two linear 
> problems (e.g. for a predictor-corrector type algorithm). I can only use 
> cheap preconditioners, and that means at least 20-30 steps (very 
> optimistically!) for each linear problem (I used MinRes, LSQR, QMR, 
> SQMR, CG and CGS to solve the linear problem, all of them with a few 
> different preconditioners - all just to make sure they work as the 
> theory predicts; I used both the Augmented System and Normal 
> Equations approaches; AS worked better for me). So, in total I'm 
> looking at at least 400 matrix-vector multiplications, which is out of 
> budget. Besides, in reality it'll be closer to 1200 due to 
> ill-conditioned systems, especially near small complementarity 
> gaps as the interior point method converges to the solution. Plus the 
> time to construct each preconditioner...
>
> Oh yeah, to top it all off, the nature of the engine is quite dynamic, 
> and warm starting (using the solution from the previous frame) is hardly 
> possible to an extent that could make a big difference...
>
> I'm starting to run out of sane ideas... Any suggestions would be 
> greatly appreciated.
>
> Sincerely,
> Sergiy



