[ODE] Allocation problem?
gcarlton at iinet.net.au
Wed Jun 7 20:51:34 MST 2006
Well, as Jonathan Klein replied, it looks like I was wrong about the need
for a FREEA.
In terms of reclaiming memory, I'd prefer that any solution could be
cleaned up completely via dCloseODE() rather than waiting for atexit().
That would help with leak checking, and it would also let an app get all
of its memory back when it needs it.
I don't know whether ODE can currently step different worlds in different
threads, but having statics precludes it. Any buffers could instead be
attached to the world struct, which lends itself well to adding some
extra control functions per world (setting the block size, condensing
blocks, querying the total grown size, etc.). For single-threaded
multi-world simulations, though, per-world buffers are unnecessarily
wasteful.
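As a rough illustration of what per-world buffers with extra control
functions might look like -- every name here (dxWorldScratch,
scratchSetBlockSize, and so on) is hypothetical, not existing ODE API:

```c
#include <stdlib.h>

/* Hypothetical per-world scratch state; imagine this embedded in the
   world struct rather than held in a static. */
typedef struct dxWorldScratch {
    void  *block;      /* scratch memory owned by this world */
    size_t size;       /* current capacity in bytes */
    size_t blockSize;  /* per-world configurable block size */
} dxWorldScratch;

/* Control functions of the kind suggested above. */
static void scratchSetBlockSize(dxWorldScratch *s, size_t bytes)
{
    s->blockSize = bytes;
}

static size_t scratchQueryGrownSize(const dxWorldScratch *s)
{
    return s->size;  /* total memory currently held for this world */
}

/* "Condense" a grown buffer back down to the configured block size. */
static void scratchCondense(dxWorldScratch *s)
{
    free(s->block);
    s->block = malloc(s->blockSize);
    s->size  = s->block ? s->blockSize : 0;
}

/* Called from dWorldDestroy (and so reachable from dCloseODE), so all
   memory is returned without waiting for atexit(). */
static void scratchDestroy(dxWorldScratch *s)
{
    free(s->block);
    s->block = NULL;
    s->size  = 0;
}
```

Because the state travels with the world, worlds in different threads
would not contend over a shared static buffer; the cost is one buffer
per world in the single-threaded multi-world case.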
Jon Watte (ODE) wrote:
> You can absolutely implement dALLOCA without leaking. The easiest
> thing to do is to re-use the existing blocks, and only grow when
> needed -- this doesn't "leak" although it may lead to "growing store"
> problems (it will also need an atexit() to free the data to avoid
> confusing leak checkers).
> dFREEA is not needed, and should not be added, because it would be a
> no-op, so code paths would get in there that didn't do the right thing
> in exceptional cases and nobody would know.
> The thing you need to do to make sure that a dALLOCA replacement works
> is to figure out when it's safe to clean up the previous allocations
> and re-use the pointer. At the entry of all dWorld step functions
> seems like as good a place as any.
> / h+
> Geoff Carlton wrote:
>> It would be interesting to revisit this after 0.6, and fix it once
>> and for all.
>> I had a look through the previous postings. The posted patch, which
>> mallocs when "full", would appear to leak memory unless a FREEA were
>> added for every ALLOCA call. In general, though, that sort of solution
>> looks like the right way to go.
>> jon klein wrote:
>>> On Jun 7, 2006, at 12:28 PM, Jon Watte (ODE) wrote:
>>>> This already got implemented once AFAICR. There was a big
>>>> discussion WRT
>>>> the overhead of malloc() vs the overhead of a linear allocator vs
>>>> whether there should be one big block or on-demand allocated small
>>>> blocks, vs static block re-use, ...
>>>> Thinking about it: did that ever actually make it into the
>>>> codebase? If
>>>> it did, then the dALLOC failing could be because of some heap
>>>> corruption, instead of running out of stack.
>>> I believe this issue has been coming up over and over again,
>>> literally for a couple of years, and I believe a fix is still not in
>>> the code.
>>> I've been using a modified version of the patch outlined here:
>>> My modification is shown here (in the quoted text):
>>> Personally, I feel that fixing this is extremely important. With
>>> alloca, even with an increased stack size, adding objects to a
>>> simulation will eventually cause ODE to crash. That may work fine for
>>> applications in which the user knows ahead of time how many objects
>>> will be created, but it's really not acceptable for an application
>>> that allows runtime creation of arbitrary objects (which is what I
>>> used ODE for -- http://www.spiderland.org/breve ).
>>> - jon klein
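The reuse-and-grow dALLOCA replacement Jon Watte describes above could be
sketched roughly as follows. This is a minimal illustration, not ODE
code; every name (Block, stepAllocReset, stepAlloc, stepAllocShutdown)
is hypothetical:

```c
#include <stdlib.h>

typedef struct Block {
    struct Block *next;
    size_t size, used;      /* payload capacity and bytes handed out */
} Block;                    /* payload follows the header */

static Block *head = NULL;  /* blocks retained between steps */
static Block *cur  = NULL;  /* block currently being allocated from */

/* Call at the entry of each dWorld step function: the previous step's
   allocations are dead, so every retained block is rewound for re-use.
   Nothing is freed here, so there is no dFREEA and no leak. */
static void stepAllocReset(void)
{
    for (Block *b = head; b; b = b->next)
        b->used = 0;
    cur = head;
}

/* dALLOCA replacement: bump allocation out of the current block,
   chaining a new block on demand. Earlier pointers stay valid because
   blocks are never realloc'd, only added; the cost is "growing store". */
static void *stepAlloc(size_t n)
{
    n = (n + 15) & ~(size_t)15;            /* round up to a 16-byte multiple */
    while (cur && cur->used + n > cur->size)
        cur = cur->next;                   /* skip to a block with room */
    if (!cur) {
        size_t payload = n > 65536 ? n : 65536;
        Block *b = malloc(sizeof(Block) + payload);
        if (!b) return NULL;
        b->size = payload;
        b->used = 0;
        b->next = head;                    /* grow the retained list */
        head = b;
        cur = b;
    }
    void *p = (char *)(cur + 1) + cur->used;
    cur->used += n;
    return p;
}

/* Hook for dCloseODE() (or atexit) so leak checkers see nothing live
   and the application can reclaim all memory on demand. */
static void stepAllocShutdown(void)
{
    while (head) {
        Block *b = head;
        head = head->next;
        free(b);
    }
    cur = NULL;
}
```

The common path is a pointer bump, so the overhead stays close to
alloca's, and hooking the shutdown into dCloseODE() rather than only
atexit() addresses the clean-up concern raised at the top of this post.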