[ODE] Floating point error propagation

Sean R. Lynch seanl at chaosring.org
Wed May 21 22:21:01 2003


On Wed, 2003-05-21 at 21:53, Jeff Shim wrote:
> In the case of a recursive neural network, signals are recursively
> multiplied or summed thousands of times, so the rounding errors
> eventually accumulate.
>
> Even though I used double precision, I still could not get an exact
> result.
>
> Are there any methods or options to avoid this?
>
> Maybe it is a fundamental limitation of the FPU.

What sort of neural network requires exact results?
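
That aside, the drift described above is a property of binary floating
point itself rather than an FPU defect: 0.1, for example, has no exact
binary representation, so repeatedly accumulating it wanders away from
the exact answer even in double precision. A minimal C sketch (the step
value and iteration count are arbitrary, chosen only to make the drift
visible):

    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        int i;

        /* 0.1 is not exactly representable in binary, so every
         * addition rounds, and the error compounds over the loop. */
        for (i = 0; i < 100000; i++)
            sum += 0.1;

        printf("naive sum: %.15f\n", sum);  /* not exactly 10000 */
        return 0;
    }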

Also, you should look into support vector machines, as they are well
understood and well characterised, as opposed to neural networks, which
are used primarily because they look kinda similar to how the brain
might work :)
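
As for ways to limit the accumulation: compensated (Kahan) summation is
one standard technique; it carries a correction term that recovers the
low-order bits lost in each addition. A rough sketch, again in C (the
helper name kahan_sum is made up for illustration, and nothing here is
specific to neural networks):

    #include <stdio.h>

    /* Kahan summation: c accumulates the low-order bits lost in each
     * addition and feeds them back into the next term. Compile without
     * optimizations that relax FP semantics, or the compiler may
     * "simplify" the correction away. */
    double kahan_sum(const double *x, int n)
    {
        double sum = 0.0, c = 0.0;
        int i;

        for (i = 0; i < n; i++) {
            double y = x[i] - c;  /* apply the running correction   */
            double t = sum + y;   /* low bits of y are lost here... */
            c = (t - sum) - y;    /* ...and recovered into c        */
            sum = t;
        }
        return sum;
    }

    int main(void)
    {
        static double x[100000];
        int i;

        for (i = 0; i < 100000; i++)
            x[i] = 0.1;
        printf("kahan sum: %.15f\n", kahan_sum(x, 100000));
        return 0;
    }

It does not make the result exact, but it keeps the accumulated error
roughly independent of the number of terms, which is what matters when
you are summing thousands of signals.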

