# Known Issues

The following is a list of known issues. To report another issue, please use the issue tracker.

## Using Positions

HOOMD re-orders particle positions in memory to improve performance. If you are using CG mappings that rely on the ordering of positions, be sure to disable the sorter:

```python
import hoomd

# Disable the sorter so particle positions keep their original order.
c = hoomd.context.initialize()
c.sorter.disable()
```


## Exploding Gradients

There is a bug in TensorFlow norms (https://github.com/tensorflow/tensorflow/issues/12071) that sometimes prevents optimizers from working well with them. Note that this only matters if you are summing up gradients, as is commonly done when computing gradients in optimizers; it is not usually an issue when just computing forces. There are three ways to deal with this:

### Small Training Rates

When training something like a Lennard-Jones potential or another 1/r potential, large gradients are possible near r = 0. You can prevent exploding gradients by using small learning rates and ensuring variables are initialized so that energies start out finite.
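As a minimal sketch, with a hypothetical trainable parameter and toy scalar loss standing in for a real training objective:

```python
import tensorflow as tf

# Hypothetical parameter and loss; in practice this would be, e.g.,
# a Lennard-Jones epsilon and a force-matching loss.
epsilon = tf.Variable(1.0, name='epsilon')
cost = tf.square(epsilon - 0.5)

# A conservatively small learning rate keeps each update small, so a
# momentarily large 1/r gradient cannot blow up the parameters.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4)
train_op = optimizer.minimize(cost)
```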

### Safe Norm

There is a workaround, `safe_norm()`, in HOOMD-TF. It has almost no performance penalty, so it is fine to replace `tf.norm` with `safe_norm()` throughout. Note, however, that this method adds a small amount to all norms, so it will not work well if you rely on some norms being exactly zero.
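For intuition, the idea behind a safe norm can be sketched as follows. This is an illustration of the technique, not HOOMD-TF's exact implementation; `safe_norm_sketch` and `delta` are hypothetical names:

```python
import tensorflow as tf

def safe_norm_sketch(tensor, axis=None, delta=1e-10):
    """Norm with a finite gradient at zero.

    tf.norm has a NaN gradient when the input is exactly zero, because
    its gradient divides by the norm itself. Adding a tiny delta under
    the square root keeps the gradient finite, at the cost of slightly
    inflating every norm.
    """
    return tf.sqrt(tf.reduce_sum(tensor * tensor, axis=axis) + delta)
```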

### Clipping Gradients

The third option is to clip gradients in the optimizer, so that no single large gradient can destabilize training.
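A sketch of the standard TensorFlow 1.x clipping pattern; the loss here is a hypothetical stand-in for your real objective:

```python
import tensorflow as tf

# Hypothetical scalar loss standing in for the real training objective.
sigma = tf.Variable(1.0, name='sigma')
cost = tf.square(sigma - 2.0)

optimizer = tf.train.AdamOptimizer(1e-4)

# Compute gradients explicitly instead of calling minimize() directly,
# then clip each one into [-1, 1] before applying it.
gvs = optimizer.compute_gradients(cost)
capped_gvs = [(tf.clip_by_value(grad, -1.0, 1.0), var)
              for grad, var in gvs if grad is not None]
train_op = optimizer.apply_gradients(capped_gvs)
```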