The following is a list of known issues. To report another issue, please use the issue tracker.
HOOMD re-orders particle positions to improve performance. If you are using CG mappings that rely on the ordering of positions, be sure to disable the sorter:
    import hoomd

    c = hoomd.context.initialize()
    c.sorter.disable()
There is a bug in TensorFlow norms (https://github.com/tensorflow/tensorflow/issues/12071) that sometimes prevents optimizers from working well: the gradient of tf.norm is NaN where the norm is zero. Note that the workarounds below are only necessary if you are summing up gradients, as is commonly done when an optimizer computes gradients; this is not usually an issue for just computing forces. There are three ways to deal with this, described below; the sketch that follows illustrates the underlying problem.
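To see the underlying problem, here is a minimal TF1-style sketch (not part of HOOMD-TF; the tensor here is purely illustrative):

    import tensorflow as tf

    x = tf.zeros([3])
    # The gradient of the L2 norm is x / ||x||, which is 0/0 = NaN at x = 0.
    grad = tf.gradients(tf.norm(x), x)[0]

    with tf.Session() as sess:
        print(sess.run(grad))  # [nan nan nan]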
Small Training Rates
When training something like a Lennard-Jones potential or another steep
potential, large gradients are possible. You can prevent exploding
gradients by using small learning rates and ensuring variables are
initialized so that energies are finite.
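For example, a minimal sketch in the TF1 style used on this page (the variable names and starting values are illustrative):

    import tensorflow as tf

    # Initialize parameters so the starting energy is finite for your configuration.
    epsilon = tf.Variable(1.0, name='lj-epsilon')
    sigma = tf.Variable(0.9, name='lj-sigma')

    # ... build the energy and a cost from these variables ...

    # A small learning rate keeps steep Lennard-Jones gradients from exploding.
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-4)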
There is a workaround (safe_norm()) in HOOMD-TF. There
is almost no performance penalty, so it is fine to replace
tf.norm() with safe_norm() throughout. This method adds a small
amount to all the norms, though, so if you rely on some norms being exactly zero
it will not work well.
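A hedged sketch of the replacement is below; it assumes the package is imported as hoomd.htf and that safe_norm is exposed as a static method of graph_builder that forwards keyword arguments such as axis to tf.norm:

    import tensorflow as tf
    import hoomd.htf as htf

    # Illustrative tensor of pairwise displacement vectors (shape is arbitrary here).
    rij = tf.random_normal([32, 16, 3])

    # tf.norm has a NaN gradient wherever the norm is zero:
    r = tf.norm(rij, axis=-1)

    # safe_norm avoids the NaN by adding a small amount to the norm,
    # so do not rely on any of these norms being exactly zero:
    r = htf.graph_builder.safe_norm(rij, axis=-1)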
Another approach is to clip gradients instead of using safe_norm:
    import tensorflow as tf

    # 'cost' is your training loss, defined elsewhere.
    optimizer = tf.train.AdamOptimizer(1e-4)
    # Compute gradients, clip each one to a maximum norm of 1.0, then apply them.
    gvs = optimizer.compute_gradients(cost)
    capped_gvs = [(tf.clip_by_norm(grad, 1.0), var) for grad, var in gvs]
    train_op = optimizer.apply_gradients(capped_gvs)
Using a max-size neighbor list is non-ideal, especially in CG simulations where density is non-uniform, because the maximum number of neighbors must be chosen for the densest region and is wasted elsewhere.
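For reference, a hedged sketch of where that maximum size enters, assuming the graph_builder API used elsewhere in the HOOMD-TF docs (first argument = maximum neighbors per particle):

    import hoomd.htf as htf

    # NN must be chosen for the densest region of the system; in sparse regions
    # the unused neighbor-list entries are simply padding.
    NN = 128
    graph = htf.graph_builder(NN)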