Model Layers
These are standard layers useful for building molecular neural networks. Only some are detailed here; see Layers for the complete list.
WCARepulsion
The WCARepulsion layer can be used to add a trainable repulsion. Be careful to choose the starting sigma to be small enough that there will not be large gradients at the start of training. A regularization term is added to push sigma toward more positive values; otherwise it will simply drift away and stop mattering during training. This regularization can be removed.
class WCA(htf.SimModel):
    def setup(self):
        self.wca = htf.WCARepulsion(0.5)

    def compute(self, nlist):
        energy = self.wca(nlist)
        forces = htf.compute_nlist_forces(nlist, energy)
        return forces
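To see why the starting sigma matters, consider the standard WCA pair energy, U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6) + eps for r below the 2**(1/6)*sigma cutoff. The following is a minimal standalone sketch (plain Python, not using htf) with an analytic derivative with respect to sigma; it shows that once sigma grows past typical pair distances, the gradient blows up like (sigma/r)**12:

```python
import math

def wca_energy(r, sigma, eps=1.0):
    """WCA (purely repulsive, shifted LJ) pair energy.

    U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6) + eps  for r < 2**(1/6)*sigma
    U(r) = 0 otherwise
    """
    if r >= 2 ** (1 / 6) * sigma:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 * sr6 - sr6) + eps

def dU_dsigma(r, sigma, eps=1.0):
    """Analytic derivative of the WCA energy with respect to sigma."""
    if r >= 2 ** (1 / 6) * sigma:
        return 0.0
    sr6 = (sigma / r) ** 6
    return (24 * eps / sigma) * (2 * sr6 * sr6 - sr6)

# A pair at distance r = 1.0: a modest sigma gives a gentle gradient,
# while a sigma comparable to the pair distance gives a huge one.
print(dU_dsigma(1.0, 0.9))   # ~0.9
print(dU_dsigma(1.0, 1.2))   # ~297
```

Starting from a small sigma (as with `WCARepulsion(0.5)` above) keeps most pairs outside the repulsive core, so early training steps see modest gradients.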
Biasing with EDS
To apply Experiment Directed Simulation biasing to a system, use an EDS layer (EDSLayer):
class EDSModel(htf.SimModel):
    def setup(self):
        self.cv_avg = tf.keras.metrics.Mean()
        self.eds_bias = htf.EDSLayer(4., 5, 1/5)

    def compute(self, nlist, positions, box):
        # get distance from center
        rvec = htf.wrap_vector(positions[0, :3], box)
        # compute CV
        cv = tf.norm(tensor=rvec)
        self.cv_avg.update_state(cv)
        alpha = self.eds_bias(cv)
        # eds energy
        energy = cv * alpha
        forces = htf.compute_positions_forces(positions, energy)
        return forces, alpha
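The collective variable above is the particle's distance from the origin after periodic wrapping. As a rough illustration of what a minimum-image wrap does, here is a plain-Python sketch (this is only conceptual; the actual htf.wrap_vector operates on TF tensors and the HOOMD box, including any tilt factors):

```python
def wrap_vector(r, box_lengths):
    """Minimum-image wrap: map each component of r into [-L/2, L/2].

    Conceptual sketch only; assumes an orthorhombic box with edge
    lengths box_lengths, no tilt factors.
    """
    return [x - L * round(x / L) for x, L in zip(r, box_lengths)]

print(wrap_vector([4.0, -3.5, 0.2], [6.0, 6.0, 6.0]))  # [-2.0, 2.5, 0.2]
```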
Here, the EDSLayer call returns the Lagrange multiplier (EDS coupling) alpha that is used to bias the simulation.
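The idea behind the coupling is a feedback loop: alpha is adjusted over time so that the running average of the CV matches a set point, while the bias energy alpha * cv steers the dynamics. The toy below illustrates only this feedback idea with a fake linear CV response; it is not the htf implementation, which follows the stochastic update scheme of the EDS method:

```python
import random

def toy_eds(target, steps=2000, lr=0.5, period=20, seed=0):
    """Toy EDS-style feedback: tune alpha so the windowed mean of a
    collective variable (CV) matches a set point.

    The 'simulation' is a stand-in: each sampled CV responds linearly
    to the bias (mean 2.0 - 0.5*alpha, plus noise). Hypothetical
    parameters; for illustration only.
    """
    rng = random.Random(seed)
    alpha = 0.0
    window = []
    for _ in range(steps):
        cv = 2.0 - 0.5 * alpha + rng.gauss(0, 0.1)  # fake CV sample
        window.append(cv)
        if len(window) == period:
            mean_cv = sum(window) / period
            # if <cv> is above target, raise alpha to push it down
            alpha += lr * (mean_cv - target)
            window.clear()
    return alpha

# with this fake linear response, <cv> = target exactly when alpha = 2.0
print(toy_eds(target=1.0))
```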