gninatorch.models module
GNINA Caffe models translated to PyTorch.
Notes
The PyTorch models try to follow the original Caffe models as much as possible. However, some changes are necessary.
Notable differences:
* The MolDataLayer is now separated from the model; its parameters are controlled by CLI arguments in the training process.
* The model output for pose prediction corresponds to the log softmax of the last fully-connected layer instead of the softmax.
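Since the models return log probabilities rather than probabilities, downstream code needs to exponentiate the output to recover a pose score in [0, 1]. The following standalone sketch (plain Python with made-up logits, not gninatorch code) shows that exponentiating a log softmax recovers the softmax probabilities:

```python
import math

def log_softmax(logits):
    # Numerically stable log softmax: subtract the max before exponentiating.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(v - m) for v in logits))
    return [v - log_sum for v in logits]

logits = [2.0, -1.0]  # hypothetical outputs of the last fully-connected layer
log_probs = log_softmax(logits)
probs = [math.exp(v) for v in log_probs]  # exponentiate to recover probabilities

assert abs(sum(probs) - 1.0) < 1e-12
```
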
- class gninatorch.models.Default2017(input_dims: Tuple)[source]
Bases:
Module
GNINA default2017 model architecture.
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
Notes
This architecture was translated from the following Caffe model:
The main difference is that the PyTorch implementation returns the log softmax.
- forward(x: Tensor)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class gninatorch.models.Default2017Affinity(input_dims: Tuple)[source]
Bases:
Default2017Pose
GNINA default2017 model architecture for pose and affinity prediction.
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
Notes
This architecture was translated from the following Caffe model:
The main difference is that the PyTorch implementation returns the log softmax of the final linear layer instead of feeding it to a
SoftmaxWithLoss
layer.
- forward(x: Tensor)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Log probabilities for ligand pose and affinity prediction
- Return type
Tuple[torch.Tensor, torch.Tensor]
Notes
The pose score is the log softmax of the output of the last linear layer.
- training: bool
- class gninatorch.models.Default2017Flex(input_dims: Tuple)[source]
Bases:
Default2017
GNINA default2017 model architecture for multi-task pose prediction (ligand and flexible residues).
Poses are annotated based on both ligand RMSD and flexible residues RMSD (w.r.t. the cognate receptor in the case of cross-docking).
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
- forward(x: Tensor)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Log probabilities for ligand pose and flexible residues pose prediction
- Return type
Tuple[torch.Tensor, torch.Tensor]
Notes
The pose score is the log softmax of the output of the last linear layer.
- training: bool
- class gninatorch.models.Default2017Pose(input_dims: Tuple)[source]
Bases:
Default2017
GNINA default2017 model architecture for pose prediction.
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
Notes
This architecture was translated from the following Caffe model:
The main difference is that the PyTorch implementation returns the log softmax of the final linear layer instead of feeding it to a
SoftmaxWithLoss
layer.
- forward(x: Tensor)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Log probabilities for ligand pose
- Return type
torch.Tensor
Notes
The pose score is the log softmax of the output of the last linear layer.
- training: bool
- class gninatorch.models.Default2018(input_dims: Tuple)[source]
Bases:
Module
GNINA default2018 model architecture.
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
Notes
This architecture was translated from the following Caffe model:
The main difference is that the PyTorch implementation returns the log softmax.
- training: bool
- class gninatorch.models.Default2018Affinity(input_dims: Tuple)[source]
Bases:
Default2018Pose
GNINA default2018 model architecture for pose and affinity prediction.
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
Notes
This architecture was translated from the following Caffe model:
The main difference is that the PyTorch implementation returns the log softmax.
- forward(x: Tensor)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Log probabilities for ligand pose and affinity prediction
- Return type
Tuple[torch.Tensor, torch.Tensor]
Notes
The pose score is the log softmax of the output of the last linear layer.
- training: bool
- class gninatorch.models.Default2018Flex(input_dims: Tuple)[source]
Bases:
Default2018
GNINA default2018 model architecture for multi-task pose prediction (ligand and flexible residues).
Poses are annotated based on both ligand RMSD and flexible residues RMSD (w.r.t. the cognate receptor in the case of cross-docking).
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
- forward(x: Tensor)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Log probabilities for ligand pose and flexible residues pose prediction
- Return type
Tuple[torch.Tensor, torch.Tensor]
Notes
The pose score is the log softmax of the output of the last linear layer.
- training: bool
- class gninatorch.models.Default2018Pose(input_dims: Tuple)[source]
Bases:
Default2018
GNINA default2018 model architecture for pose prediction.
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
Notes
This architecture was translated from the following Caffe model:
The main difference is that the PyTorch implementation returns the log softmax.
- forward(x: Tensor)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Log probabilities for ligand pose
- Return type
torch.Tensor
Notes
The pose score is the log softmax of the output of the last linear layer.
- training: bool
- class gninatorch.models.Dense(input_dims: Tuple, num_blocks: int = 3, num_block_features: int = 16, num_block_convs: int = 4, affinity: bool = True)[source]
Bases:
Module
GNINA Dense model architecture.
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
num_blocks (int) – Number of dense blocks
num_block_features (int) – Number of features in dense block convolutions
num_block_convs (int) – Number of convolutions in dense block
Notes
Original implementation by Andrew McNutt available here:
The main difference is that the original implementation returns the raw output of the last linear layer, while here the output is the log softmax of the last linear layer.
- forward(x)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Raises
NotImplementedError –
Notes
The forward pass needs to be implemented in derived classes.
- training: bool
- class gninatorch.models.DenseAffinity(input_dims: Tuple, num_blocks: int = 3, num_block_features: int = 16, num_block_convs: int = 4)[source]
Bases:
DensePose
GNINA Dense model architecture for binding affinity prediction.
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
num_blocks (int) – Number of dense blocks
num_block_features (int) – Number of features in dense block convolutions
num_block_convs (int) – Number of convolutions in dense block
Notes
Original implementation by Andrew McNutt available here:
The main difference is that the original implementation returns the raw output of the last linear layer, while here the output is the log softmax of the last linear layer.
- forward(x)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Log probabilities for ligand pose and affinity prediction
- Return type
Tuple[torch.Tensor, torch.Tensor]
Notes
The pose score is the log softmax of the output of the last linear layer.
- training: bool
- class gninatorch.models.DenseBlock(in_features: int, num_block_features: int = 16, num_block_convs: int = 4, tag: Union[int, str] = '')[source]
Bases:
Module
DenseBlock for Dense model.
- Parameters
in_features (int) – Input features for the first layer
num_block_features (int) – Number of output features (channels) for the convolutional layers
num_block_convs (int) – Number of convolutions
tag (Union[int, str]) – Tag identifying the DenseBlock
Notes
The total number of output features corresponds to the input features concatenated with the
num_block_features
features produced by each of the convolutional layers (num_block_convs
times), i.e. in_features + num_block_convs * num_block_features.
- forward(x: Tensor) → Tensor[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Output tensor
- Return type
torch.Tensor
- training: bool
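The feature bookkeeping described in the DenseBlock notes can be sketched in plain Python; the helper below is hypothetical (not part of gninatorch) and just computes the number of output channels of a dense block under the stated concatenation scheme:

```python
def dense_block_out_features(in_features, num_block_features=16, num_block_convs=4):
    # Each convolution adds num_block_features channels, and the block output
    # concatenates the input with all intermediate feature maps.
    return in_features + num_block_convs * num_block_features

# With the defaults, a block fed 32 channels outputs 32 + 4 * 16 = 96 channels.
assert dense_block_out_features(32) == 96
```
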
- class gninatorch.models.DenseFlex(input_dims: Tuple, num_blocks: int = 3, num_block_features: int = 16, num_block_convs: int = 4)[source]
Bases:
Dense
GNINA dense model architecture for multi-task pose prediction (ligand and flexible residues).
Poses are annotated based on both ligand RMSD and flexible residues RMSD (w.r.t. the cognate receptor in the case of cross-docking).
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
num_blocks (int) – Number of dense blocks
num_block_features (int) – Number of features in dense block convolutions
num_block_convs (int) – Number of convolutions in dense block
Notes
Original implementation by Andrew McNutt available here:
The main difference is that the original implementation returns the raw output of the last linear layer, while here the output is the log softmax of the last linear layer.
- forward(x)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Log probabilities for ligand pose and flexible residues pose prediction
- Return type
Tuple[torch.Tensor, torch.Tensor]
Notes
The pose score is the log softmax of the output of the last linear layer.
- training: bool
- class gninatorch.models.DensePose(input_dims: Tuple, num_blocks: int = 3, num_block_features: int = 16, num_block_convs: int = 4)[source]
Bases:
Dense
GNINA Dense model architecture for pose prediction.
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
num_blocks (int) – Number of dense blocks
num_block_features (int) – Number of features in dense block convolutions
num_block_convs (int) – Number of convolutions in dense block
Notes
Original implementation by Andrew McNutt available here:
The main difference is that the original implementation returns the raw output of the last linear layer, while here the output is the log softmax of the last linear layer.
- forward(x)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Log probabilities for ligand pose
- Return type
torch.Tensor
Notes
The pose score is the log softmax of the output of the last linear layer.
- training: bool
- class gninatorch.models.GNINAModelEnsemble(models: List[Module])[source]
Bases:
Module
Ensemble of GNINA models.
- Parameters
models (List[nn.Module]) – List of models to use in the ensemble
Notes
All models in the ensemble are assumed to perform both pose and affinity prediction.
Modules are stored in
nn.ModuleList
so that they are properly registered.
- forward(x: Tensor)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Logarithm of the pose score, affinity prediction (average) and affinity variance
- Return type
Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
Notes
For pose prediction, the average has to be performed on the scores, not their logarithm (which is what the models return). In order to be consistent with everywhere else (where the logarithm of the prediction is returned), here we recover the scores by exponentiating, compute their average, and finally return the logarithm of the computed average.
- training: bool
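The averaging scheme described in the ensemble notes can be sketched without PyTorch. Assuming each model returns the logarithm of its pose score, the ensemble score is computed in probability space and converted back to a logarithm; the logsumexp formulation below is my own numerically stable rendering of that idea, not necessarily the library's exact code:

```python
import math

def ensemble_log_score(log_scores):
    # log(mean(exp(s_i))) computed stably: logsumexp minus log(n).
    m = max(log_scores)
    return (m
            + math.log(sum(math.exp(s - m) for s in log_scores))
            - math.log(len(log_scores)))

# Hypothetical per-model pose scores of 0.9, 0.7, and 0.8:
log_scores = [math.log(0.9), math.log(0.7), math.log(0.8)]
avg_log = ensemble_log_score(log_scores)
assert abs(math.exp(avg_log) - 0.8) < 1e-12  # mean of the scores, not of the logs
```

Averaging the logarithms directly would instead yield the log of the geometric mean, which is why the exponentiate-average-log round trip is needed.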
- class gninatorch.models.HiResAffinity(input_dims: Tuple)[source]
Bases:
Module
GNINA HiResAffinity model architecture.
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
Notes
This architecture was translated from the following Caffe model:
The main difference is that the PyTorch implementation returns the log softmax.
This model is implemented only for multi-task pose and affinity prediction.
- forward(x: Tensor)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Log probabilities for ligand pose and affinity prediction
- Return type
Tuple[torch.Tensor, torch.Tensor]
Notes
The pose score is the log softmax of the output of the last linear layer.
- training: bool
- class gninatorch.models.HiResPose(input_dims: Tuple)[source]
Bases:
Module
GNINA HiResPose model architecture.
- Parameters
input_dims (tuple) – Model input dimensions (channels, depth, height, width)
Notes
This architecture was translated from the following Caffe model:
The main difference is that the PyTorch implementation returns the log softmax.
This model is implemented only for multi-task pose and affinity prediction.
- forward(x: Tensor)[source]
- Parameters
x (torch.Tensor) – Input tensor
- Returns
Log probabilities for ligand pose and affinity prediction
- Return type
Tuple[torch.Tensor, torch.Tensor]
Notes
The pose score is the log softmax of the output of the last linear layer.
- training: bool
- class gninatorch.models.Model(model, affinity, flex)
Bases:
tuple
- affinity
Alias for field number 1
- flex
Alias for field number 2
- model
Alias for field number 0
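Model is a plain namedtuple bundling a network with two capability flags. A minimal standalone sketch of how such a record behaves (reconstructed here for illustration, not imported from gninatorch; the field values are made up):

```python
from collections import namedtuple

# Field order matches the aliases documented above:
# model (0), affinity (1), flex (2).
Model = namedtuple("Model", ["model", "affinity", "flex"])

entry = Model(model="default2017", affinity=True, flex=False)
assert entry[0] == entry.model == "default2017"  # positional and named access agree
assert entry.affinity and not entry.flex
```
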
- gninatorch.models.weights_and_biases_init(m: Module) None [source]
Initialize the weights and biases of the model.
- Parameters
m (nn.Module) – Module (layer) to initialize
Notes
This function is used to initialize the weights of the model for both convolutional and linear layers. Weights are initialized using uniform Xavier initialization while biases are set to zero.
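Xavier (Glorot) uniform initialization draws weights from U(-a, a) with a = gain * sqrt(6 / (fan_in + fan_out)), and here biases are zeroed. A torch-free sketch of that scheme (my own illustration of the bound that torch.nn.init.xavier_uniform_ uses with its default gain of 1):

```python
import math
import random

def xavier_uniform_bound(fan_in, fan_out, gain=1.0):
    # Bound of the uniform distribution used by Xavier initialization.
    return gain * math.sqrt(6.0 / (fan_in + fan_out))

def init_linear(fan_in, fan_out):
    # Weights ~ U(-a, a); biases set to zero, as in weights_and_biases_init.
    a = xavier_uniform_bound(fan_in, fan_out)
    weights = [[random.uniform(-a, a) for _ in range(fan_in)]
               for _ in range(fan_out)]
    biases = [0.0] * fan_out
    return weights, biases

w, b = init_linear(128, 2)
assert all(v == 0.0 for v in b)
assert all(abs(x) <= xavier_uniform_bound(128, 2) for row in w for x in row)
```
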