ldctbench.utils.training_utils
PerceptualLoss(network, device, in_ch=3, layers=[3, 8, 15, 22], norm='l1', return_features=False)
Bases: Module
The layers argument defines where to extract the activations. With the defaults, style losses are computed at layer 3 ("relu1_2"), 8 ("relu2_2"), 15 ("relu3_3"), and 22 ("relu4_3"), and the perceptual (content) loss is evaluated at layer 15 ("relu3_3"). In [1], the content loss is evaluated in VGG19 after the 16th (last) conv layer (layer 34).
-
Q. Yang et al., “Low-dose CT image denoising using a generative adversarial network with wasserstein distance and perceptual loss,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1348–1357, Jun. 2018. ↩
Parameters:
-
network (str) – Which VGG flavor to use; must be "vgg16" or "vgg19"
-
device (device) – Torch device to use
-
in_ch (int, default: 3) – Number of input channels, by default 3
-
layers (List[int], default: [3, 8, 15, 22]) – Indices of the layers at which to extract features, by default [3, 8, 15, 22]
-
norm (str, default: 'l1') – Pixelwise norm used to compare features; must be "l1" or "mse", by default "l1"
-
return_features (bool, default: False) – Whether to also return the extracted features, by default False
Raises:
-
ValueError – norm is neither "l1" nor "mse".
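The role of the norm argument can be sketched with plain numpy (an illustrative stand-in, not the actual implementation; in practice the feature maps would come from the selected VGG activations):

```python
import numpy as np

def feature_loss(feats_pred, feats_target, norm="l1"):
    """Compare activations extracted at the chosen layers.

    "l1" and "mse" are the two pixelwise norms the class accepts;
    anything else raises ValueError, mirroring the documented behavior.
    """
    if norm not in ("l1", "mse"):
        raise ValueError(f"norm must be 'l1' or 'mse', got {norm!r}")
    per_layer = []
    for fp, ft in zip(feats_pred, feats_target):
        diff = fp - ft
        per_layer.append(np.abs(diff).mean() if norm == "l1" else (diff ** 2).mean())
    # Average the per-layer losses into a single scalar
    return float(np.mean(per_layer))
```

Identical features yield a loss of zero regardless of the norm, and larger deviations are penalized quadratically under "mse" versus linearly under "l1".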
repeat_ch(in_ch)
Bases: object
Class that repeats the input 3 times along the channel dimension if in_ch == 1
Parameters:
-
in_ch (int) – Number of input channels.
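A minimal numpy sketch of what such a channel repeat does (the class itself presumably operates on torch tensors):

```python
import numpy as np

def repeat_channels(x, in_ch):
    # Tile a single-channel batch (N, 1, H, W) to (N, 3, H, W) so grayscale
    # CT slices match the 3-channel input a VGG network expects; input with
    # other channel counts passes through unchanged.
    return np.repeat(x, 3, axis=1) if in_ch == 1 else x
```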
setup_dataloader(args, datasets)
Returns a dict of dataloaders, one per phase
Parameters:
-
args (Namespace) – Command line arguments
-
datasets (Dict[str, Dataset]) – Dictionary of datasets for each phase.
Returns:
-
Dict[str, DataLoader] – Dictionary of dataloaders for each phase.
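The phase-keyed mapping can be illustrated with a toy batching helper (a hypothetical stand-in; the real function wraps each dataset in a torch DataLoader configured from args):

```python
def make_batches(dataset, batch_size):
    # Stand-in for a DataLoader: chunk the dataset into fixed-size batches.
    return [dataset[i:i + batch_size] for i in range(0, len(dataset), batch_size)]

# One dataset per phase in, one loader per phase out, keyed identically
datasets = {"train": list(range(6)), "val": list(range(4))}
loaders = {phase: make_batches(ds, batch_size=2) for phase, ds in datasets.items()}
```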
setup_optimizer(args, parameters)
Setup optimizer for given model parameters
Parameters:
-
args (Namespace) – Command line arguments
-
parameters (Iterator[Parameter]) – Parameters to be optimized. For some model: nn.Module these can be retrieved via model.parameters()
Returns:
-
Optimizer – Optimizer for the given parameters
Raises:
-
ValueError – If args.optimizer is not one of "sgd", "adam", or "rmsprop"
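The dispatch-and-raise pattern can be sketched as follows (a simplified stand-in; the real function constructs the corresponding torch.optim optimizer with hyperparameters taken from args):

```python
def pick_optimizer(name):
    # Map the supported names to the optimizers they would select; anything
    # else raises ValueError, mirroring the documented behavior.
    supported = {
        "sgd": "torch.optim.SGD",
        "adam": "torch.optim.Adam",
        "rmsprop": "torch.optim.RMSprop",
    }
    if name not in supported:
        raise ValueError(f"Unknown optimizer: {name!r}")
    return supported[name]
```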