ldctbench.evaluate.utils

compute_metric(targets, predictions, metrics, denormalize_fn=None, exam_type=None, ldct_iqa=None)

Compute metrics for the given predictions and targets.

Parameters:

  • targets (Union[Tensor, ndarray]) –

    Tensor or ndarray of shape (mbs, 1, H, W) holding ground truth

  • predictions (Union[Tensor, ndarray]) –

    Tensor or ndarray of shape (mbs, 1, H, W) holding predictions

  • metrics (List[str]) –

    List of metrics to compute; each must be one of "VIF" | "PSNR" | "RMSE" | "SSIM" | "LDCTIQA"

  • denormalize_fn (Optional[Callable], default: None ) –

    Function to use for denormalizing, by default None

  • exam_type (Optional[str], default: None ) –

    Exam type (for computing SSIM and PSNR on windowed images), by default None

  • ldct_iqa (Optional[LDCTIQA], default: None ) –

    LDCTIQA object for computing LDCTIQA score, by default None

Returns:

  • Dict[str, List]

    Dictionary mapping each metric to a list of length mbs (one value per sample)

Raises:

  • ValueError

    If predictions.shape != targets.shape

  • ValueError

    If an element of metrics is not one of "VIF" | "PSNR" | "RMSE" | "SSIM" | "LDCTIQA"
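
Example (a minimal sketch; the arrays are dummy data, and that PSNR and RMSE can be computed without denormalize_fn or exam_type is an assumption):

```python
import numpy as np

from ldctbench.evaluate.utils import compute_metric

# Dummy data of shape (mbs, 1, H, W); values are illustrative only.
targets = np.random.rand(4, 1, 64, 64).astype(np.float32)
predictions = targets + 0.01 * np.random.randn(4, 1, 64, 64).astype(np.float32)

# One value per sample and metric, e.g. {"PSNR": [...], "RMSE": [...]}.
results = compute_metric(targets, predictions, metrics=["PSNR", "RMSE"])
```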

denormalize(x, method=None, normalization=None)

Denormalize a tensor or ndarray based on the normalization type of the trained model.

Parameters:

  • x (Union[Tensor, ndarray]) –

    Tensor or ndarray to denormalize

  • method (Union[Literal[RESNET, CNN10, DUGAN, QAE, REDCNN, TRANSCT, WGANVGG, BILATERAL], str, None], default: None ) –

    Enum item or string specifying the model, used to determine which normalization to use. See ldctbench.hub.methods.Methods for more info, by default None

  • normalization (Optional[str], default: None ) –

    Normalization method, must be "meanstd" | "minmax", by default None

Returns:

  • Union[Tensor, ndarray]

    Denormalized tensor or ndarray

Raises:

  • ValueError

    If normalization is neither "meanstd" nor "minmax"
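
Example (a sketch; passing the method name as a plain string is assumed to work, see ldctbench.hub.methods.Methods for the valid names):

```python
import numpy as np

from ldctbench.evaluate.utils import denormalize

x = np.random.rand(1, 1, 64, 64).astype(np.float32)

# Either name the normalization explicitly ...
x_denorm = denormalize(x, normalization="meanstd")

# ... or pass a method name so the matching normalization is looked up.
x_denorm = denormalize(x, method="REDCNN")
```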

normalize(x, method=None, normalization=None)

Normalize a tensor or ndarray based on the normalization type of the trained model.

Parameters:

  • x (Union[Tensor, ndarray]) –

    Tensor or ndarray to normalize

  • method (Union[Literal[RESNET, CNN10, DUGAN, QAE, REDCNN, TRANSCT, WGANVGG, BILATERAL], str, None], default: None ) –

    Enum item or string specifying the model, used to determine which normalization to use. See ldctbench.hub.methods.Methods for more info, by default None

  • normalization (Optional[str], default: None ) –

    Normalization method, must be "meanstd" | "minmax", by default None

Returns:

  • Union[Tensor, ndarray]

    Normalized tensor or ndarray

Raises:

  • ValueError

    If normalization is neither "meanstd" nor "minmax"
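
Example (a sketch; normalize and denormalize are expected to invert each other when given the same normalization argument, assuming both calls resolve to the same dataset statistics):

```python
import numpy as np

from ldctbench.evaluate.utils import denormalize, normalize

x = np.random.rand(1, 1, 64, 64).astype(np.float32)

# Round trip: denormalize should (approximately) undo normalize
# when both calls use the same normalization argument.
x_norm = normalize(x, normalization="minmax")
x_back = denormalize(x_norm, normalization="minmax")
```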

preprocess(x, **normalization_kwargs)

Preprocess an input tensor or ndarray

Parameters:

  • x (Union[Tensor, ndarray]) –

    Input tensor or ndarray

Returns:

  • Tensor

    Preprocessed tensor
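
Example (a sketch; that **normalization_kwargs configure the normalization step, e.g. via normalization="meanstd", is assumed from the parameter name):

```python
import numpy as np

from ldctbench.evaluate.utils import preprocess

x = np.random.rand(1, 1, 512, 512).astype(np.float32)

# normalization_kwargs are assumed to be forwarded to the normalization step.
x_pre = preprocess(x, normalization="meanstd")  # returns a torch.Tensor
```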

setup_trained_model(run_name, device=torch.device('cuda'), network_name='Model', state_dict=None, return_args=False, return_model=True, eval=True, **model_kwargs)

Set up a trained model from a run stored in ./wandb

Parameters:

  • run_name (str) –

    Name of the run (identical to its folder name)

  • device (device, default: device('cuda') ) –

    Device to move model to, by default torch.device("cuda")

  • network_name (str, default: 'Model' ) –

    Class name of network, by default "Model"

  • state_dict (Optional[str], default: None ) –

    Name of the state_dict. If None, the model is initialized with random parameters, by default None

  • return_args (bool, default: False ) –

    Return args of training run, by default False

  • return_model (bool, default: True ) –

    Return model, by default True

  • eval (bool, default: True ) –

    Set model to eval mode, by default True

Returns:

  • Union[Tuple[Module, Namespace], Module, Namespace]

    The model, the args of the training run, or a tuple of both, depending on return_model and return_args
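
Example (a sketch; the run name and state_dict name are placeholders, and the (model, args) return order is assumed from the return annotation):

```python
import torch

from ldctbench.evaluate.utils import setup_trained_model

# Placeholder run and state_dict names; substitute your own run in ./wandb.
model = setup_trained_model(
    "run-20240101_000000-abcd1234",
    device=torch.device("cpu"),
    state_dict="best_model",
)

# With return_args=True, the args of the training run are returned as well.
model, args = setup_trained_model(
    "run-20240101_000000-abcd1234",
    device=torch.device("cpu"),
    state_dict="best_model",
    return_args=True,
)
```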

vif(x, y, sigma_n_sq=2.0)

Compute the visual information fidelity (VIF)
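
Example (a sketch; the 2D shapes and treating x as the reference image are assumptions):

```python
import numpy as np

from ldctbench.evaluate.utils import vif

# Reference image x and a noisy version y; which argument is treated
# as the reference is an assumption here.
x = np.random.rand(256, 256).astype(np.float32)
y = x + 0.05 * np.random.randn(256, 256).astype(np.float32)

score = vif(x, y, sigma_n_sq=2.0)
```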