Public API Documentation

Subpackages

Submodules

tsunami_ip_utils.comparisons module

Tools for generating comparisons between TSUNAMI-IP calculated integral parameters (e.g. \(c_k\), \(E\)) and values calculated using correlation methods with either cross section sampling or uncertainty contributions.

tsunami_ip_utils.comparisons.E_calculation_comparison(application_filenames, experiment_filenames, coverx_library='252groupcov7.1', tsunami_ip_output_filename=None)[source]

Function that compares the calculated similarity parameter E with the TSUNAMI-IP output for each application with each experiment. The comparison is done for the nominal values and the uncertainties of the E values. In addition, the difference between manually calculated uncertainties and automatically calculated uncertainties (i.e. via the uncertainties package) is also calculated. The results are returned as a pandas DataFrame.

Parameters:
  • application_filenames (Union[List[str], List[Path]]) – Paths to the application sdf files.

  • experiment_filenames (Union[List[str], List[Path]]) – Paths to the experiment sdf files.

  • coverx_library – The coverx library to use for TSUNAMI-IP. Default is '252groupcov7.1'.

  • tsunami_ip_output_filename (Union[str, Path, None]) – Optional path to the TSUNAMI-IP output file. If not specified, the function will run TSUNAMI-IP via a templated input file to calculate the E values.

Return type:

Dict[str, DataFrame]

Returns:

Dictionary of pandas DataFrames for each type of E index. The keys are 'total', 'fission', 'capture', and 'scatter'. The DataFrames contain the calculated E values, the manual uncertainties, the TSUNAMI-IP values, the relative difference in the mean, and the relative difference in the manually computed uncertainty. The DataFrames are indexed by the experiment number and the columns are a MultiIndex with the application number as the main index and the attributes as the subindex.

Notes

Each of the results DataFrames in the dictionary can easily be written to Excel using the pandas to_excel method.
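
For example, a minimal usage sketch (the SDF file paths are hypothetical, and a working SCALE installation is assumed for the TSUNAMI-IP run):

from pathlib import Path
from tsunami_ip_utils.comparisons import E_calculation_comparison

# Hypothetical SDF paths for two applications and one experiment
application_files = [Path('sphere_model_1.sdf'), Path('sphere_model_2.sdf')]
experiment_files = [Path('sphere_model_3.sdf')]

comparisons = E_calculation_comparison(application_files, experiment_files)

# One DataFrame per E index type; each can be written straight to Excel
for index_type, df in comparisons.items():
    df.to_excel(f'E_{index_type}_comparison.xlsx')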

tsunami_ip_utils.comparisons.correlation_comparison(integral_index_matrix, integral_index_name, application_files, experiment_files, method, base_library=None, perturbation_factors=None, num_perturbations=None, make_plot=True, num_cores=10, plot_objects_kwargs={}, matrix_plot_kwargs={})[source]

Function that compares the calculated similarity parameter \(c_k\) (calculated using the cross section sampling method) with the TSUNAMI-IP output for each application and each experiment. Note that the experiment SDFs and application SDFs must correspond with those in the TSUNAMI-IP input file.

Notes

  • If the chosen method is ‘perturbation’, the matrix plot can become extremely memory intensive, so it is recommended to set make_plot=False (if only the matrix of comparisons is desired) and/or to use num_cores=1 or a small number of perturbations to avoid memory issues.

Parameters:
  • integral_index_matrix (uarray) – The matrix representing the given integral index. Expected shape is (num_applications, num_experiments).

  • integral_index_name (str) – The name of the integral index (used for selecting the method for the plot). Allowed values are 'c_k' and 'E'.

  • application_files (Union[List[str], List[Path]]) – Paths to the input files for the application (required by the chosen method, either TSUNAMI .out files or TSUNAMI .sdf files).

  • experiment_files (Union[List[str], List[Path]]) – Paths to the input files for the experiment (required by the chosen method, either TSUNAMI .out files or TSUNAMI .sdf files).

  • method (str) – The method for visualizing the given integral index. Allowed values are 'perturbation', 'uncertainty_contributions_nuclide', 'uncertainty_contributions_nuclide_reaction', 'variance_contributions_nuclide', 'variance_contributions_nuclide_reaction', 'E_contributions_nuclide', 'E_contributions_nuclide_reaction', and 'c_k_contributions'.

  • base_library (Union[str, Path, None]) – Path to the base cross section library.

  • perturbation_factors (Union[str, Path, None]) – Path to the perturbation factors.

  • num_perturbations (Optional[int]) – Number of perturbations to generate.

  • make_plot (bool) – Whether to generate the matrix plot. Default is True.

  • num_cores (int) – If make_plot is False, the number of cores to use for multiprocessing. Default is two less than the number of available cores.

  • plot_objects_kwargs (dict) – Optional keyword arguments to pass when generating the plot objects.

  • matrix_plot_kwargs (dict) – Optional keyword arguments to pass when generating the matrix plot.

Return type:

Tuple[DataFrame, Any]

Returns:

  • comparisons

    A pandas DataFrame containing the calculated integral index values, the TSUNAMI-IP values, and the percent difference between the two values.

  • matrix_plot

    If make_plot=True, the matrix plot object containing the integral index values and the percent difference; otherwise, only comparisons is returned.
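
A hedged usage sketch combining read_integral_indices with this function; the file paths and the chosen method are placeholders:

from pathlib import Path
from tsunami_ip_utils.readers import read_integral_indices
from tsunami_ip_utils.comparisons import correlation_comparison

# Hypothetical paths; the TSUNAMI-IP output must correspond to these SDFs
application_files = [Path('sphere_model_1.sdf'), Path('sphere_model_2.sdf')]
experiment_files = [Path('sphere_model_3.sdf'), Path('sphere_model_4.sdf')]
tsunami_ip_output = Path('tsunami_ip.out')

# TSUNAMI-IP computed c_k matrix, shape (num_applications, num_experiments)
c_k = read_integral_indices(tsunami_ip_output)['c_k']

comparisons, matrix_plot = correlation_comparison(
    integral_index_matrix=c_k,
    integral_index_name='c_k',
    application_files=application_files,
    experiment_files=experiment_files,
    method='uncertainty_contributions_nuclide',
)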

tsunami_ip_utils.config module

A package-level configuration module.

tsunami_ip_utils.config.NUM_SAMPLES = 1000

Number of cross section perturbation factors available in SCALE.

tsunami_ip_utils.config.SDF_DATA_NAMES = ['isotope', 'reaction_type', 'zaid', 'reaction_mt', 'zone_number', 'zone_volume', 'energy_integrated_sensitivity', 'abs_sum_groupwise_sensitivities', 'sum_opposite_sign_groupwise_sensitivities', 'sensitivities', 'uncertainties']

Names of the data fields parsed by the SDF reader for TSUNAMI-B formatted SDF files.

tsunami_ip_utils.config.COMPARISON_HEATMAP_LABELS = {'Calculated': 'Calculated Integral Index', 'Percent Difference': 'Percent Difference (%)', 'TSUNAMI-IP': 'TSUNAMI-IP Integral Index'}

Labels for comparison heatmaps.

tsunami_ip_utils.config.generating_docs = True

Whether or not to kill interactive legend plots (Flask/Dash applications) after a short amount of time. This is not intended for use by users, but is necessary for generating documentation properly.

tsunami_ip_utils.config.cache_dir = PosixPath('/home/mlouis9/tsunami_ip_utils/docs/source/../../examples/data/cached_xs_data')

Directory to store cached cross section libraries and perturbations. This is also where the package will look for already-cached data, so be sure it corresponds to where your cached data actually is if you have manually changed this.
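
A sketch of overriding this module-level setting at runtime (the path is hypothetical; any override should be done before generating or caching perturbation points):

from pathlib import Path
import tsunami_ip_utils.config as config

# Point the package at an existing cache location (hypothetical path)
config.cache_dir = Path.home() / '.tsunami_ip_utils_cache'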

tsunami_ip_utils.integral_indices module

tsunami_ip_utils.integral_indices.calculate_E(application_filenames, experiment_filenames, reaction_type='all', uncertainties='manual')[source]

Calculates the similarity parameter E for each application with each available experiment, given the application and experiment SDF files.

Parameters:
  • application_filenames (Union[List[str], List[Path]]) – Paths to the application sdf files.

  • experiment_filenames (Union[List[str], List[Path]]) – Paths to the experiment sdf files.

  • reaction_type (str) – The type of reaction to consider in the calculation of E. Default is 'all' which considers all reactions.

  • uncertainties (str) – The type of uncertainty propagation to use; 'automatic' uses the uncertainties package for error propagation, while 'manual' (the default) performs manual error propagation.

Return type:

ndarray

Returns:

Similarity parameter for each application with each experiment, shape: (len(application_filenames), len(experiment_filenames))
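
A minimal sketch (the SDF paths are hypothetical):

from tsunami_ip_utils.integral_indices import calculate_E

E = calculate_E(
    ['sphere_model_1.sdf', 'sphere_model_2.sdf'],  # applications
    ['sphere_model_3.sdf'],                        # experiments
    reaction_type='all',
)
print(E.shape)  # (2, 1): one row per application, one column per experiment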

tsunami_ip_utils.integral_indices.calculate_E_contributions(application_filenames, experiment_filenames)[source]

Calculates the contributions to the similarity parameter E for each application with each available experiment on a nuclide basis and on a nuclide-reaction basis.

Parameters:
  • application_filenames (List[str]) – Paths to the application sdf files.

  • experiment_filenames (List[str]) – Paths to the experiment sdf files.

Return type:

Tuple[Dict[str, uarray], Dict[str, uarray]]

Returns:

  • E_contributions_nuclide

    Contributions to the similarity parameter E for each application with each experiment on a nuclide basis.

  • E_contributions_nuclide_reaction

    Contributions to the similarity parameter E for each application with each experiment on a nuclide-reaction basis.

tsunami_ip_utils.integral_indices.get_uncertainty_contributions(application_filenames=None, experiment_filenames=None, variance=False)[source]

Read the contributions to the uncertainty in \(k_{\text{eff}}\) (i.e. \(\frac{dk}{k}\)) for each application and each available experiment on a nuclide basis and on a nuclide-reaction basis from the provided TSUNAMI-IP .out or .sdf files.

Parameters:
  • application_filenames (Union[List[str], List[Path], None]) – (Optional) Paths to the application output (.out) or .sdf files.

  • experiment_filenames (Union[List[str], List[Path], None]) – (Optional) Paths to the experiment output (.out) or .sdf files.

  • variance (bool) – Whether the contributions to the nuclear data induced variance should be returned. Default is False.

Return type:

Tuple[Dict[str, List[uarray]], Dict[str, List[uarray]]]

Returns:

  • uncertainty_contributions_nuclide

    List of contributions to the uncertainty in \(k_{\text{eff}}\) for each application and each experiment on a nuclide basis. Keyed by 'application' and 'experiment'.

  • uncertainty_contributions_nuclide_reaction

    List of contributions to the uncertainty in \(k_{\text{eff}}\) for each application and each experiment on a nuclide-reaction basis. Keyed by 'application' and 'experiment'.

Notes

If either the application or experiment filenames are not provided, then the corresponding output will be an empty list.

Theory

The nuclear-data induced variance in \(k_{\text{eff}}\) (defined in Equation 6.3.34) can be decomposed into contributions from each nuclide-reaction covariance via Equation 6.3.35 in the SCALE manual. The total uncertainty in \(k_{\text{eff}}\) (as well as the contributions on a nuclide-reaction-wise basis) can be calculated from these two definitions by simply taking the square root. For nuclide-reaction covariances that are not principal submatrices, the contribution to the variance may be negative (as they are not guaranteed to be positive definite), and so the uncertainty contribution may be (formally) imaginary. However, these contributions physically represent anticorrelations in the nuclear data, so TSUNAMI reports them as negative values, with a note that there is a special rule for handling these values (see the footer of the Uncertainty Information section in Example 6.6.3).
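
A small sketch of this sign convention (an illustration, not TSUNAMI's internal implementation): report the square root of the magnitude of each variance contribution, carrying the sign of the contribution.

import numpy as np

def signed_uncertainty(variance_contribution):
    # Negative variance contributions (anticorrelations) are reported as negative values
    return np.sign(variance_contribution) * np.sqrt(np.abs(variance_contribution))

print(signed_uncertainty(np.array([4.0e-6, -1.0e-6])))  # [ 0.002 -0.001]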

tsunami_ip_utils.integral_indices.get_integral_indices(application_sdfs, experiment_sdfs, coverx_library='252groupcov7.1')[source]

Gets the TSUNAMI-IP computed integral indices for a set of application and experiment SDFs.

Parameters:
  • application_sdfs (Union[List[str], List[Path]]) – List of paths to the application SDF files.

  • experiment_sdfs (Union[List[str], List[Path]]) – List of paths to the experiment SDF files.

  • coverx_library – The covariance library to use. Default is '252groupcov7.1'. This must be explicitly specified, and must correspond to the multigroup library used to generate the SDF files.

Return type:

Dict[str, uarray]

Returns:

Dictionary containing the integral indices for the applications and experiments. The keys are 'c_k', 'E_total', 'E_fission', 'E_capture', and 'E_scatter'.

tsunami_ip_utils.perturbations module

This module is used for generating cross section perturbations and combining them with the sensitivity profiles for a given application-experiment pair to generate a similarity scatter plot.

tsunami_ip_utils.perturbations.generate_points(application_path, experiment_path, base_library, perturbation_factors, num_perturbations)[source]

Generates points for a similarity scatter plot using the nuclear data sampling method.

Parameters:
  • application_path (Union[Path, List[Path]]) – Path(s) to the application sensitivity profile.

  • experiment_path (Union[Path, List[Path]]) – Path(s) to the experiment sensitivity profile.

  • base_library (Union[str, Path]) – Path to the base cross section library.

  • perturbation_factors (Union[str, Path]) – Path to the perturbation factors directory.

  • num_perturbations (int) – Number of perturbation points to generate.

Return type:

Union[List[Tuple[float, float]], ndarray[List[Tuple[float, float]]]]

Returns:

A list of points for the similarity scatter plot.

Notes

  • This function will automatically cache the base cross section library and the perturbed cross section libraries in the user’s home directory under the .tsunami_ip_utils_cache directory if not already cached. Caching is recommended if perturbation points are to be generated multiple times, because the I/O overhead of dumping and reading the base and perturbed cross section libraries can be significant.

  • This function can also generate a matrix of points for a given set of applications and experiments (for making a matrix plot) by passing lists of paths for the application and experiment sensitivity profiles.

Theory

The nuclear data sampling method involves randomly sampling cross section libraries using the AMPX tool clarolplus to calculate a perturbed cross section library \(\Delta \boldsymbol{\sigma}_n = \overline{\boldsymbol{\sigma}} - \boldsymbol{\sigma}_n\), where \(\boldsymbol{\sigma}_n\) is the \(n\) th randomly sampled cross section library, and \(\overline{\boldsymbol{\sigma}}\) is the base cross section library, consisting of the mean values of all of the cross sections (e.g. the SCALE 252-group ENDF-V7.1 library). This perturbed cross section library (a vector consisting of the nuclide-reaction-group-wise perturbations to the cross sections) is dotted with the sensitivity vector for the application, \(x_n = \boldsymbol{S}_A \cdot \Delta \boldsymbol{\sigma}_n\), and for the experiment, \(y_n = \boldsymbol{S}_E \cdot \Delta \boldsymbol{\sigma}_n\), and the resulting points \((x_n, y_n)\) are plotted on a scatter plot whose Pearson correlation coefficient is meant to correspond to the \(c_k\) value computed by TSUNAMI-IP.
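
The following toy sketch illustrates the procedure with random stand-ins for the sensitivity vectors and perturbations (not real SCALE data):

import numpy as np

rng = np.random.default_rng(42)
num_parameters, num_samples = 500, 200

S_A = rng.normal(scale=1e-3, size=num_parameters)  # application sensitivity vector
S_E = rng.normal(scale=1e-3, size=num_parameters)  # experiment sensitivity vector

points = []
for _ in range(num_samples):
    delta_sigma = rng.normal(scale=0.05, size=num_parameters)  # one sampled perturbation vector
    points.append((S_A @ delta_sigma, S_E @ delta_sigma))

x, y = np.array(points).T
pearson = np.corrcoef(x, y)[0, 1]  # plays the role of c_k for this toy problem
print(pearson)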

tsunami_ip_utils.perturbations.cache_all_libraries(base_library, perturbation_factors, reset_cache=False, num_cores=6)[source]

Caches the base cross section library and the perturbed cross section libraries for a given base library and set of perturbation factors.

Parameters:
  • base_library (Path) – Path to the base cross section library.

  • perturbation_factors (Path) – Path to the cross section perturbation factors (used to generate the perturbed libraries).

  • reset_cache (bool) – Whether to reset the cache or not (default is False).

  • num_cores (int) – The number of cores to use for caching the perturbed libraries in parallel (default is half the number of cores available).

Return type:

None

Returns:

This function does not return a value.

Notes

  • This function will cache the base cross section library and the perturbed cross section libraries in the user’s home directory under the .tsunami_ip_utils_cache directory. If the user wishes to reset the cache, they can do so by setting the reset_cache parameter to True in the cache_all_libraries() function.

  • The caches can be very large, so make sure that sufficient space is available. For example, caching SCALE’s 252-group ENDF-v7.1 library and all of the perturbed libraries currently available in SCALE (1000 samples) requires 48 GB of space, and for ENDF-v8.0, it requires 76 GB of space.

  • The time taken to cache the libraries can be significant (~5 hours on 6 cores, but this is hardware dependent), but when caching the libraries a progress bar will be displayed with a time estimate.

  • Note that using num_cores greater than half the number of cores available on your system may lead to excessive memory usage, so proceed with caution.
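
A usage sketch (the SCALE data paths are hypothetical, and a working SCALE/AMPX installation is assumed):

from pathlib import Path
from tsunami_ip_utils.perturbations import cache_all_libraries

base_library = Path('/path/to/SCALE/data/scale.rev08.252groupcov7.1')
perturbation_factors = Path('/path/to/SCALE/data/perturb/252n.v7.1')

cache_all_libraries(base_library, perturbation_factors, reset_cache=False)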

tsunami_ip_utils.readers module

class tsunami_ip_utils.readers.SdfReader(filename)[source]

Bases: object

A class for reading TSUNAMI-B Sensitivity Data Files (SDFs, i.e. .sdf files produced by TSUNAMI-3D Monte Carlo transport simulations).

The format for TSUNAMI-B SDF files is documented in the SCALE manual.

Notes

The SDF reader currently does not support TSUNAMI-A formatted SDF files (produced by deterministic transport simulations).

__init__(filename)[source]

Create a TSUNAMI-B SDF reader object from the given filename

Parameters:

filename (Union[str, Path]) – Path to the sdf file.

energy_boundaries: ndarray

Boundaries for the energy groups

sdf_data: List[dict]

List of dictionaries containing the sensitivity profiles and other derived/descriptive data. The dictionary keys are given by SDF_DATA_NAMES = ['isotope', 'reaction_type', 'zaid', 'reaction_mt', 'zone_number', 'zone_volume', 'energy_integrated_sensitivity', 'abs_sum_groupwise_sensitivities', 'sum_opposite_sign_groupwise_sensitivities', 'sensitivities', 'uncertainties'].

class tsunami_ip_utils.readers.RegionIntegratedSdfReader(filename)[source]

Bases: SdfReader

Reads region integrated TSUNAMI-B sensitivity data files produced by TSUNAMI-3D. Useful when the spatial dependence of sensitivity is not important.

__init__(filename)[source]

Create a TSUNAMI-B region integrated SDF reader object from the given filename

Parameters:

filename (Union[str, Path]) – Path to the sdf file.

Examples

>>> reader = RegionIntegratedSdfReader('tests/example_files/sphere_model_1.sdf')
>>> reader.sdf_data[0]
{'isotope': 'u-234', 'reaction_type': 'total', 'zaid': '92234', 'reaction_mt': '1', 'zone_number': 0, 'zone_volume': 0, 'energy_integrated_sensitivity': 0.008984201+/-0.0002456359, 'abs_sum_groupwise_sensitivities': 0.009319556, 'sum_opposite_sign_groupwise_sensitivities': -0.0001676775+/-4.959921e-05, 'sensitivities': array([0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       -4.393147e-10+/-4.39204e-10, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       -1.0871e-09+/-1.086821e-09, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, -8.666447e-10+/-8.664239e-10, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, -2.021382e-09+/-2.020862e-09, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       -2.351273e-11+/-2.350687e-11, 1.707797e-08+/-1.707371e-08,
       -7.172835e-10+/-7.171047e-10, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       -1.89505e-09+/-1.894573e-09, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, -5.624288e-09+/-5.622858e-09,
       -1.859026e-09+/-1.858552e-09, -3.250635e-12+/-3.249808e-12,
       -5.068914e-09+/-5.067634e-09, -4.871941e-09+/-4.87071e-09, 0.0+/-0,
       0.0+/-0, -6.871712e-09+/-4.952527e-09,
       -1.545212e-09+/-1.544828e-09, -6.40573e-10+/-6.404087e-10, 0.0+/-0,
       -4.935283e-09+/-2.901975e-09, -1.843172e-08+/-1.251576e-08,
       -6.71445e-09+/-6.71273e-09, -1.388548e-07+/-4.274043e-08,
       -4.241146e-08+/-1.762258e-08, -2.999543e-08+/-2.998778e-08,
       -1.520626e-07+/-5.006299e-07, -3.266883e-07+/-1.449028e-07,
       7.846369e-07+/-8.803142e-07, -1.519928e-07+/-1.07218e-07,
       -4.081559e-07+/-1.873273e-07, -3.434015e-08+/-1.027987e-06,
       -2.516069e-07+/-1.73851e-07, -8.333603e-07+/-2.853665e-07,
       1.439084e-06+/-1.77756e-06, -2.836471e-06+/-6.774013e-07,
       -7.692097e-07+/-2.280938e-07, -3.207971e-06+/-5.799841e-06,
       -8.805729e-06+/-3.719169e-06, -3.810922e-06+/-3.29137e-06,
       -1.671083e-05+/-5.398771e-06, -8.131938e-06+/-8.703075e-06,
       -1.213469e-05+/-6.712768e-06, 2.241294e-05+/-1.816346e-05,
       -3.46584e-05+/-2.47274e-05, -2.504294e-05+/-1.322211e-05,
       2.988232e-06+/-9.999848e-06, 9.248787e-06+/-2.051111e-05,
       8.657588e-06+/-2.534945e-05, -9.951473e-06+/-9.817888e-06,
       3.696361e-06+/-2.029254e-05, 8.702776e-06+/-1.343905e-05,
       1.237458e-05+/-3.052268e-05, 9.421495e-05+/-4.640119e-05,
       -3.918389e-05+/-3.697521e-05, 0.0001051884+/-6.277818e-05,
       0.0001998479+/-6.489579e-05, 0.0001497279+/-5.563772e-05,
       0.0001947749+/-5.449071e-05, 7.17222e-05+/-2.914425e-05,
       0.000134732+/-2.9629e-05, 0.0001109158+/-3.34169e-05,
       0.0001089335+/-2.818967e-05, 0.0003405998+/-4.422492e-05,
       0.0001581347+/-2.790862e-05, 0.0001519637+/-2.893234e-05,
       0.0002925294+/-4.266703e-05, 3.390917e-05+/-1.328072e-05,
       0.0003311925+/-4.068107e-05, 0.0003964351+/-3.973347e-05,
       0.0002032989+/-2.820316e-05, 4.749484e-05+/-1.391542e-05,
       7.308714e-05+/-1.919152e-05, 8.477883e-05+/-1.880586e-05,
       0.0003783144+/-3.910496e-05, 0.0002999691+/-3.663134e-05,
       0.0002810895+/-3.531906e-05, 0.0001691159+/-2.466477e-05,
       0.0001964518+/-2.750338e-05, 0.0001036065+/-1.975991e-05,
       7.118574e-05+/-1.79444e-05, 0.0003142769+/-3.554042e-05,
       0.0006935364+/-5.715178e-05, 0.0008185451+/-5.864059e-05,
       0.0001756991+/-2.74122e-05, 0.0005926753+/-4.821636e-05,
       0.001032059+/-6.463625e-05, 0.0001732703+/-2.667443e-05,
       0.0003230355+/-3.82033e-05, 0.0001058208+/-2.194266e-05,
       4.386201e-05+/-1.377494e-05, 2.231817e-05+/-7.640867e-06,
       3.150126e-06+/-3.827818e-06, 9.240951e-08+/-1.080839e-07, 0.0+/-0,
       0.0+/-0, 0.0+/-0], dtype=object)}
>>> reader.energy_boundaries
array([2.000e+07, 1.733e+07, 1.568e+07, 1.455e+07, 1.384e+07, 1.284e+07,
       1.000e+07, 8.187e+06, 6.434e+06, 4.800e+06, 4.304e+06, 3.000e+06,
       2.479e+06, 2.354e+06, 1.850e+06, 1.500e+06, 1.400e+06, 1.356e+06,
       1.317e+06, 1.250e+06, 1.200e+06, 1.100e+06, 1.010e+06, 9.200e+05,
       9.000e+05, 8.750e+05, 8.611e+05, 8.200e+05, 7.500e+05, 6.790e+05,
       6.700e+05, 6.000e+05, 5.730e+05, 5.500e+05, 4.920e+05, 4.700e+05,
       4.400e+05, 4.200e+05, 4.000e+05, 3.300e+05, 2.700e+05, 2.000e+05,
       1.490e+05, 1.283e+05, 1.000e+05, 8.500e+04, 8.200e+04, 7.500e+04,
       7.300e+04, 6.000e+04, 5.200e+04, 5.000e+04, 4.500e+04, 3.000e+04,
       2.000e+04, 1.700e+04, 1.300e+04, 9.500e+03, 8.030e+03, 5.700e+03,
       3.900e+03, 3.740e+03, 3.000e+03, 2.500e+03, 2.250e+03, 2.200e+03,
       1.800e+03, 1.550e+03, 1.500e+03, 1.150e+03, 9.500e+02, 6.830e+02,
       6.700e+02, 5.500e+02, 3.050e+02, 2.850e+02, 2.400e+02, 2.200e+02,
       2.095e+02, 2.074e+02, 2.020e+02, 1.930e+02, 1.915e+02, 1.885e+02,
       1.877e+02, 1.800e+02, 1.700e+02, 1.430e+02, 1.220e+02, 1.190e+02,
       1.175e+02, 1.160e+02, 1.130e+02, 1.080e+02, 1.050e+02, 1.012e+02,
       9.700e+01, 9.000e+01, 8.170e+01, 8.000e+01, 7.600e+01, 7.200e+01,
       6.750e+01, 6.500e+01, 6.300e+01, 6.100e+01, 5.800e+01, 5.340e+01,
       5.060e+01, 4.830e+01, 4.520e+01, 4.400e+01, 4.240e+01, 4.100e+01,
       3.960e+01, 3.910e+01, 3.800e+01, 3.763e+01, 3.727e+01, 3.713e+01,
       3.700e+01, 3.600e+01, 3.550e+01, 3.500e+01, 3.375e+01, 3.325e+01,
       3.175e+01, 3.125e+01, 3.000e+01, 2.750e+01, 2.500e+01, 2.250e+01,
       2.175e+01, 2.120e+01, 2.050e+01, 2.000e+01, 1.940e+01, 1.850e+01,
       1.700e+01, 1.600e+01, 1.440e+01, 1.290e+01, 1.190e+01, 1.150e+01,
       1.000e+01, 9.100e+00, 8.100e+00, 7.150e+00, 7.000e+00, 6.875e+00,
       6.750e+00, 6.500e+00, 6.250e+00, 6.000e+00, 5.400e+00, 5.000e+00,
       4.700e+00, 4.100e+00, 3.730e+00, 3.500e+00, 3.200e+00, 3.100e+00,
       3.000e+00, 2.970e+00, 2.870e+00, 2.770e+00, 2.670e+00, 2.570e+00,
       2.470e+00, 2.380e+00, 2.300e+00, 2.210e+00, 2.120e+00, 2.000e+00,
       1.940e+00, 1.860e+00, 1.770e+00, 1.680e+00, 1.590e+00, 1.500e+00,
       1.450e+00, 1.400e+00, 1.350e+00, 1.300e+00, 1.250e+00, 1.225e+00,
       1.200e+00, 1.175e+00, 1.150e+00, 1.140e+00, 1.130e+00, 1.120e+00,
       1.110e+00, 1.100e+00, 1.090e+00, 1.080e+00, 1.070e+00, 1.060e+00,
       1.050e+00, 1.040e+00, 1.030e+00, 1.020e+00, 1.010e+00, 1.000e+00,
       9.750e-01, 9.500e-01, 9.250e-01, 9.000e-01, 8.500e-01, 8.000e-01,
       7.500e-01, 7.000e-01, 6.500e-01, 6.250e-01, 6.000e-01, 5.500e-01,
       5.000e-01, 4.500e-01, 4.000e-01, 3.750e-01, 3.500e-01, 3.250e-01,
       3.000e-01, 2.750e-01, 2.500e-01, 2.250e-01, 2.000e-01, 1.750e-01,
       1.500e-01, 1.250e-01, 1.000e-01, 9.000e-02, 8.000e-02, 7.000e-02,
       6.000e-02, 5.000e-02, 4.000e-02, 3.000e-02, 2.530e-02, 1.000e-02,
       7.500e-03, 5.000e-03, 4.000e-03, 3.000e-03, 2.500e-03, 2.000e-03,
       1.500e-03, 1.200e-03, 1.000e-03, 7.500e-04, 5.000e-04, 1.000e-04,
       1.000e-05])
filename: Union[str, Path]

Path to the sdf file.

sdf_data: Union[List[dict], Dict[str, Dict[str, dict]]]

Collection of region integrated SDF profiles. Only includes SDF profiles with zone_number == 0 and zone_volume == 0. This can either be a list or a twice-nested dictionary (keyed first by nuclide and then by reaction type) of dictionaries keyed by SDF_DATA_NAMES = ['isotope', 'reaction_type', 'zaid', 'reaction_mt', 'zone_number', 'zone_volume', 'energy_integrated_sensitivity', 'abs_sum_groupwise_sensitivities', 'sum_opposite_sign_groupwise_sensitivities', 'sensitivities', 'uncertainties'].

convert_to_dict(key='names')[source]

Converts the sdf data into a dictionary keyed by nuclide-reaction pair or by ZAID and reaction MT.

Parameters:

key (str) – The key to use for the dictionary. Default is 'names' which uses the isotope name and reaction type. If 'numbers' is supplied instead then the ZAID and reaction MT are used.

Examples

>>> reader = RegionIntegratedSdfReader('tests/example_files/sphere_model_1.sdf')
>>> reader.convert_to_dict()
<tsunami_ip_utils.readers.RegionIntegratedSdfReader object from tests/example_files/sphere_model_1.sdf>
>>> reader.sdf_data.keys()
dict_keys(['u-234', 'u-235', 'u-238'])
>>> reader.sdf_data['u-234'].keys()
dict_keys(['total', 'elastic', "n,n'", 'n,2n', 'fission', 'capture', 'n,gamma', 'chi', 'nubar'])
>>> reader.sdf_data['u-234']['total']
{'isotope': 'u-234', 'reaction_type': 'total', 'zaid': '92234', 'reaction_mt': '1', 'zone_number': 0, 'zone_volume': 0, 'energy_integrated_sensitivity': 0.008984201+/-0.0002456359, 'abs_sum_groupwise_sensitivities': 0.009319556, 'sum_opposite_sign_groupwise_sensitivities': -0.0001676775+/-4.959921e-05, 'sensitivities': array([0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       -4.393147e-10+/-4.39204e-10, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       -1.0871e-09+/-1.086821e-09, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, -8.666447e-10+/-8.664239e-10, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, -2.021382e-09+/-2.020862e-09, 0.0+/-0, 0.0+/-0,
       0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       -2.351273e-11+/-2.350687e-11, 1.707797e-08+/-1.707371e-08,
       -7.172835e-10+/-7.171047e-10, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       -1.89505e-09+/-1.894573e-09, 0.0+/-0, 0.0+/-0, 0.0+/-0, 0.0+/-0,
       0.0+/-0, -5.624288e-09+/-5.622858e-09,
       -1.859026e-09+/-1.858552e-09, -3.250635e-12+/-3.249808e-12,
       -5.068914e-09+/-5.067634e-09, -4.871941e-09+/-4.87071e-09, 0.0+/-0,
       0.0+/-0, -6.871712e-09+/-4.952527e-09,
       -1.545212e-09+/-1.544828e-09, -6.40573e-10+/-6.404087e-10, 0.0+/-0,
       -4.935283e-09+/-2.901975e-09, -1.843172e-08+/-1.251576e-08,
       -6.71445e-09+/-6.71273e-09, -1.388548e-07+/-4.274043e-08,
       -4.241146e-08+/-1.762258e-08, -2.999543e-08+/-2.998778e-08,
       -1.520626e-07+/-5.006299e-07, -3.266883e-07+/-1.449028e-07,
       7.846369e-07+/-8.803142e-07, -1.519928e-07+/-1.07218e-07,
       -4.081559e-07+/-1.873273e-07, -3.434015e-08+/-1.027987e-06,
       -2.516069e-07+/-1.73851e-07, -8.333603e-07+/-2.853665e-07,
       1.439084e-06+/-1.77756e-06, -2.836471e-06+/-6.774013e-07,
       -7.692097e-07+/-2.280938e-07, -3.207971e-06+/-5.799841e-06,
       -8.805729e-06+/-3.719169e-06, -3.810922e-06+/-3.29137e-06,
       -1.671083e-05+/-5.398771e-06, -8.131938e-06+/-8.703075e-06,
       -1.213469e-05+/-6.712768e-06, 2.241294e-05+/-1.816346e-05,
       -3.46584e-05+/-2.47274e-05, -2.504294e-05+/-1.322211e-05,
       2.988232e-06+/-9.999848e-06, 9.248787e-06+/-2.051111e-05,
       8.657588e-06+/-2.534945e-05, -9.951473e-06+/-9.817888e-06,
       3.696361e-06+/-2.029254e-05, 8.702776e-06+/-1.343905e-05,
       1.237458e-05+/-3.052268e-05, 9.421495e-05+/-4.640119e-05,
       -3.918389e-05+/-3.697521e-05, 0.0001051884+/-6.277818e-05,
       0.0001998479+/-6.489579e-05, 0.0001497279+/-5.563772e-05,
       0.0001947749+/-5.449071e-05, 7.17222e-05+/-2.914425e-05,
       0.000134732+/-2.9629e-05, 0.0001109158+/-3.34169e-05,
       0.0001089335+/-2.818967e-05, 0.0003405998+/-4.422492e-05,
       0.0001581347+/-2.790862e-05, 0.0001519637+/-2.893234e-05,
       0.0002925294+/-4.266703e-05, 3.390917e-05+/-1.328072e-05,
       0.0003311925+/-4.068107e-05, 0.0003964351+/-3.973347e-05,
       0.0002032989+/-2.820316e-05, 4.749484e-05+/-1.391542e-05,
       7.308714e-05+/-1.919152e-05, 8.477883e-05+/-1.880586e-05,
       0.0003783144+/-3.910496e-05, 0.0002999691+/-3.663134e-05,
       0.0002810895+/-3.531906e-05, 0.0001691159+/-2.466477e-05,
       0.0001964518+/-2.750338e-05, 0.0001036065+/-1.975991e-05,
       7.118574e-05+/-1.79444e-05, 0.0003142769+/-3.554042e-05,
       0.0006935364+/-5.715178e-05, 0.0008185451+/-5.864059e-05,
       0.0001756991+/-2.74122e-05, 0.0005926753+/-4.821636e-05,
       0.001032059+/-6.463625e-05, 0.0001732703+/-2.667443e-05,
       0.0003230355+/-3.82033e-05, 0.0001058208+/-2.194266e-05,
       4.386201e-05+/-1.377494e-05, 2.231817e-05+/-7.640867e-06,
       3.150126e-06+/-3.827818e-06, 9.240951e-08+/-1.080839e-07, 0.0+/-0,
       0.0+/-0, 0.0+/-0], dtype=object)}
energy_boundaries: ndarray

Boundaries for the energy groups

get_sensitivity_profiles(reaction_type='all')[source]

Returns the sensitivity profiles for each nuclide-reaction pair in a list in the order they appear in the sdf_data.

Parameters:

reaction_type (str) – The type of reaction to consider. Default is ‘all’ which considers all reactions.

Return type:

List[uarray]

Returns:

List of sensitivity profiles for each nuclide-reaction pair.

Notes

This method is useful for generating sensitivity profiles for each nuclide-reaction pair for a given system, e.g. for computing \(E\) or \(C_k\) via \(C_k = \boldsymbol{S}_A^T \boldsymbol{C}_{\alpha, \alpha} \boldsymbol{S}_B\). However, it is important to note that the sensitivity vectors obtained this way are ordered according to the order in which the nuclide-reaction pairs appear in the SDF file.

Examples

>>> reader = RegionIntegratedSdfReader('tests/example_files/sphere_model_1.sdf')
>>> first_sensitivity_profile = reader.get_sensitivity_profiles()[0]
>>> first_sensitivity_profile[200:]
array([-2.504294e-05+/-1.322211e-05, 2.988232e-06+/-9.999848e-06,
       9.248787e-06+/-2.051111e-05, 8.657588e-06+/-2.534945e-05,
       -9.951473e-06+/-9.817888e-06, 3.696361e-06+/-2.029254e-05,
       8.702776e-06+/-1.343905e-05, 1.237458e-05+/-3.052268e-05,
       9.421495e-05+/-4.640119e-05, -3.918389e-05+/-3.697521e-05,
       0.0001051884+/-6.277818e-05, 0.0001998479+/-6.489579e-05,
       0.0001497279+/-5.563772e-05, 0.0001947749+/-5.449071e-05,
       7.17222e-05+/-2.914425e-05, 0.000134732+/-2.9629e-05,
       0.0001109158+/-3.34169e-05, 0.0001089335+/-2.818967e-05,
       0.0003405998+/-4.422492e-05, 0.0001581347+/-2.790862e-05,
       0.0001519637+/-2.893234e-05, 0.0002925294+/-4.266703e-05,
       3.390917e-05+/-1.328072e-05, 0.0003311925+/-4.068107e-05,
       0.0003964351+/-3.973347e-05, 0.0002032989+/-2.820316e-05,
       4.749484e-05+/-1.391542e-05, 7.308714e-05+/-1.919152e-05,
       8.477883e-05+/-1.880586e-05, 0.0003783144+/-3.910496e-05,
       0.0002999691+/-3.663134e-05, 0.0002810895+/-3.531906e-05,
       0.0001691159+/-2.466477e-05, 0.0001964518+/-2.750338e-05,
       0.0001036065+/-1.975991e-05, 7.118574e-05+/-1.79444e-05,
       0.0003142769+/-3.554042e-05, 0.0006935364+/-5.715178e-05,
       0.0008185451+/-5.864059e-05, 0.0001756991+/-2.74122e-05,
       0.0005926753+/-4.821636e-05, 0.001032059+/-6.463625e-05,
       0.0001732703+/-2.667443e-05, 0.0003230355+/-3.82033e-05,
       0.0001058208+/-2.194266e-05, 4.386201e-05+/-1.377494e-05,
       2.231817e-05+/-7.640867e-06, 3.150126e-06+/-3.827818e-06,
       9.240951e-08+/-1.080839e-07, 0.0+/-0, 0.0+/-0, 0.0+/-0],
      dtype=object)
tsunami_ip_utils.readers.read_covariance_matrix(filename)[source]
tsunami_ip_utils.readers.read_uncertainty_contributions_out(filename)[source]

Reads the output file from TSUNAMI-3D and returns the uncertainty contributions for each nuclide-reaction covariance.

Parameters:

filename (Union[str, Path]) – Path to the TSUNAMI-3D output file.

Return type:

Tuple[List[dict], List[dict]]

Returns:

  • isotope_totals

    List of dictionaries with keys: 'isotope' and 'contribution'.

  • isotope_reaction

    List of dictionaries with keys: 'isotope', 'reaction_type' and 'contribution'.

tsunami_ip_utils.readers.read_uncertainty_contributions_sdf(filenames)[source]

Reads the uncertainty contributions from a list of TSUNAMI-B SDF files and returns the contributions for each nuclide-reaction covariance by first running a TSUNAMI-IP calculation to generate the extended uncertainty edit.

Parameters:

filenames (List[Path]) – List of paths to the SDF files.

Return type:

Tuple[List[List[dict]], List[List[dict]]]

Returns:

  • isotope_totals

    List of nuclide-wise contributions. The outer list corresponds to the different SDF files, and has length: len(filenames). The inner list contains dictionaries with keys: 'isotope' and 'contribution'.

  • isotope_reaction

    List of nuclide-reaction-wise contributions. The outer list corresponds to the different SDF files, and has length: len(filenames). The inner list contains dictionaries with keys: 'isotope', 'reaction_type' and 'contribution'.

Examples

>>> filenames = [Path('tests/example_files/sphere_model_1.sdf'), Path('tests/example_files/sphere_model_2.sdf')]
>>> isotope_totals, isotope_reaction = read_uncertainty_contributions_sdf(filenames)
>>> isotope_totals[0][0]
{'isotope': 'u-235 - u-235', 'contribution': 1.1547475051845695+/-0.000896155210593727}
>>> isotope_reaction[0][0]
{'isotope': 'u-235 - u-235', 'reaction_type': 'n,gamma - n,gamma', 'contribution': 1.0304+/-0.00094372}
tsunami_ip_utils.readers.read_integral_indices(filename)[source]

Reads the output file from TSUNAMI-IP and returns the integral values for each application.

Parameters:

filename (Union[str, Path]) – Path to the TSUNAMI-IP output file.

Return type:

Dict[str, uarray]

Returns:

Integral matrices for each integral index type. The shape of the matrices are (num_applications, num_experiments). Keys are 'c_k', 'E_total', 'E_fission', 'E_capture', and 'E_scatter'.

Notes

Currently, this function only reads \(c_k\), \(E_{\text{total}}\), \(E_{\text{fission}}\), \(E_{\text{capture}}\), and \(E_{\text{scatter}}\). If any of these are missing from the output file, the function will raise an error. To ensure these are present, please include at least

read parameters
    e c
    prtparts
    values
end parameters

in the TSUNAMI-IP input file.

Examples

>>> filename = Path('tests/example_files/tsunami_ip.out')
>>> integral_indices = read_integral_indices(filename)
>>> integral_indices['c_k']
array([[1.0+/-0.0024, 0.9986+/-0.0024, 0.9941+/-0.0025, 0.9892+/-0.0025,
        0.9729+/-0.0026, 0.9701+/-0.0026, 0.9582+/-0.0026,
        0.9032+/-0.0009, 0.896+/-0.0008, 0.8902+/-0.0008,
        0.8883+/-0.0007, 0.8353+/-0.0008],
       [0.9986+/-0.0024, 1.0+/-0.0025, 0.9976+/-0.0025, 0.9939+/-0.0026,
        0.9798+/-0.0026, 0.9772+/-0.0026, 0.9663+/-0.0026,
        0.9036+/-0.0009, 0.898+/-0.0008, 0.8939+/-0.0008,
        0.8924+/-0.0007, 0.8347+/-0.0008],
       [0.9941+/-0.0025, 0.9976+/-0.0025, 1.0+/-0.0026, 0.9991+/-0.0026,
        0.9909+/-0.0025, 0.9891+/-0.0026, 0.9811+/-0.0026,
        0.9192+/-0.0008, 0.9145+/-0.0008, 0.9115+/-0.0008,
        0.9102+/-0.0007, 0.8491+/-0.0008],
       [0.9892+/-0.0025, 0.9939+/-0.0026, 0.9991+/-0.0026, 1.0+/-0.0025,
        0.9956+/-0.0025, 0.9943+/-0.0025, 0.9882+/-0.0025,
        0.927+/-0.0008, 0.9229+/-0.0008, 0.9204+/-0.0007,
        0.9192+/-0.0007, 0.8564+/-0.0008],
       [0.9729+/-0.0026, 0.9798+/-0.0026, 0.9909+/-0.0025,
        0.9956+/-0.0025, 1.0+/-0.0024, 0.9998+/-0.0024, 0.9981+/-0.0024,
        0.9428+/-0.0008, 0.9397+/-0.0008, 0.9386+/-0.0007,
        0.9374+/-0.0007, 0.871+/-0.0008],
       [0.9701+/-0.0026, 0.9772+/-0.0026, 0.9891+/-0.0026,
        0.9943+/-0.0025, 0.9998+/-0.0024, 1.0+/-0.0024, 0.9988+/-0.0024,
        0.9437+/-0.0008, 0.9406+/-0.0008, 0.9396+/-0.0007,
        0.9385+/-0.0007, 0.8717+/-0.0008],
       [0.9582+/-0.0026, 0.9663+/-0.0026, 0.9811+/-0.0026,
        0.9882+/-0.0025, 0.9981+/-0.0024, 0.9988+/-0.0024, 1.0+/-0.0024,
        0.9482+/-0.0008, 0.9455+/-0.0008, 0.945+/-0.0007,
        0.9439+/-0.0007, 0.8761+/-0.0008],
       [0.9032+/-0.001, 0.9036+/-0.001, 0.9192+/-0.001, 0.927+/-0.001,
        0.9428+/-0.0009, 0.9437+/-0.0009, 0.9482+/-0.0009, 1.0+/-0.001,
        0.9988+/-0.0009, 0.996+/-0.0009, 0.9945+/-0.0009,
        0.9226+/-0.0008],
       [0.896+/-0.001, 0.898+/-0.001, 0.9145+/-0.0009, 0.9229+/-0.0009,
        0.9397+/-0.0009, 0.9406+/-0.0009, 0.9455+/-0.0009,
        0.9988+/-0.0009, 1.0+/-0.0009, 0.999+/-0.0009, 0.9982+/-0.0008,
        0.92+/-0.0008],
       [0.8902+/-0.0009, 0.8939+/-0.0009, 0.9115+/-0.0009,
        0.9204+/-0.0009, 0.9386+/-0.0008, 0.9396+/-0.0008,
        0.945+/-0.0008, 0.996+/-0.0009, 0.999+/-0.0009, 1.0+/-0.0008,
        0.9998+/-0.0008, 0.9169+/-0.0008],
       [0.8883+/-0.0009, 0.8924+/-0.0009, 0.9102+/-0.0009,
        0.9192+/-0.0009, 0.9374+/-0.0008, 0.9385+/-0.0008,
        0.9439+/-0.0008, 0.9945+/-0.0009, 0.9982+/-0.0008,
        0.9998+/-0.0008, 1.0+/-0.0008, 0.9157+/-0.0008],
       [0.8353+/-0.001, 0.8347+/-0.001, 0.8491+/-0.001, 0.8564+/-0.0009,
        0.871+/-0.0009, 0.8717+/-0.0009, 0.8761+/-0.0009,
        0.9226+/-0.0008, 0.92+/-0.0007, 0.9169+/-0.0007, 0.9157+/-0.0007,
        1.0+/-0.0011]], dtype=object)
>>> application_1_with_experiment_2_ck = integral_indices['c_k'][0, 1]
>>> application_1_with_experiment_2_ck
0.9986+/-0.0024
tsunami_ip_utils.readers.read_region_integrated_h5_sdf(filename)[source]

Reads all region integrated SDFs from an HDF5 (.h5) formatted TSUNAMI-B SDF file and returns a dictionary of the data.

Parameters:

filename (Union[str, Path]) – Path to the .h5 SDF file (e.g. my_model.sdf.h5)

Return type:

Dict[str, uarray]

Returns:

Dictionary of the region integrated SDF data. The dictionary is twice-nested, and keyed first by nuclide, then by reaction type. The values are the sensitivity profiles with uncertainties.

Notes

.h5 formatted SDF files can be generated using the SCALE utility tao. Tao can convert a single .sdf file to a .h5 file using the command:

tao convert filename.sdf

or convert all .sdf files in a directory using:

tao convert *.sdf
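
A usage sketch (the .h5 filename and the nuclide/reaction keys shown are illustrative):

from tsunami_ip_utils.readers import read_region_integrated_h5_sdf

sdf_data = read_region_integrated_h5_sdf('sphere_model_1.sdf.h5')

# Twice-nested: first by nuclide, then by reaction type
print(sdf_data['u-235'].keys())
print(sdf_data['u-235']['fission'][:5])  # first five groups of the sensitivity profile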

tsunami_ip_utils.utils module

tsunami_ip_utils.utils.modify_sdf_names(sdf_paths, overwrite=True, output_directory=None)[source]

Takes a list of paths to SDF files and, if the annotated name in an SDF file contains a space, removes the space so that the file can be properly parsed by the relevant readers.

Parameters:
  • sdf_paths (Union[List[str], List[Path]]) – List of paths to the SDF files.

  • overwrite (bool) – Whether to overwrite the original SDF files with the modified names. Default is True. If False, the modified SDF files are prefixed with 'modified_', unless an output directory is specified (if it is the same as the directory containing the original SDFs, this is equivalent to setting overwrite=True).

  • output_directory (Union[str, Path, None]) – The directory to save the modified SDF files. If None, the modified SDF files are saved in the same directory as the original SDF files.

Return type:

None
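
A usage sketch (the directory and output paths are hypothetical):

from pathlib import Path
from tsunami_ip_utils.utils import modify_sdf_names

sdf_paths = sorted(Path('sdfs').glob('*.sdf'))
modify_sdf_names(sdf_paths, overwrite=False, output_directory=Path('sdfs/modified'))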

tsunami_ip_utils.xs module

This module contains the functions necessary for processing the binary SCALE multigroup cross section libraries (using the AMPX tools charmin and tabasco) into Python-friendly dictionaries of numpy arrays.

tsunami_ip_utils.xs.read_multigroup_xs(multigroup_library_path, nuclide_zaid_reaction_dict, num_processes=12, return_available_nuclide_reactions=False)[source]

Function for reading a set of reactions from a given nuclide in a SCALE multigroup library.

Parameters:
  • multigroup_library_path (Path) – The path to the SCALE multigroup library file.

  • nuclide_zaid_reaction_dict (Dict[str, Dict[str, str]]) – A dictionary mapping nuclide ZAIDs to a list of reaction MTs to read.

  • num_processes (int) – The number of processes to use for reading the library. If None, the number of processes is set to the number of cores.

  • return_available_nuclide_reactions (bool) – If True, the available nuclide reactions (i.e. all of the nuclide reactions for which data exists in the given multigroup library) are returned as well.

Return type:

Union[Dict[str, Dict[str, ndarray]], Tuple[Dict[str, Dict[str, ndarray]], Dict[str, list]]]

Returns:

  • If return_available_nuclide_reactions is False, a dictionary containing the cross sections (as numpy arrays) for each nuclide-reaction pair. Keyed first by nuclide, then by reaction.

  • If return_available_nuclide_reactions is True, a tuple containing the above dictionary and a dictionary containing all available nuclide reactions is returned.
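
A sketch of a call following the parameter description above (the ZAIDs, reaction MTs, and library path are illustrative):

from pathlib import Path
from tsunami_ip_utils.xs import read_multigroup_xs

# Mapping of nuclide ZAIDs to the reaction MTs to read (illustrative values)
nuclide_zaid_reaction_dict = {
    '92235': ['18', '102'],  # u-235: fission and n,gamma
    '92238': ['102'],        # u-238: n,gamma
}

xs = read_multigroup_xs(
    Path('/path/to/SCALE/data/scale.rev08.252groupcov7.1'),
    nuclide_zaid_reaction_dict,
)
print(xs['92235']['18'][:10])  # first ten groups of the u-235 fission cross section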

tsunami_ip_utils.xs.perturb_multigroup_xs_dump(filename, max_perturb_factor, overwrite=False, output_file=None)[source]

Perturb the cross section data in a SCALE multigroup cross section library fido text dump file. This is useful for generating examples for testing the SCALE reader functions which do not violate export control.

Parameters:
  • filename (Union[str, Path]) – The filename of the fido text dump file.

  • max_perturb_factor (float) – The maximum percentage by which to perturb the cross sections. Cross sections are perturbed by a random factor between 1 - max_perturb_factor and 1 + max_perturb_factor.

  • overwrite (bool) – Whether or not to overwrite the file with the perturbed data.

  • output_file (Union[str, Path, None]) – An output file to write the perturbed data to.

Return type:

Optional[List[str]]

Returns:

  • If overwrite is True, the file is overwritten with the perturbed data and nothing is returned.

  • If overwrite is False, the perturbed data is returned as a list of strings, which can be written to a file via with open('filename.txt', 'w') as f: f.writelines(perturbed_data).

Notes

  • This only applies to fido dumps of SCALE multigroup cross section libraries.

  • Fido dumps of SCALE multigroup cross section libraries can be generated using the Multigroup SCALE XS Library Reader template described below (with plot_option set to 'fido').
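
A usage sketch following the return-value handling described above (the dump filenames are hypothetical):

from tsunami_ip_utils.xs import perturb_multigroup_xs_dump

perturbed_data = perturb_multigroup_xs_dump('xs_dump.txt', max_perturb_factor=0.05)
with open('perturbed_xs_dump.txt', 'w') as f:
    f.writelines(perturbed_data)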

tsunami_ip_utils.xs.get_scale_multigroup_structure(num_groups)[source]

Return the multigroup structure for a SCALE library with a given number of groups.

Parameters:

num_groups (int) – The number of groups in the SCALE library.

Returns:

A 2D numpy array with the first column as the group number and the second column as the energy (in eV). Note the energies are sorted in ascending order by default.

Return type:

np.ndarray

SCALE Template Input Files

These are the SCALE input files that are used to interface with SCALE via scalerte. Each input file is meant to be read as a Python Template string.
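
For example, the Multigroup SCALE XS Library Reader template below could be filled in roughly as follows (the template file location and the argument values are hypothetical):

from pathlib import Path
from string import Template

template_text = Path('xs_reader_template.inp').read_text()  # hypothetical template location
scale_input = Template(template_text).safe_substitute(
    multigroup_library_path='/path/to/SCALE/data/scale.rev08.252groupcov7.1',
    nuclide_zaid='-1',    # all nuclides
    reaction_mt='-1',     # all reactions
    plot_option='fido',
    output_file_path='xs_dump.txt',
)
Path('xs_reader.inp').write_text(scale_input)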

Multigroup SCALE XS Library Reader

This file is templated with the following arguments:

  • multigroup_library_path

    Path to the SCALE multigroup library (e.g. /home/example_user/codes/SCALE-6.3.1/data/scale.rev08.252groupcov7.1).

  • nuclide_zaid

    The ZAID of the nuclide to be extracted from the multigroup library (‘-1’ to select all nuclides).

  • reaction_mt

    The reaction MT to be extracted for the selected nuclide(s) (‘-1’ to select all reactions).

  • plot_option

    The charmin option for formatting the dump, e.g. ‘fido’.

  • output_file_path

    The path to direct the cross section dump to.

Substituting these arguments generates a SCALE input that uses the AMPX utilities tabasco and charmin to read the selected AMPX master library. Further details on valid options can be found in the AMPX manual.

=shell
cp $multigroup_library_path ft31f001
end

=tabasco
0$$$$ 31 32 e
1$$$$ 1 e t
2$$$$ $nuclide_zaid e
3$$$$ $reaction_mt e t
end

=charmin
input=32 output=33 double to $plot_option
end

=shell
cp ft33f001 $output_file_path
end

Uncertainty Edit Generator

This input file is used to generate the uncertainty edits (i.e. covariance-wise \(\Delta k/k\) contributions) for a given set of SDFs using TSUNAMI-IP. This file is templated with the following arguments:

  • filenames

    A multiline string containing the paths to the SDF files for which the uncertainty edits are to be generated.

  • first_file

An arbitrarily selected file to be chosen as an experiment in the TSUNAMI-IP input.

This uses the 'uncert_long' option to generate the uncertainty edits for all of the selected applications (which are just all of the files supplied).

=tsunami-ip
Uncertainty Contributions

read parameters
    uncert
    uncert_long
end parameters

read applications
    $filenames
end applications

read experiments
    $first_file
end experiments

end

Perturbed Library Generator

This input file is used for generating a randomly sampled/perturbed multigroup cross section library using the perturbation factors built-in to SCALE (this should be compared to the analogous feature in SAMPLER, which generates similar input files). This template takes the following arguments:

  • base_library

    Path to the base SCALE multigroup library.

  • perturbation_factors

    Path to the file containing the perturbation factors for the chosen base multigroup library (e.g. /home/example_user/codes/SCALE-6.3.1/data/perturb/252n.v7.1/Sample20).

  • sample_number

The sample number to use. Note this must correspond with the chosen perturbation_factors or the library will not be generated correctly (e.g. the correct sample number corresponding to the example perturbation factors above is 20).

  • output

    Path to copy the generated perturbed library to.

=shell
  ln -sf $base_library ft89f001
  ln -sf $perturbation_factors ft87f001
end
=clarolplus
  in=89
  out=88
  var=87
  isvar=10
  bond=yes
  sam=$sample_number
end
=shell
  mv ft88f001 perturbed_library
  unlink ft87f001
  unlink ft89f001
  mv ft10f001 crawdadPerturbMGLib
end

=shell
    cp perturbed_library $output
end

Module contents