
📘 API Reference

Welcome to the full API reference for the NN_fit package. Below you'll find organized documentation for all modules in the codebase, including classes, functions, and docstrings.


🔧 Core Fitting Modules

perform_fit_comb.py

perform_fit(pred, num_reps, range_alpha, range_beta, range_gamma, lr, wd, patience, x_alphas, fk_tables, binwidths, cov_matrix, extended_loss, activation_function, num_input_layers, num_output_layers, hidden_layers, x_vals, preproc, validation_split, max_epochs, max_chi_sq, lag_mult_pos, lag_mult_int, x_int)

Performs repeated training of neural networks to fit pseudo-data predictions using a physics-constrained loss function and neural PDF parameterization.

Each replica (num_reps) of the prediction is fitted using a randomly initialized neural network (based on PreprocessedMLP), and the corresponding predictions and losses are recorded. Optionally uses a validation split.

Parameters

pred : list of np.ndarray
    List of length num_reps, each containing the pseudo-data event predictions.
num_reps : int
    Number of replicas (i.e., independent fits with random initialization).
range_alpha : float
    Upper bound for uniform sampling of the alpha preprocessing parameter.
range_beta : float
    Upper bound for uniform sampling of the beta preprocessing parameter.
range_gamma : float
    Upper bound for uniform sampling of the gamma preprocessing parameter.
lr : float
    Learning rate for the Adam optimizer.
wd : float
    Weight decay (L2 regularization) for the optimizer.
patience : int
    Early stopping patience (currently unused but declared).
x_alphas : torch.Tensor
    Input x-values used to evaluate PDF predictions for data loss.
fk_tables : torch.Tensor
    FastKernel tables to convert PDFs into observable space.
binwidths : torch.Tensor
    Widths of each bin used in rebinning the predictions.
cov_matrix : np.ndarray
    Covariance matrix used for weighted loss calculation.
extended_loss : bool
    If True, uses extended loss with constraints (e.g., normalization, positivity).
activation_function : str
    Activation function used in the neural network (e.g., 'relu', 'tanh').
num_input_layers : int
    Number of input layers before hidden layers.
num_output_layers : int
    Number of output layers after hidden layers.
hidden_layers : list of int
    Number of neurons in each hidden layer.
x_vals : np.ndarray
    x-values used to store final fitted PDFs.
preproc : str
    Type of preprocessing function (e.g., 'powerlaw', 'exp') applied to PDFs.
validation_split : float
    Fraction of data used for validation (between 0 and 1).
max_epochs : int
    Maximum number of training epochs.
max_chi_sq : float
    Maximum allowed chi-squared for a fit to be accepted.
lag_mult_pos : float
    Lagrange multiplier for positivity constraint.
lag_mult_int : float
    Lagrange multiplier for integral constraint.
x_int : np.ndarray
    x-values used for computing integral constraints.

Returns

chi_squares : list of float
    Training loss (chi-squared) values saved periodically during training.
N_event_pred : list of np.ndarray
    Predicted event yields after applying the FastKernel convolution.
neutrino_pdfs : list of np.ndarray
    Final predicted PDFs (postprocessed) from successful fits.
model : PreprocessedMLP
    Final trained model (from last accepted fit).
chi_square_for_postfit : list of float
    Final chi-squared values for each accepted fit (for post-fit evaluation).
train_indices : np.ndarray
    Indices used for the training set in the last run (if validation was used).
val_indices : np.ndarray
    Indices used for the validation set in the last run (if validation was used).
training_length : int
    Number of training steps run in the final (last) model.
val_losses : list of float
    Validation losses saved periodically during training (only populated if validation_split > 0).

Notes

  • Only models with loss < max_chi_sq are retained in the final output.
  • The PDFs are preprocessed using a parameterized function with random α, β, γ values.
  • Assumes the model class PreprocessedMLP and loss class CustomLoss are defined externally.
  • Model and predictions use PyTorch; inputs must be tensors where appropriate.
  • Currently no explicit early stopping logic is implemented (but patience is reserved).
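
Below is a minimal usage sketch of this function. All shapes, hyperparameter values, and the placeholder FK table, bin widths, and covariance matrix are illustrative assumptions, not values from the package; the import assumes the module is importable as perform_fit_comb.

import numpy as np
import torch
from perform_fit_comb import perform_fit

n_bins, n_x = 10, 50
x_grid = np.logspace(-5, 0, n_x)

# Five pseudo-data replicas with Poisson fluctuations (illustrative only)
pred = [np.random.poisson(100.0, size=n_bins).astype(float) for _ in range(5)]
fk_tables = torch.rand(n_bins, n_x)            # placeholder FastKernel table
binwidths = torch.ones(n_bins)                 # placeholder bin widths
cov_matrix = np.diag(np.full(n_bins, 100.0))   # diagonal covariance, for illustration

(chi_squares, N_event_pred, neutrino_pdfs, model, chi_square_for_postfit,
 train_indices, val_indices, training_length, val_losses) = perform_fit(
    pred, num_reps=5,
    range_alpha=1.0, range_beta=1.0, range_gamma=1.0,
    lr=1e-3, wd=1e-5, patience=100,
    x_alphas=torch.tensor(x_grid, dtype=torch.float32).view(-1, 1),
    fk_tables=fk_tables, binwidths=binwidths, cov_matrix=cov_matrix,
    extended_loss=True, activation_function="tanh",
    num_input_layers=1, num_output_layers=1, hidden_layers=[25, 25],
    x_vals=x_grid, preproc="powerlaw", validation_split=0.2,
    max_epochs=5000, max_chi_sq=3.0,
    lag_mult_pos=1.0, lag_mult_int=1.0, x_int=x_grid,
)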
Source code in NN_fit/perform_fit_comb.py
def perform_fit(
    pred: List[np.ndarray],
    num_reps: int,
    range_alpha: float,
    range_beta: float,
    range_gamma: float,
    lr: float,
    wd: float,
    patience: int,
    x_alphas: torch.Tensor,
    fk_tables: torch.Tensor,
    binwidths: torch.Tensor,
    cov_matrix: np.ndarray,
    extended_loss: bool,
    activation_function: str,
    num_input_layers: int,
    num_output_layers: int,
    hidden_layers: List[int],
    x_vals: np.ndarray,
    preproc: str,
    validation_split: float,
    max_epochs: int,
    max_chi_sq: float,
    lag_mult_pos: float,
    lag_mult_int: float,
    x_int: np.ndarray,
) -> Tuple[
    List[float],  # chi_squares
    List[np.ndarray],  # N_event_pred
    List[np.ndarray],  # neutrino_pdfs
    PreprocessedMLP,  # model (last accepted fit)
    List[float],  # chi_square_for_postfit
    np.ndarray,  # train_indices
    np.ndarray,  # val_indices
    int,  # training_length
    List[float],  # val_losses
]:
    """
    Performs repeated training of neural networks to fit pseudo-data predictions using a
    physics-constrained loss function and neural PDF parameterization.

    Each replica (`num_reps`) of the prediction is fitted using a randomly initialized
    neural network (based on PreprocessedMLP), and the corresponding predictions and losses
    are recorded. Optionally uses a validation split.

    Parameters
    ----------
    pred : list of np.ndarray
        List of length `num_reps`, each containing the pseudo-data event predictions.
    num_reps : int
        Number of replicas (i.e., independent fits with random initialization).
    range_alpha : float
        Upper bound for uniform sampling of `alpha` preprocessing parameter.
    range_beta : float
        Upper bound for uniform sampling of `beta` preprocessing parameter.
    range_gamma : float
        Upper bound for uniform sampling of `gamma` preprocessing parameter.
    lr : float
        Learning rate for the Adam optimizer.
    wd : float
        Weight decay (L2 regularization) for the optimizer.
    patience : int
        Early stopping patience (currently unused but declared).
    x_alphas : torch.Tensor
        Input x-values used to evaluate PDF predictions for data loss.
    fk_tables : torch.Tensor
        FastKernel tables to convert PDFs into observable space.
    binwidths : torch.Tensor
        Widths of each bin used in rebinning the predictions.
    cov_matrix : np.ndarray
        Covariance matrix used for weighted loss calculation.
    extended_loss : bool
        If True, uses extended loss with constraints (e.g., normalization, positivity).
    activation_function : str
        Activation function used in the neural network (e.g., 'relu', 'tanh').
    num_input_layers : int
        Number of input layers before hidden layers.
    num_output_layers : int
        Number of output layers after hidden layers.
    hidden_layers : list of int
        Number of neurons in each hidden layer.
    x_vals : np.ndarray
        x-values used to store final fitted PDFs.
    preproc : str
        Type of preprocessing function (e.g., 'powerlaw', 'exp') applied to PDFs.
    validation_split : float
        Fraction of data used for validation (between 0 and 1).
    max_epochs : int
        Maximum number of training epochs.
    max_chi_sq : float
        Maximum allowed chi-squared for a fit to be accepted.
    lag_mult_pos : float
        Lagrange multiplier for positivity constraint.
    lag_mult_int : float
        Lagrange multiplier for integral constraint.
    x_int : np.ndarray
        x-values used for computing integral constraints.

    Returns
    -------
    chi_squares : list of float
        Training loss (chi-squared) values saved periodically during training.
    N_event_pred : list of np.ndarray
        Predicted event yields after applying the FastKernel convolution.
    neutrino_pdfs : list of np.ndarray
        Final predicted PDFs (postprocessed) from successful fits.
    model : PreprocessedMLP
        Final trained model (from last accepted fit).
    chi_square_for_postfit : list of float
        Final chi-squared values for each accepted fit (for post-fit evaluation).
    train_indices : np.ndarray
        Indices used for training set in the last run (if validation was used).
    val_indices : np.ndarray
        Indices used for validation set in the last run (if validation was used).
    training_length : int
        Number of training steps run in the final (last) model.
    val_losses : list of float
        Validation losses saved periodically during training (only populated if `validation_split` > 0).

    Notes
    -----
    - Only models with `loss < max_chi_sq` are retained in the final output.
    - The PDFs are preprocessed using a parameterized function with random α, β, γ values.
    - Assumes the model class `PreprocessedMLP` and loss class `CustomLoss` are defined externally.
    - Model and predictions use PyTorch; inputs must be tensors where appropriate.
    - Currently no explicit early stopping logic is implemented (but patience is reserved).
    """
    (
        neutrino_pdfs,
        N_event_pred,
        chi_squares,
        preproc_pdfs,
        nn_pdfs,
        chi_square_for_postfit,
        val_losses,
    ) = [], [], [], [], [], [], []
    x_vals = torch.tensor(x_vals, dtype=torch.float32).view(-1, 1)
    x_int = torch.tensor(x_int, dtype=torch.float32).view(-1, 1)

    for i in range(num_reps):
        training_length = 0
        counter = 0
        best_loss = 1e13  # initial loss
        alpha, beta, gamma = (
            np.random.rand() * range_alpha,
            np.random.rand() * range_beta,
            np.random.rand() * range_gamma,
        )

        model = PreprocessedMLP(
            alpha,
            beta,
            gamma,
            activation_function,
            hidden_layers,
            num_input_layers,
            num_output_layers,
            preproc,
        )

        criterion = CustomLoss(
            extended_loss,
            num_output_layers,
        )

        optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)

        dataset_size = pred[i].shape[0]
        indices = np.arange(dataset_size)

        np.random.shuffle(indices)
        val_size = int(dataset_size * validation_split)

        train_indices, val_indices = indices[val_size:], indices[:val_size]

        if validation_split != 0:
            pred_train = pred[i][train_indices]
            cov_matrix_train = cov_matrix[train_indices][:, train_indices]
            cov_matrix_val = cov_matrix[val_indices][:, val_indices]
            pred_val = pred[i][val_indices]
        else:
            pred[i] = pred[i].squeeze()

        losses = []

        model.train()

        while counter < max_epochs:
            if max_epochs < training_length:
                break

            training_length += 1
            optimizer.zero_grad()
            y_pred = model(x_alphas)

            y_preds = torch.matmul(fk_tables, y_pred.flatten()) * binwidths.flatten()

            y_preds = y_preds.squeeze()

            y_int_mu = model(x_int)
            y_int_mub = y_int_mu

            if validation_split != 0.0:
                y_train = y_preds[train_indices]
                y_val = y_preds[val_indices]

                loss_val = criterion(
                    y_val,
                    pred_val,
                    cov_matrix_val,
                    y_pred,
                    y_int_mu,
                    y_int_mub,
                    x_int,
                    lag_mult_pos,
                    lag_mult_int,
                )

                loss = criterion(
                    y_train,
                    pred_train,
                    cov_matrix_train,
                    y_pred,
                    y_int_mu,
                    y_int_mub,
                    x_int,
                    lag_mult_pos,
                    lag_mult_int,
                )
            else:
                loss = criterion(
                    y_preds,
                    pred[i],
                    cov_matrix,
                    y_pred,
                    y_int_mu,
                    y_int_mub,
                    x_int,
                    lag_mult_pos,
                    lag_mult_int,
                )

            loss.backward()

            if training_length % 500 == 0:
                chi_squares.append(loss.detach().numpy())
                if validation_split != 0.0:
                    val_losses.append(loss_val)

            losses.append(loss.detach().numpy())
            optimizer.step()

            if loss < best_loss:
                best_loss = loss
                counter = 0
            else:
                counter += 1

        if loss < max_chi_sq:
            f_nu = model(x_vals).detach().numpy().flatten()

            chi_square_for_postfit.append(loss.detach().numpy())

            preproc_pdfs.append(model.preproces(x_vals).detach().numpy().flatten())
            nn_pdf = model.neuralnet(x_vals)[:, 0].detach().numpy().flatten()
            nn_pdfs.append(nn_pdf)

            N_event_pred.append(y_preds.detach().numpy())

            neutrino_pdfs.append(f_nu)

    return (
        chi_squares,
        N_event_pred,
        neutrino_pdfs,
        model,
        chi_square_for_postfit,
        train_indices,
        val_indices,
        training_length,
        val_losses,
    )

perform_fit_nu_nub.py

perform_fit(pred, num_reps, range_alpha, range_beta, range_gamma, lr, wd, patience, x_alphas, fk_tables_mu, fk_tables_mub, binwidths_mu, binwidths_mub, cov_matrix, extended_loss, activation_function, num_input_layers, num_output_layers, hidden_layers, x_vals, preproc, validation_split, max_epochs, max_chi_sq, fit_faser_data, lag_mult_pos, lag_mult_int, x_int)

Performs repeated training of neural networks to fit pseudo-data predictions using a physics-constrained loss function and neural PDF parameterization.

Each replica (num_reps) of the prediction is fitted using a randomly initialized neural network (based on PreprocessedMLP), and the corresponding predictions and losses are recorded. Optionally uses a validation split.

Parameters

pred : list of np.ndarray
    List of length num_reps, each containing the pseudo-data event predictions.
num_reps : int
    Number of replicas (i.e., independent fits with random initialization).
range_alpha : float
    Upper bound for uniform sampling of the alpha preprocessing parameter.
range_beta : float
    Upper bound for uniform sampling of the beta preprocessing parameter.
range_gamma : float
    Upper bound for uniform sampling of the gamma preprocessing parameter.
lr : float
    Learning rate for the Adam optimizer.
wd : float
    Weight decay (L2 regularization) for the optimizer.
patience : int
    Early stopping patience: training stops after this many epochs without loss improvement.
x_alphas : torch.Tensor
    Input x-values used to evaluate PDF predictions for data loss.
fk_tables_mu : torch.Tensor
    FastKernel tables mapping the muon-neutrino PDF into observable space.
fk_tables_mub : torch.Tensor
    FastKernel tables mapping the muon-antineutrino PDF into observable space.
binwidths_mu : torch.Tensor
    Bin widths for the muon-neutrino predictions.
binwidths_mub : torch.Tensor
    Bin widths for the muon-antineutrino predictions.
cov_matrix : np.ndarray
    Covariance matrix used for weighted loss calculation.
extended_loss : bool
    If True, uses extended loss with constraints (e.g., normalization, positivity).
activation_function : str
    Activation function used in the neural network (e.g., 'relu', 'tanh').
num_input_layers : int
    Number of input layers before hidden layers.
num_output_layers : int
    Number of output layers after hidden layers.
hidden_layers : list of int
    Number of neurons in each hidden layer.
x_vals : np.ndarray
    x-values used to store final fitted PDFs.
preproc : str
    Type of preprocessing function (e.g., 'powerlaw', 'exp') applied to PDFs.
validation_split : float
    Fraction of data used for validation (between 0 and 1).
max_epochs : int
    Maximum number of training epochs.
max_chi_sq : float
    Maximum allowed chi-squared for a fit to be accepted.
fit_faser_data : bool
    If True, merges the last antineutrino bin into the last neutrino bin and reverses the antineutrino bin order to match the FASER data layout.
lag_mult_pos : float
    Lagrange multiplier for positivity constraint.
lag_mult_int : float
    Lagrange multiplier for integral constraint.
x_int : np.ndarray
    x-values used for computing integral constraints.

Returns

chi_squares : list of float
    Training loss (chi-squared) values saved periodically during training.
N_event_pred_mu : list of np.ndarray
    Predicted muon-neutrino event yields after applying the FastKernel convolution.
N_event_pred_mub : list of np.ndarray
    Predicted muon-antineutrino event yields after applying the FastKernel convolution.
neutrino_pdfs_mu : list of np.ndarray
    Final predicted muon-neutrino PDFs (postprocessed) from successful fits.
neutrino_pdfs_mub : list of np.ndarray
    Final predicted muon-antineutrino PDFs (postprocessed) from successful fits.
model : PreprocessedMLP
    Final trained model (from last accepted fit).
chi_square_for_postfit : list of float
    Final chi-squared values for each accepted fit (for post-fit evaluation).
train_indices : np.ndarray
    Indices used for the training set in the last run (if validation was used).
val_indices : np.ndarray
    Indices used for the validation set in the last run (if validation was used).
training_length : int
    Number of training steps run in the final (last) model.
val_losses : list of float
    Validation losses saved periodically during training (only populated if validation_split > 0).

Notes

  • Only models with loss < max_chi_sq are retained in the final output.
  • The PDFs are preprocessed using a parameterized function with random α, β, γ values.
  • Assumes the model class PreprocessedMLP and loss class CustomLoss are defined externally.
  • Model and predictions use PyTorch; inputs must be tensors where appropriate.
  • Early stopping is implemented via the patience counter: a replica stops training after patience epochs without loss improvement, or once the cumulative epoch budget max_epochs is exhausted.
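
As a hedged illustration of how the per-replica outputs of this function are typically aggregated (the same mean/std convention the post-fit LHAPDF writer below uses), the helper here is hypothetical and not part of the package:

import numpy as np

def replica_band(pdf_replicas):
    """Central value and 1-sigma band from a list of per-replica PDF arrays."""
    arr = np.asarray(pdf_replicas)
    return arr.mean(axis=0), arr.std(axis=0)

# e.g., with the outputs of perform_fit:
# central_mu, sigma_mu = replica_band(neutrino_pdfs_mu)
# central_mub, sigma_mub = replica_band(neutrino_pdfs_mub)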
Source code in NN_fit/perform_fit_nu_nub.py
def perform_fit(
    pred: List[np.ndarray],
    num_reps: int,
    range_alpha: float,
    range_beta: float,
    range_gamma: float,
    lr: float,
    wd: float,
    patience: int,
    x_alphas: torch.Tensor,
    fk_tables_mu: torch.Tensor,
    fk_tables_mub: torch.Tensor,
    binwidths_mu: torch.Tensor,
    binwidths_mub: torch.Tensor,
    cov_matrix: np.ndarray,
    extended_loss: bool,
    activation_function: str,
    num_input_layers: int,
    num_output_layers: int,
    hidden_layers: List[int],
    x_vals: np.ndarray,
    preproc: str,
    validation_split: float,
    max_epochs: int,
    max_chi_sq: float,
    fit_faser_data: bool,
    lag_mult_pos: float,
    lag_mult_int: float,
    x_int: np.ndarray,
) -> Tuple[
    List[float],  # chi_squares
    List[np.ndarray],  # N_event_pred_mu
    List[np.ndarray],  # N_event_pred_mub
    List[np.ndarray],  # neutrino_pdfs_mu
    List[np.ndarray],  # neutrino_pdfs_mub
    PreprocessedMLP,  # model (last accepted fit)
    List[float],  # chi_square_for_postfit
    np.ndarray,  # train_indices
    np.ndarray,  # val_indices
    int,  # training_length
    List[float],  # val_losses
]:
    """
    Performs repeated training of neural networks to fit pseudo-data predictions using a
    physics-constrained loss function and neural PDF parameterization.

    Each replica (`num_reps`) of the prediction is fitted using a randomly initialized
    neural network (based on PreprocessedMLP), and the corresponding predictions and losses
    are recorded. Optionally uses a validation split.

    Parameters
    ----------
    pred : list of np.ndarray
        List of length `num_reps`, each containing the pseudo-data event predictions.
    num_reps : int
        Number of replicas (i.e., independent fits with random initialization).
    range_alpha : float
        Upper bound for uniform sampling of `alpha` preprocessing parameter.
    range_beta : float
        Upper bound for uniform sampling of `beta` preprocessing parameter.
    range_gamma : float
        Upper bound for uniform sampling of `gamma` preprocessing parameter.
    lr : float
        Learning rate for the Adam optimizer.
    wd : float
        Weight decay (L2 regularization) for the optimizer.
    patience : int
        Early stopping patience: training stops after `patience` epochs without loss improvement.
    x_alphas : torch.Tensor
        Input x-values used to evaluate PDF predictions for data loss.
    fk_tables_mu : torch.Tensor
        FastKernel tables mapping the muon-neutrino PDF into observable space.
    fk_tables_mub : torch.Tensor
        FastKernel tables mapping the muon-antineutrino PDF into observable space.
    binwidths_mu : torch.Tensor
        Bin widths for the muon-neutrino predictions.
    binwidths_mub : torch.Tensor
        Bin widths for the muon-antineutrino predictions.
    cov_matrix : np.ndarray
        Covariance matrix used for weighted loss calculation.
    extended_loss : bool
        If True, uses extended loss with constraints (e.g., normalization, positivity).
    activation_function : str
        Activation function used in the neural network (e.g., 'relu', 'tanh').
    num_input_layers : int
        Number of input layers before hidden layers.
    num_output_layers : int
        Number of output layers after hidden layers.
    hidden_layers : list of int
        Number of neurons in each hidden layer.
    x_vals : np.ndarray
        x-values used to store final fitted PDFs.
    preproc : str
        Type of preprocessing function (e.g., 'powerlaw', 'exp') applied to PDFs.
    validation_split : float
        Fraction of data used for validation (between 0 and 1).
    max_epochs : int
        Maximum number of training epochs.
    max_chi_sq : float
        Maximum allowed chi-squared for a fit to be accepted.
    fit_faser_data : bool
        If True, merges the last antineutrino bin into the last neutrino bin and
        reverses the antineutrino bin order to match the FASER data layout.
    lag_mult_pos : float
        Lagrange multiplier for positivity constraint.
    lag_mult_int : float
        Lagrange multiplier for integral constraint.
    x_int : np.ndarray
        x-values used for computing integral constraints.

    Returns
    -------
    chi_squares : list of float
        Training loss (chi-squared) values saved periodically during training.
    N_event_pred_mu : list of np.ndarray
        Predicted muon-neutrino event yields after applying the FastKernel convolution.
    N_event_pred_mub : list of np.ndarray
        Predicted muon-antineutrino event yields after applying the FastKernel convolution.
    neutrino_pdfs_mu : list of np.ndarray
        Final predicted muon-neutrino PDFs (postprocessed) from successful fits.
    neutrino_pdfs_mub : list of np.ndarray
        Final predicted muon-antineutrino PDFs (postprocessed) from successful fits.
    model : PreprocessedMLP
        Final trained model (from last accepted fit).
    chi_square_for_postfit : list of float
        Final chi-squared values for each accepted fit (for post-fit evaluation).
    train_indices : np.ndarray
        Indices used for training set in the last run (if validation was used).
    val_indices : np.ndarray
        Indices used for validation set in the last run (if validation was used).
    training_length : int
        Number of training steps run in the final (last) model.
    val_losses : list of float
        Validation losses saved periodically during training (only populated if `validation_split` > 0).

    Notes
    -----
    - Only models with `loss < max_chi_sq` are retained in the final output.
    - The PDFs are preprocessed using a parameterized function with random α, β, γ values.
    - Assumes the model class `PreprocessedMLP` and loss class `CustomLoss` are defined externally.
    - Model and predictions use PyTorch; inputs must be tensors where appropriate.
    - Early stopping is implemented via the `patience` counter; training also stops once `max_epochs` cumulative epochs have been run.
    """
    (
        neutrino_pdfs_mu,
        neutrino_pdfs_mub,
        N_event_pred_mu,
        N_event_pred_mub,
        chi_squares,
        preproc_pdfs,
        nn_pdfs,
        chi_square_for_postfit,
        val_losses,
    ) = (
        [],
        [],
        [],
        [],
        [],
        [],
        [],
        [],
        [],
    )
    x_vals = torch.tensor(x_vals, dtype=torch.float32).view(-1, 1)
    x_int = torch.tensor(x_int, dtype=torch.float32).view(-1, 1)
    training_length = 0
    for i in range(num_reps):
        alpha, beta, gamma = (
            np.random.rand() * range_alpha,
            np.random.rand() * range_beta,
            np.random.rand() * range_gamma,
        )

        model = PreprocessedMLP(
            alpha,
            beta,
            gamma,
            activation_function,
            hidden_layers,
            num_input_layers,
            num_output_layers,
            preproc=preproc,
        )

        criterion = CustomLoss(
            extended_loss,
            num_output_layers,
        )

        optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)

        dataset_size = pred[i].shape[0]
        indices = np.arange(dataset_size)
        # np.random.seed(seed)
        np.random.shuffle(indices)
        val_size = int(dataset_size * validation_split)

        train_indices, val_indices = indices[val_size:], indices[:val_size]

        if validation_split != 0:
            pred_train = pred[i][train_indices]
            cov_matrix_train = cov_matrix[train_indices][:, train_indices]
            cov_matrix_val = cov_matrix[val_indices][:, val_indices]
            pred_val = pred[i][val_indices]
        else:
            pred[i] = pred[i].squeeze()

        losses = []

        model.train()
        best_loss = 1e13
        counter = 0

        while counter < patience:
            if max_epochs < training_length:
                break
            training_length += 1
            optimizer.zero_grad()
            y_pred = model(x_alphas)

            y_pred_mu = (
                torch.matmul(fk_tables_mu, y_pred[:, 0]) * binwidths_mu.flatten()
            )

            y_pred_mub = (
                torch.matmul(fk_tables_mub, y_pred[:, 1]) * binwidths_mub.flatten()
            )

            y_pred_mu = y_pred_mu.squeeze()
            y_pred_mub = y_pred_mub.squeeze()

            if fit_faser_data:
                y_pred_mu[-1] = y_pred_mu[-1] + y_pred_mub[-1]
                y_pred_mub = y_pred_mub[:-1]

                y_pred_mub = torch.flip(y_pred_mub, dims=[0])

            y_preds = torch.hstack((y_pred_mu, y_pred_mub))
            # y_preds = y_pred_mu + y_pred_mub  # torch hstack
            # y_preds = y_pred_mu

            y_int_mu = model(x_int)[:, 0]
            y_int_mub = model(x_int)[:, 1]
            y_pred = model(x_alphas)
            if validation_split != 0.0:
                y_train = y_preds[train_indices]
                y_val = y_preds[val_indices]

                loss_val = criterion(
                    y_val,
                    pred_val,
                    cov_matrix_val,
                    y_pred,
                    y_int_mu,
                    y_int_mub,
                    x_int,
                    lag_mult_pos,
                    lag_mult_int,
                )

                loss = criterion(
                    y_train,
                    pred_train,
                    cov_matrix_train,
                    y_pred,
                    y_int_mu,
                    y_int_mub,
                    x_int,
                    lag_mult_pos,
                    lag_mult_int,
                )
            else:
                loss = criterion(
                    y_preds,
                    pred[i],
                    cov_matrix,
                    y_pred,
                    y_int_mu,
                    y_int_mub,
                    x_int,
                    lag_mult_pos,
                    lag_mult_int,
                )

            loss.backward()

            if training_length % 500 == 0:
                chi_squares.append(loss.detach().numpy())
                if validation_split != 0.0:
                    val_losses.append(loss_val)

            losses.append(loss.detach().numpy())
            optimizer.step()

            if loss < best_loss:
                best_loss = loss
                counter = 0
            else:
                counter += 1

        if loss < max_chi_sq:
            f_nu_mub = model(x_vals)[:, 1].detach().numpy().flatten()
            f_nu_mu = model(x_vals)[:, 0].detach().numpy().flatten()

            chi_square_for_postfit.append(loss.detach().numpy())

            preproc_pdfs.append(model.preproces(x_vals).detach().numpy().flatten())
            nn_pdf = model.neuralnet(x_vals)[:, 0].detach().numpy().flatten()
            nn_pdfs.append(nn_pdf)

            N_event_pred_mu.append(y_pred_mu.detach().numpy())
            N_event_pred_mub.append(y_pred_mub.detach().numpy())

            neutrino_pdfs_mu.append(f_nu_mu)
            neutrino_pdfs_mub.append(f_nu_mub)
    return (
        chi_squares,
        N_event_pred_mu,
        N_event_pred_mub,
        neutrino_pdfs_mu,
        neutrino_pdfs_mub,
        model,
        chi_square_for_postfit,
        train_indices,
        val_indices,
        training_length,
        val_losses,
    )

execute_fit.py

execute_postfit.py

postfit_execution(postfit_criteria, validation_split, data, cov_matrix, num_output_layers, chi_square_for_postfit, neutrino_pdfs_mu, neutrino_pdfs_mub, neutrino_pdfs, postfit_measures, train_indices, val_indices, level1, N_event_pred, pred, dir_for_data, filename_postfit, diff_lev_1, fit_level, x_alphas, pdf, pdf_set, particle_id_nu, particle_id_nub, lr, wd, max_epochs, patience, chi_squares, neutrino_pdf_fit_name_lhapdf, x_vals, produce_plot, training_lengths, stat_error, sys_error, low_bin, high_bin, N_event_pred_nu, N_event_pred_nub, low_bin_mu, high_bin_mu, low_bin_mub, high_bin_mub, val_losses, lhapdf_path)

Execute post-fit processing after a PDF (Parton Distribution Function) neural fit.

This function performs various operations following a neural network-based PDF fit:

  • Applies post-fit criteria to PDFs and predictions.
  • Calculates post-fit measures (e.g., delta chi-squared, phi, bias-to-variance).
  • Logs results and configuration to file.
  • Writes PDF replicas and uncertainties to LHAPDF-compatible grid files.
  • Optionally generates plots of the results.

Parameters (all required):

postfit_criteria : bool
    Whether to apply post-fit criteria to the predictions and PDFs.
validation_split : float
    Fraction of data used for validation. If 0, no validation is used.
data : np.ndarray
    Measured experimental data.
cov_matrix : np.ndarray
    Covariance matrix of the data.
num_output_layers : int
    Number of neural network outputs (1 for a single PDF, 2 for neutrino/antineutrino PDFs).
chi_square_for_postfit : np.ndarray
    Chi-square values for the post-fit predictions.
neutrino_pdfs_mu : np.ndarray
    Neutrino PDFs for the muon neutrino (only if num_output_layers == 2).
neutrino_pdfs_mub : np.ndarray
    Neutrino PDFs for the muon antineutrino (only if num_output_layers == 2).
neutrino_pdfs : np.ndarray
    Neutrino PDFs for the single-output model.
postfit_measures : bool
    Whether to compute post-fit performance metrics.
train_indices : np.ndarray
    Training sample indices.
val_indices : np.ndarray
    Validation sample indices.
level1 : np.ndarray
    Level-1 shifts or corrections.
N_event_pred : np.ndarray
    Predicted number of events from the model.
pred : np.ndarray
    Model predictions.
dir_for_data : str
    Directory to save results and intermediate files.
filename_postfit : str
    Name of the post-fit report file.
diff_lev_1 : str or int
    Identifier for the level-1 difference, used in LHAPDF naming.
fit_level : int
    Level of fit used; affects which post-fit measures are computed.
x_alphas : torch.Tensor
    Input x-values at which the PDF predictions are evaluated.
pdf : object
    Reference PDF used during fitting.
pdf_set : str
    Name of the PDF set used as baseline.
particle_id_nu : int
    PDG ID for the neutrino used.
particle_id_nub : int
    PDG ID for the anti-neutrino used.
lr : float
    Learning rate used in the training.
wd : float
    Weight decay used during training.
max_epochs : int
    Maximum number of training epochs.
patience : int
    Early stopping patience.
chi_squares : np.ndarray
    Chi-square values for the fit.
neutrino_pdf_fit_name_lhapdf : str
    Name to use for the LHAPDF set.
x_vals : np.ndarray
    x-values (momentum fraction) for the PDF grids.
produce_plot : bool
    Whether to produce summary plots of fit results.
training_lengths : np.ndarray
    Number of epochs run for each replica.
stat_error : np.ndarray
    Statistical error on the data.
sys_error : np.ndarray
    Systematic error on the data.
low_bin : int
    Lower bound for binning (single PDF).
high_bin : int
    Upper bound for binning (single PDF).
N_event_pred_nu : np.ndarray
    Event predictions for the muon neutrino (two-output case).
N_event_pred_nub : np.ndarray
    Event predictions for the muon antineutrino (two-output case).
low_bin_mu : int
    Lower bin index for the muon neutrino.
high_bin_mu : int
    Upper bin index for the muon neutrino.
low_bin_mub : int
    Lower bin index for the muon antineutrino.
high_bin_mub : int
    Upper bin index for the muon antineutrino.
val_losses : list
    Validation losses recorded during training; written to file if non-empty.
lhapdf_path : str
    Path to the LHAPDF directory where the fitted neutrino PDF sets are written.
Outputs
  • Writes various .txt files with statistical and fit information.
  • Creates LHAPDF-compatible grid files with the fitted PDFs.
  • Optionally generates plots summarizing the fit quality and predictions.
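
The grid files follow a two-member convention visible in the source below: member _0000 holds the per-replica mean, member _0001 the standard deviation, keyed by PDG ID. A minimal sketch of preparing those dictionaries, with placeholder data; the write_lhapdf_grid calls are shown as comments because the paths depend on the run configuration:

import numpy as np

x_vals = np.logspace(-5, 0, 50)
replicas = np.random.rand(100, 50)   # 100 fitted PDF replicas (placeholder values)

pdf_dict_central = {14: replicas.mean(axis=0)}   # PDG 14: muon neutrino
pdf_dict_error = {14: replicas.std(axis=0)}

# The source below writes these with the package utility write_lhapdf_grid:
# write_lhapdf_grid(x_vals, pdf_dict_central, ".../SET_0000.dat")  # central member
# write_lhapdf_grid(x_vals, pdf_dict_error, ".../SET_0001.dat")    # uncertainty member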
Notes
  • Assumes presence of Postfit, Measures, and LHAPDF writing utilities.
  • Assumes external plotting utilities (plot_comb_pdf_cl, plot_nu_nub_cl) are available.
  • Handles both single-output (combined neutrino) and two-output (neutrino/antineutrino) models.
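
Throughout the fitting and post-fit code, the covariance matrix is restricted to a subset of bins by chained indexing (cov_matrix[val_indices][:, val_indices]). A small self-contained numpy illustration of what that slicing does:

import numpy as np

cov_matrix = np.arange(16, dtype=float).reshape(4, 4)
val_indices = np.array([0, 2])

# Row selection followed by column selection keeps the (val, val) block,
# equivalent to cov_matrix[np.ix_(val_indices, val_indices)].
cov_val = cov_matrix[val_indices][:, val_indices]
print(cov_val)  # [[ 0.  2.]
                #  [ 8. 10.]]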
Source code in NN_fit/execute_postfit.py
def postfit_execution(
    postfit_criteria: bool,
    validation_split: float,
    data: np.ndarray,
    cov_matrix: np.ndarray,
    num_output_layers: int,
    chi_square_for_postfit: np.ndarray,
    neutrino_pdfs_mu: Optional[np.ndarray],
    neutrino_pdfs_mub: Optional[np.ndarray],
    neutrino_pdfs: Optional[np.ndarray],
    postfit_measures: bool,
    train_indices: np.ndarray,
    val_indices: np.ndarray,
    level1: np.ndarray,
    N_event_pred: np.ndarray,
    pred: np.ndarray,
    dir_for_data: str,
    filename_postfit: str,
    diff_lev_1: Union[str, int],
    fit_level: int,
    x_alphas: torch.Tensor,
    pdf: object,
    pdf_set: str,
    particle_id_nu: int,
    particle_id_nub: int,
    lr: float,
    wd: float,
    max_epochs: int,
    patience: int,
    chi_squares: np.ndarray,
    neutrino_pdf_fit_name_lhapdf: str,
    x_vals: np.ndarray,
    produce_plot: bool,
    training_lengths: np.ndarray,
    stat_error: np.ndarray,
    sys_error: np.ndarray,
    low_bin: int,
    high_bin: int,
    N_event_pred_nu: Optional[np.ndarray],
    N_event_pred_nub: Optional[np.ndarray],
    low_bin_mu: Optional[int],
    high_bin_mu: Optional[int],
    low_bin_mub: Optional[int],
    high_bin_mub: Optional[int],
    val_losses: list,
    lhapdf_path: str,
):
    """
    Execute post-fit processing after a PDF (Parton Distribution Function) neural fit.

    This function performs various operations following a neural network-based PDF fit:
    - Applies post-fit criteria to PDFs and predictions.
    - Calculates post-fit measures (e.g., delta chi-squared, phi, bias-to-variance).
    - Logs results and configuration to file.
    - Writes PDF replicas and uncertainties to LHAPDF-compatible grid files.
    - Optionally generates plots of the results.

    Parameters:
        postfit_criteria (bool): Whether to apply post-fit criteria to the predictions and PDFs.
        validation_split (float): Fraction of data used for validation. If 0, no validation is used.
        data (np.ndarray): Measured experimental data.
        cov_matrix (np.ndarray): Covariance matrix of the data.
        num_output_layers (int): Number of neural network outputs (1 for single PDF, 2 for neutrino/antineutrino PDFs).
        chi_square_for_postfit (np.ndarray): Chi-square values for the post-fit predictions.
        neutrino_pdfs_mu (np.ndarray): Neutrino PDFs for muon neutrino (only if `num_output_layers==2`).
        neutrino_pdfs_mub (np.ndarray): Neutrino PDFs for anti-muon neutrino (only if `num_output_layers==2`).
        neutrino_pdfs (np.ndarray): Neutrino PDFs for the single-output model.
        postfit_measures (bool): Whether to compute post-fit performance metrics.
        train_indices (np.ndarray): Training sample indices.
        val_indices (np.ndarray): Validation sample indices.
        level1 (np.ndarray): Level-1 shifts or corrections.
        N_event_pred (np.ndarray): Predicted number of events from the model.
        pred (np.ndarray): Model predictions.
        dir_for_data (str): Directory to save results and intermediate files.
        filename_postfit (str): Name of the post-fit report file.
        diff_lev_1 (str/int): Identifier for the level-1 difference, used in LHAPDF naming.
        fit_level (int): Level of fit used; affects which postfit measures are computed.
        x_alphas (torch.Tensor): Input x-values at which the PDF predictions are evaluated.
        pdf (object): Reference PDF used during fitting.
        pdf_set (str): Name of the PDF set used as baseline.
        particle_id_nu (int): PDG ID for the neutrino used.
        particle_id_nub (int): PDG ID for the anti-neutrino used.
        lr (float): Learning rate used in the training.
        wd (float): Weight decay used during training.
        max_epochs (int): Maximum number of training epochs.
        patience (int): Early stopping patience.
        chi_squares (np.ndarray): Chi-square values for the fit.
        neutrino_pdf_fit_name_lhapdf (str): Name to use for the LHAPDF set.
        x_vals (np.ndarray): X-values (momentum fraction) for PDF grids.
        produce_plot (bool): Whether to produce summary plots of fit results.
        training_lengths (np.ndarray): Number of epochs run for each replica.
        stat_error (np.ndarray): Statistical error on the data.
        sys_error (np.ndarray): Systematic error on the data.
        low_bin (int): Lower bound for binning (single PDF).
        high_bin (int): Upper bound for binning (single PDF).
        N_event_pred_nu (np.ndarray): Event predictions for muon neutrino (two-output case).
        N_event_pred_nub (np.ndarray): Event predictions for anti-muon neutrino (two-output case).
        low_bin_mu (int): Lower bin index for muon neutrino.
        high_bin_mu (int): Upper bin index for muon neutrino.
        low_bin_mub (int): Lower bin index for anti-muon neutrino.
        high_bin_mub (int): Upper bin index for anti-muon neutrino.
        val_losses (list): Validation losses recorded during training; written to `val_losses.txt` if non-empty.
        lhapdf_path (str): Path to the LHAPDF directory where the fitted neutrino PDF sets are written.

    Outputs:
        - Writes various `.txt` files with statistical and fit information.
        - Creates LHAPDF-compatible grid files with the fitted PDFs.
        - Optionally generates plots summarizing the fit quality and predictions.

    Notes:
        - Assumes presence of `Postfit`, `Measures`, and LHAPDF writing utilities.
        - Assumes external plotting utilities (`plot_comb_pdf_cl`, `plot_nu_nub_cl`) are available.
        - Handles both single-output (combined neutrino) and two-output (neutrino/antineutrino) models.
    """
    if postfit_criteria:
        train_indices = train_indices.reshape(1, -1)
        val_indices = val_indices.reshape(1, -1)

        level1 = level1[0]

        if validation_split != 0.0:
            train_indices = train_indices[0]
            val_indices = val_indices[0]
            train_indices = train_indices.astype(int)
            val_indices = val_indices.astype(int)

            N_event_pred_train = N_event_pred[:, train_indices]
            pred_train = pred[:, train_indices]

            N_event_pred_val = N_event_pred[:, val_indices]
            data_val = data[val_indices]
            pred_val = pred[:, val_indices]

            level1_val = level1[val_indices]

            val_indices = torch.tensor(val_indices)

            cov_matrix_val = cov_matrix[val_indices][:, val_indices]

        if num_output_layers == 1:

            def compute_postfit_criteria(neutrino_pdfs, N_event_pred, pred):
                closure_fit = Postfit()
                neutrino_pdfs, N_event_pred, pred = closure_fit.apply_postfit_criteria(
                    chi_square_for_postfit, N_event_pred, neutrino_pdfs, pred
                )
                return (neutrino_pdfs, N_event_pred, pred)

            if postfit_criteria and validation_split != 0.0:
                neutrino_pdfs, N_event_pred_train, pred_train = (
                    compute_postfit_criteria(
                        neutrino_pdfs, N_event_pred_train, pred_train
                    )
                )
            if postfit_criteria and validation_split == 0:
                neutrino_pdfs, N_event_pred, pred = compute_postfit_criteria(
                    neutrino_pdfs, N_event_pred, pred
                )
        if num_output_layers == 2:

            def compute_postfit_criteria(
                neutrino_pdfs_mu, neutrino_pdfs_mub, N_event_pred, pred
            ):
                # if postfit_criteria:
                closure_fit = Postfit()
                neutrino_pdfs_mu, _, _ = closure_fit.apply_postfit_criteria(
                    chi_square_for_postfit, N_event_pred, neutrino_pdfs_mu, pred
                )
                neutrino_pdfs_mub, N_event_pred, pred = (
                    closure_fit.apply_postfit_criteria(
                        chi_square_for_postfit, N_event_pred, neutrino_pdfs_mub, pred
                    )
                )

            if postfit_criteria and validation_split != 0.0:
                compute_postfit_criteria(
                    neutrino_pdfs_mu, neutrino_pdfs_mub, N_event_pred_train, pred_train
                )
            if postfit_criteria and validation_split == 0:
                compute_postfit_criteria(
                    neutrino_pdfs_mu, neutrino_pdfs_mub, N_event_pred, pred
                )

    if postfit_measures:
        with open(f"{dir_for_data}/{filename_postfit}", "w") as file:
            file.write(f"level 1 shift {diff_lev_1}:\n")
            file.write("postfit report faser sim fit:\n")
            file.write("100 replicas:\n")

        def compute_postfit_measures(cov_matrix, N_event_pred, data, level1, pred):
            compute_postfit = Measures(cov_matrix, pdf, N_event_pred)
            if fit_level != 0:
                delta_chi = compute_postfit.compute_delta_chi(
                    data,
                    N_event_pred,
                    level1,
                    x_alphas.detach().numpy().squeeze(),
                )

                with open(f"{dir_for_data}/{filename_postfit}", "a") as file:
                    file.write(f"delta chi^2 = {delta_chi}:\n")

                if num_output_layers == 1:
                    accuracy = compute_postfit.compute_accuracy(
                        x_alphas.detach().numpy().flatten(),
                        neutrino_pdfs,
                        pdf,
                        1,
                        pdf_set,
                        particle_id_nu,
                    )

                    with open(f"{dir_for_data}/{filename_postfit}", "a") as file:
                        file.write(f"accuracy = {accuracy}:\n")
                if num_output_layers == 2:
                    accuracy_nu = compute_postfit.compute_accuracy(
                        x_alphas.detach().numpy().flatten(),
                        neutrino_pdfs_mu,
                        pdf,
                        1,
                        pdf_set,
                        particle_id_nu,
                    )

                    accuracy_nub = compute_postfit.compute_accuracy(
                        x_alphas.detach().numpy().flatten(),
                        neutrino_pdfs_mub,
                        pdf,
                        1,
                        pdf_set,
                        particle_id_nub,
                    )

                    with open(f"{dir_for_data}/{filename_postfit}", "a") as file:
                        file.write(f"accuracy nu = {accuracy_nu}:\n")
                        file.write(f"accuracy nub = {accuracy_nub}:\n")

            phi = compute_postfit.compute_phi(data, chi_square_for_postfit)

            with open(f"{dir_for_data}/{filename_postfit}", "a") as file:
                file.write(f"phi = {phi}:\n")

            if fit_level == 2:
                bias_to_var = compute_postfit.compute_bias_to_variance(
                    data, pred, N_event_pred, len(N_event_pred)
                )

                with open(f"{dir_for_data}/{filename_postfit}", "a") as file:
                    file.write(f"bias_to_var = {bias_to_var}:\n")

        if postfit_measures and validation_split != 0.0:
            compute_postfit_measures(
                cov_matrix_val, N_event_pred_val, data_val, level1_val, pred_val
            )
        if postfit_measures and validation_split == 0.0:
            compute_postfit_measures(cov_matrix, N_event_pred, data, level1, pred)

        with open(f"{dir_for_data}/{filename_postfit}", "a") as file:
            file.write(f"mean chi^2 = {np.mean(chi_square_for_postfit)}:\n")
            file.write(f"average training length = {np.mean(training_lengths)}:\n")
            file.write("settings used:\n")
            file.write(f"learning rate = {lr}:\n")
            file.write(f"weigth decay = {wd}:\n")
            file.write(f"max training lenght = {max_epochs}:\n")
            file.write(f"patience = {patience}:\n")

    with open(f"{dir_for_data}/chi_square.txt", "w") as f:
        np.savetxt(f, chi_squares, delimiter=",")

    with open(f"{dir_for_data}/chi_squares_for_postfit.txt", "w") as f:
        np.savetxt(f, chi_square_for_postfit, delimiter=",")

    with open(f"{dir_for_data}/events.txt", "w") as f:
        np.savetxt(f, N_event_pred, delimiter=",")

    os.makedirs(
        f"{lhapdf_path}/{neutrino_pdf_fit_name_lhapdf}_{diff_lev_1}",
        exist_ok=True,
    )
    template_path = f"{lhapdf_path}/template_.info"
    path = f"{lhapdf_path}/{neutrino_pdf_fit_name_lhapdf}_{diff_lev_1}/{neutrino_pdf_fit_name_lhapdf}.info"
    set_index = int(np.random.rand() * 1e7)
    pdf_dict_central = {}
    pdf_dict_error = {}

    if num_output_layers == 1:
        customize_info_file(template_path, path, set_index, f"{particle_id_nu}", 2)
        mean_pdf = np.mean(neutrino_pdfs, axis=0)
        std_pdf = np.std(neutrino_pdfs, axis=0)
        path = f"{lhapdf_path}/{neutrino_pdf_fit_name_lhapdf}_{diff_lev_1}/{neutrino_pdf_fit_name_lhapdf}_0000.dat"
        pdf_dict_central[12] = mean_pdf
        pdf_dict_error[12] = std_pdf
        write_lhapdf_grid(x_vals, pdf_dict_central, path)
        path = f"{lhapdf_path}/{neutrino_pdf_fit_name_lhapdf}_{diff_lev_1}/{neutrino_pdf_fit_name_lhapdf}_0001.dat"
        write_lhapdf_grid(x_vals, pdf_dict_error, path)
    if num_output_layers == 2:
        customize_info_file(
            template_path, path, set_index, f"{particle_id_nu}, {particle_id_nub}", 2
        )
        mean_pdf_nu = np.mean(neutrino_pdfs_mu, axis=0)
        mean_pdf_nub = np.mean(neutrino_pdfs_mub, axis=0)
        std_pdf_nu = np.std(neutrino_pdfs_mu, axis=0)
        std_pdf_nub = np.std(neutrino_pdfs_mub, axis=0)
        path = f"{lhapdf_path}/{neutrino_pdf_fit_name_lhapdf}_{diff_lev_1}/{neutrino_pdf_fit_name_lhapdf}_0000.dat"

        pdf_dict_error[14] = std_pdf_nu
        pdf_dict_error[-14] = std_pdf_nub
        pdf_dict_central[14] = mean_pdf_nu
        pdf_dict_central[-14] = mean_pdf_nub
        write_lhapdf_grid(x_vals, pdf_dict_central, path)

        path = f"{lhapdf_path}/{neutrino_pdf_fit_name_lhapdf}_{diff_lev_1}/{neutrino_pdf_fit_name_lhapdf}_0001.dat"
        write_lhapdf_grid(x_vals, pdf_dict_error, path)

    if len(val_losses) > 0:
        val_losses = np.array(val_losses)
        with open(f"{dir_for_data}/val_losses.txt", "w") as f:
            np.savetxt(f, val_losses, delimiter=",")

    if chi_square_for_postfit.size != 0:
        with open(f"{dir_for_data}/pred.txt", "w") as f:
            np.savetxt(f, pred, delimiter=",")

        with open(f"{dir_for_data}/train_indices.txt", "w") as f:
            np.savetxt(f, train_indices, delimiter=",")
        with open(f"{dir_for_data}/val_indices.txt", "w") as f:
            np.savetxt(f, val_indices, delimiter=",")

        with open(f"{dir_for_data}/training_lengths.txt", "w") as f:
            np.savetxt(f, training_lengths, delimiter=",")

    if produce_plot:
        if num_output_layers == 1:
            from plot_comb_pdf_cl import plot

            sig_tot = np.sqrt(stat_error**2 + sys_error**2)
            plot(
                x_vals,
                neutrino_pdfs,
                data,
                N_event_pred,
                sig_tot,
                particle_id_nu,
                low_bin,
                high_bin,
                pdf,
                pdf_set,
                dir_for_data,
            )
        if num_output_layers == 2:
            from plot_nu_nub_cl import plot

            sig_tot = np.sqrt(stat_error**2 + sys_error**2)
            plot(
                x_vals,
                neutrino_pdfs_mu,
                neutrino_pdfs_mub,
                data,
                N_event_pred_nu,
                N_event_pred_nub,
                sig_tot,
                particle_id_nu,
                low_bin_mu,
                high_bin_mu,
                low_bin_mub,
                high_bin_mub,
                pdf,
                pdf_set,
                dir_for_data,
            )

🎯 Hyperparameter optimization

perform_hyperopt.py

hyperopt_comb.py

perform_fit(pred, range_alpha, range_beta, range_gamma, lr, wd, patience, x_alphas, fk_tables, binwidths, cov_matrix, extended_loss, activation_function, num_input_layers, num_output_layers, hidden_layers, x_vals, preproc, max_epochs, lag_mult_pos, lag_mult_int, x_int, num_folds)

Trains a neural network model to fit pseudo-data for electron neutrino event predictions using K-fold cross-validation and physics-informed constraints.

This function fits a parameterized neural network (PreprocessedMLP) using a custom loss function that incorporates statistical and physical constraints such as positivity and normalization. K-fold cross-validation is used to evaluate the model's generalization performance across different data splits. Random initialization of the preprocessing parameters (alpha, beta, gamma) enables exploration of a hyperparameter space.

Parameters

pred : List[np.ndarray]
    List containing prediction arrays (pseudo-data) for electron neutrino event counts.
range_alpha : float
    Maximum value for randomly sampling the alpha preprocessing parameter.
range_beta : float
    Maximum value for randomly sampling the beta preprocessing parameter.
range_gamma : float
    Maximum value for randomly sampling the gamma preprocessing parameter.
lr : float
    Learning rate for the Adam optimizer.
wd : float
    Weight decay (L2 regularization) used during optimization.
patience : int
    Early stopping patience threshold (number of epochs without improvement before stopping).
x_alphas : torch.Tensor
    Input tensor used to evaluate the model's predicted PDFs.
fk_tables : torch.Tensor
    Forward-folding kernel that maps PDF space to observable event space.
binwidths : torch.Tensor
    Bin widths used to scale the convolved predictions.
cov_matrix : np.ndarray
    Covariance matrix of the pseudo-data, used for uncertainty-aware loss computation.
extended_loss : bool
    Whether to include extended physics constraints (e.g., positivity, integrals) in the loss.
activation_function : str
    Name of the activation function to be used in the MLP (e.g., 'relu', 'tanh').
num_input_layers : int
    Number of input neurons to the network (typically 1 for univariate PDFs).
num_output_layers : int
    Number of output neurons (typically 1 for electron neutrinos).
hidden_layers : List[int]
    List of hidden layer sizes (e.g., [50, 50] for a 2-layer MLP with 50 neurons each).
x_vals : np.ndarray
    Input values over which the final PDF predictions will be evaluated.
preproc : str
    Type of preprocessing function used on the PDFs (e.g., 'log', 'powerlaw').
max_epochs : int
    Maximum number of training epochs per fold.
lag_mult_pos : float
    Lagrange multiplier for the positivity constraint in the loss.
lag_mult_int : float
    Lagrange multiplier for the integral (normalization) constraint in the loss.
x_int : np.ndarray
    Input values used for evaluating the integral constraints on the PDF.
num_folds : int
    Number of folds used for K-fold cross-validation.

Returns

chi_squares : List[float]
    History of chi-squared values during training (saved periodically).
N_event_pred : List[np.ndarray]
    Placeholder for predicted event counts (not currently populated in this version).
neutrino_pdfs : List[np.ndarray]
    Placeholder for final PDF outputs (not currently populated in this version).
model : PreprocessedMLP
    Trained neural network model from the final fold.
chi_square_for_postfit : List[float]
    Final loss value (chi-squared) for each fold.
train_indices : np.ndarray
    Indices of the training samples used in the final fold.
val_indices : np.ndarray
    Indices of the validation samples used in the final fold.
training_length : int
    Number of epochs completed during the final fold training.

Notes

  • The function uses K-fold cross-validation (num_folds folds; 3 in the original setup) to evaluate generalization. A sketch of the fold bookkeeping follows these notes.
  • Preprocessing parameters (alpha, beta, gamma) are randomized for each fold.
  • This implementation supports only one prediction channel and assumes symmetric treatment of integrals and positivity constraints.
  • N_event_pred and neutrino_pdfs are currently not returned meaningfully.
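
The sketch below illustrates the fold bookkeeping referenced above, assuming shuffled indices split into num_folds contiguous chunks; the module may use a different splitter, so this is illustrative rather than the package's exact implementation:

import numpy as np

def kfold_indices(n_samples, num_folds, seed=0):
    """Yield (train_indices, val_indices) pairs for each of num_folds folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), num_folds)
    for k in range(num_folds):
        val_idx = folds[k]
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != k])
        yield train_idx, val_idx

for train_idx, val_idx in kfold_indices(12, num_folds=3):
    print(len(train_idx), len(val_idx))  # prints "8 4" for each of the 3 folds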
Source code in NN_fit/hyperopt_comb.py
def perform_fit(
    pred: List[np.ndarray],
    range_alpha: float,
    range_beta: float,
    range_gamma: float,
    lr: float,
    wd: float,
    patience: int,
    x_alphas: torch.Tensor,
    fk_tables: torch.Tensor,
    binwidths: torch.Tensor,
    cov_matrix: np.ndarray,
    extended_loss: bool,
    activation_function: str,
    num_input_layers: int,
    num_output_layers: int,
    hidden_layers: List[int],
    x_vals: np.ndarray,
    preproc: str,
    max_epochs: int,
    lag_mult_pos: float,
    lag_mult_int: float,
    x_int: np.ndarray,
    num_folds: int,
) -> float:  # mean k-fold loss; the function body returns np.mean(k_fold_losses)
    """
    Trains a neural network model to fit pseudo-data for electron neutrino event predictions
    using K-fold cross-validation and physics-informed constraints.

    This function fits a parameterized neural network (PreprocessedMLP) using a custom loss function
    that incorporates statistical and physical constraints such as positivity and normalization.
    K-fold cross-validation is used to evaluate the model's generalization performance across
    different data splits. Random initialization of the preprocessing parameters (alpha, beta, gamma)
    enables exploration of a hyperparameter space.

    Parameters
    ----------
    pred : List[np.ndarray]
        List containing prediction arrays (pseudo-data) for electron neutrino event counts.
    range_alpha : float
        Maximum value for randomly sampling the alpha preprocessing parameter.
    range_beta : float
        Maximum value for randomly sampling the beta preprocessing parameter.
    range_gamma : float
        Maximum value for randomly sampling the gamma preprocessing parameter.
    lr : float
        Learning rate for the Adam optimizer.
    wd : float
        Weight decay (L2 regularization) used during optimization.
    patience : int
        Early stopping patience threshold (number of epochs without improvement before stopping).
    x_alphas : torch.Tensor
        Input tensor used to evaluate the model's predicted PDFs.
    fk_tables : torch.Tensor
        Forward-folding kernel that maps PDF space to observable event space.
    binwidths : torch.Tensor
        Bin widths used to scale the convolved predictions.
    cov_matrix : np.ndarray
        Covariance matrix of the pseudo-data, used for uncertainty-aware loss computation.
    extended_loss : bool
        Whether to include extended physics constraints (e.g., positivity, integrals) in the loss.
    activation_function : str
        Name of the activation function to be used in the MLP (e.g., 'relu', 'tanh').
    num_input_layers : int
        Number of input neurons to the network (typically 1 for univariate PDFs).
    num_output_layers : int
        Number of output neurons (typically 1 for electron neutrinos).
    hidden_layers : List[int]
        List of hidden layer sizes (e.g., [50, 50] for a 2-layer MLP with 50 neurons each).
    x_vals : np.ndarray
        Input values over which the final PDF predictions will be evaluated.
    preproc : str
        Type of preprocessing function used on the PDFs (e.g., 'log', 'powerlaw').
    max_epochs : int
        Maximum number of training epochs per fold.
    lag_mult_pos : float
        Lagrange multiplier for the positivity constraint in the loss.
    lag_mult_int : float
        Lagrange multiplier for the integral (normalization) constraint in the loss.
    x_int : np.ndarray
        Input values used for evaluating the integral constraints on the PDF.
    num_folds : int
        Number of folds used for k-fold cross-validation.

    Returns
    -------
    float
        Mean k-fold loss (chi-squared) averaged over the held-out test folds.

    Notes
    -----
    - The function uses num_folds-fold cross-validation to evaluate generalization.
    - Preprocessing parameters (alpha, beta, gamma) are randomized for each fold.
    - This implementation supports only one prediction channel and assumes
      symmetric treatment of integrals and positivity constraints.
    - Unlike the fitting variant in perform_fit_comb.py, event predictions and
      PDFs are not returned; only the mean k-fold loss is.
    """
    (
        chi_squares,
        k_fold_losses,
    ) = (
        [],
        [],
    )
    x_vals = torch.tensor(x_vals, dtype=torch.float32).view(-1, 1)

    x_int = torch.tensor(x_int, dtype=torch.float32).view(-1, 1)

    indices = np.arange(pred[0].shape[0])
    k = num_folds
    kf = KFold(n_splits=k, shuffle=True, random_state=42)

    folds = []
    for train_index, test_index in kf.split(indices):
        folds.append((train_index, test_index))

    for j in range(k):
        train_indices = folds[j][0]
        val_size = max(1, int(0.1 * len(train_indices)))
        val_indices = np.random.choice(train_indices, size=val_size, replace=False)
        k_fold_indices = folds[j][1]

        alpha, beta, gamma = (
            np.random.rand() * range_alpha,
            np.random.rand() * range_beta,
            np.random.rand() * range_gamma,
        )

        model = PreprocessedMLP(
            alpha,
            beta,
            gamma,
            activation_function,
            hidden_layers,
            num_input_layers,
            num_output_layers,
            preproc,
        )

        criterion = CustomLoss(
            extended_loss,
            num_output_layers,
        )

        optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)

        losses = []

        model.train()
        best_loss = 1e13  # initial loss
        counter = 0
        training_length = 0

        while counter < patience:
            if max_epochs < training_length:
                break

            training_length += 1

            optimizer.zero_grad()
            y_pred = model(x_alphas)

            y_preds = torch.matmul(fk_tables, y_pred[:, 0]) * binwidths.flatten()
            y_preds = y_preds.squeeze()
            y_int_mu = model(x_int)
            y_int_mub = y_int_mu

            y_train = y_preds[train_indices]
            pred_train = pred[0][train_indices]
            cov_matrix_train = cov_matrix[train_indices][:, train_indices]

            loss = criterion(
                y_train,
                pred_train,
                cov_matrix_train,
                y_int_mu,
                y_int_mub,
                y_pred,
                x_int,
                lag_mult_pos,
                lag_mult_int,
            )
            loss.backward()

            y_val = y_preds[val_indices]
            pred_val = pred[0][val_indices]
            cov_matrix_val = cov_matrix[val_indices][:, val_indices]

            loss_val = criterion(
                y_val,
                pred_val,
                cov_matrix_val,
                y_int_mu,
                y_int_mub,
                y_pred,
                x_int,
                lag_mult_pos,
                lag_mult_int,
            )

            if training_length % 500 == 0:
                chi_squares.append(loss.detach().numpy())
                print(loss.detach().numpy())

            losses.append(loss.detach().numpy())
            optimizer.step()

            if loss_val < best_loss:
                best_loss = loss_val
                counter = 0
            else:
                counter += 1

        y_k_fold = y_preds[k_fold_indices]
        pred_k_fold = pred[0][k_fold_indices]
        cov_matrix_k_fold = cov_matrix[k_fold_indices][:, k_fold_indices]

        loss_k_fold = criterion(
            y_k_fold,
            pred_k_fold,
            cov_matrix_k_fold,
            y_int_mu,
            y_int_mub,
            y_pred,
            x_int,
            lag_mult_pos,
            lag_mult_int,
        )

        k_fold_loss = loss_k_fold.item()
        k_fold_losses.append(k_fold_loss)
        print("k_fold_losses")
        print(k_fold_losses)

    return np.mean(k_fold_losses)

hyperopt_nu_nub.py

perform_fit(pred, range_alpha, range_beta, range_gamma, lr, wd, patience, x_alphas, fk_tables_mu, fk_tables_mub, binwidths_mu, binwidths_mub, cov_matrix, extended_loss, activation_function, num_input_layers, num_output_layers, hidden_layers, x_vals, preproc, max_epochs, lag_mult_pos, lag_mult_int, x_int, num_folds)

Performs k-fold cross-validation training and evaluation of a neural network for muon neutrino flux prediction using randomly sampled preprocessing parameters (alpha, beta, gamma).

This function trains a PreprocessedMLP model to predict neutrino and antineutrino event distributions, optimizing a custom loss function that incorporates physical constraints and covariance information. It performs training with early stopping based on validation loss, and evaluates generalization via a held-out fold.

Parameters:

pred : List[np.ndarray]
    List containing pseudo-data neutrino event counts used as fit targets.
range_alpha : float
    Upper bound for random initialization of the alpha hyperparameter.
range_beta : float
    Upper bound for random initialization of the beta hyperparameter.
range_gamma : float
    Upper bound for random initialization of the gamma hyperparameter.
lr : float
    Learning rate for the optimizer.
wd : float
    Weight decay (L2 regularization) for the optimizer.
patience : int
    Number of epochs to wait without validation improvement before early stopping.
x_alphas : torch.Tensor
    Input tensor used for prediction by the model.
fk_tables_mu : torch.Tensor
    Forward-folding kernel table for muon neutrinos.
fk_tables_mub : torch.Tensor
    Forward-folding kernel table for anti-muon neutrinos.
binwidths_mu : torch.Tensor
    Bin widths used for muon neutrino predictions.
binwidths_mub : torch.Tensor
    Bin widths used for anti-muon neutrino predictions.
cov_matrix : np.ndarray
    Covariance matrix for uncertainty propagation in loss computation.
extended_loss : bool
    Whether to include extended regularization terms in the custom loss.
activation_function : str
    Activation function to use in the model (e.g., "relu", "tanh").
num_input_layers : int
    Number of input features/layers for the model.
num_output_layers : int
    Number of outputs (e.g., neutrino types).
hidden_layers : List[int]
    Sizes of hidden layers in the MLP.
x_vals : np.ndarray
    Input data values used for training and evaluation.
preproc : str
    Type of input preprocessing to apply (e.g., "standard", "log").
max_epochs : int
    Maximum number of training epochs per fold.
lag_mult_pos : float
    Lagrange multiplier weight for positivity constraint in the loss.
lag_mult_int : float
    Lagrange multiplier weight for integral constraint in the loss.
x_int : np.ndarray
    Points at which the integrals for regularization are evaluated.
num_folds : int
    Number of folds used for k-fold cross-validation.

Returns:

float
    Mean k-fold loss (chi-squared) averaged over the held-out test folds. As in hyperopt_comb.py, the function body ends with np.mean(k_fold_losses), so no per-fit tuple is returned.
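Because the function reduces each trial to a single scalar (the mean k-fold loss), it can be plugged directly into a hyperparameter-search driver. The driver is not shown in this reference, so the sketch below is an assumption about usage: it wires a toy objective into the hyperopt package (an extra dependency) in place of an actual, expensive call to perform_fit.

```python
import numpy as np
from hyperopt import fmin, hp, tpe

def objective(params):
    # In real use this would splice params["lr"], params["wd"] into the
    # perform_fit(...) argument list and return its mean k-fold loss.
    return (np.log10(params["lr"]) + 3) ** 2 + (np.log10(params["wd"]) + 5) ** 2

space = {
    "lr": hp.loguniform("lr", np.log(1e-5), np.log(1e-1)),
    "wd": hp.loguniform("wd", np.log(1e-7), np.log(1e-2)),
}

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=20)
print(best)  # best-found lr and wd
```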

Source code in NN_fit/hyperopt_nu_nub.py
def perform_fit(
    pred: List[np.ndarray],
    range_alpha: float,
    range_beta: float,
    range_gamma: float,
    lr: float,
    wd: float,
    patience: int,
    x_alphas: torch.Tensor,
    fk_tables_mu: torch.Tensor,
    fk_tables_mub: torch.Tensor,
    binwidths_mu: torch.Tensor,
    binwidths_mub: torch.Tensor,
    cov_matrix: np.ndarray,
    extended_loss: bool,
    activation_function: str,
    num_input_layers: int,
    num_output_layers: int,
    hidden_layers: List[int],
    x_vals: np.ndarray,
    preproc: str,
    max_epochs: int,
    lag_mult_pos: float,
    lag_mult_int: float,
    x_int: np.ndarray,
    num_folds: int,
) -> float:  # mean k-fold loss; the function body returns np.mean(k_fold_losses)
    """
    Performs k-fold cross-validation training and evaluation of a neural network
    for muon neutrino flux prediction using randomly sampled preprocessing parameters.

    This function trains a `PreprocessedMLP` model to predict neutrino and antineutrino
    event distributions, optimizing a custom loss function that incorporates physical
    constraints and covariance information. It performs training with early stopping
    based on validation loss, and evaluates generalization via a held-out fold.

    Parameters:
    ----------
    pred : List[np.ndarray]
        List containing ground truth predicted neutrino events for training and evaluation.
    range_alpha : float
        Upper bound for random initialization of the alpha hyperparameter.
    range_beta : float
        Upper bound for random initialization of the beta hyperparameter.
    range_gamma : float
        Upper bound for random initialization of the gamma hyperparameter.
    lr : float
        Learning rate for the optimizer.
    wd : float
        Weight decay (L2 regularization) for the optimizer.
    patience : int
        Number of epochs to wait without validation improvement before early stopping.
    x_alphas : torch.Tensor
        Input tensor used for prediction by the model.
    fk_tables_mu : torch.Tensor
        Forward-folding kernel table for muon neutrinos.
    fk_tables_mub : torch.Tensor
        Forward-folding kernel table for anti-muon neutrinos.
    binwidths_mu : torch.Tensor
        Bin widths used for muon neutrino predictions.
    binwidths_mub : torch.Tensor
        Bin widths used for anti-muon neutrino predictions.
    cov_matrix : np.ndarray
        Covariance matrix for uncertainty propagation in loss computation.
    extended_loss : bool
        Whether to include extended regularization terms in the custom loss.
    activation_function : str
        Activation function to use in the model (e.g., "relu", "tanh").
    num_input_layers : int
        Number of input features/layers for the model.
    num_output_layers : int
        Number of outputs (e.g., neutrino types).
    hidden_layers : List[int]
        Sizes of hidden layers in the MLP.
    x_vals : np.ndarray
        Input data values used for training and evaluation.
    preproc : str
        Type of input preprocessing to apply (e.g., "standard", "log").
    max_epochs : int
        Maximum number of training epochs per fold.
    lag_mult_pos : float
        Lagrange multiplier weight for positivity constraint in the loss.
    lag_mult_int : float
        Lagrange multiplier weight for integral constraint in the loss.
    x_int : np.ndarray
        Points at which the integrals for regularization are evaluated.
    num_folds: int
        Number of k-folds

    Returns
    -------
    float
        Mean k-fold loss (chi-squared) averaged over the held-out test folds.
    """
    (
        chi_squares,
        k_fold_losses,
    ) = (
        [],
        [],
    )
    x_vals = torch.tensor(x_vals, dtype=torch.float32).view(-1, 1)
    x_int = torch.tensor(x_int, dtype=torch.float32).view(-1, 1)

    indices = np.arange(pred[0].shape[0])
    k = num_folds
    kf = KFold(n_splits=k, shuffle=True, random_state=42)

    folds = []
    for train_index, test_index in kf.split(indices):
        folds.append((train_index, test_index))

    for j in range(k):
        train_indices = folds[j][0]
        val_size = max(1, int(0.1 * len(train_indices)))
        val_indices = np.random.choice(train_indices, size=val_size, replace=False)
        k_fold_indices = folds[j][1]

        alpha, beta, gamma = (
            np.random.rand() * range_alpha,
            np.random.rand() * range_beta,
            np.random.rand() * range_gamma,
        )

        model = PreprocessedMLP(
            alpha,
            beta,
            gamma,
            activation_function,
            hidden_layers,
            num_input_layers,
            num_output_layers,
            preproc,
        )

        criterion = CustomLoss(
            extended_loss,
            num_output_layers,
        )

        optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)

        losses = []

        model.train()
        best_loss = 1e13  # initial loss
        counter = 0
        training_length = 0

        while counter < patience:
            if max_epochs < training_length:
                break

            training_length += 1

            optimizer.zero_grad()
            y_pred = model(x_alphas)

            y_pred_mu = (
                torch.matmul(fk_tables_mu, y_pred[:, 0]) * binwidths_mu.flatten()
            )
            y_pred_mub = (
                torch.matmul(fk_tables_mub, y_pred[:, 1]) * binwidths_mub.flatten()
            )

            y_pred_mu = y_pred_mu.squeeze()
            y_pred_mub = y_pred_mub.squeeze()

            y_preds = torch.hstack((y_pred_mu, y_pred_mub))

            y_train = y_preds[train_indices]
            pred_train = pred[0][train_indices]
            cov_matrix_train = cov_matrix[train_indices][:, train_indices]
            y_int_mu = model(x_int)[:, 0]
            y_int_mub = model(x_int)[:, 1]

            loss = criterion(
                y_train,
                pred_train,
                cov_matrix_train,
                y_int_mu,
                y_int_mub,
                y_pred,
                x_int,
                lag_mult_pos,
                lag_mult_int,
            )
            loss.backward()

            y_val = y_preds[val_indices]
            pred_val = pred[0][val_indices]
            cov_matrix_val = cov_matrix[val_indices][:, val_indices]

            loss_val = criterion(
                y_val,
                pred_val,
                cov_matrix_val,
                y_int_mu,
                y_int_mub,
                y_pred,
                x_int,
                lag_mult_pos,
                lag_mult_int,
            )

            if training_length % 500 == 0:
                chi_squares.append(loss.detach().numpy())
                print(loss.detach().numpy())

            losses.append(loss.detach().numpy())
            optimizer.step()

            if loss_val < best_loss:
                best_loss = loss_val
                counter = 0
            else:
                counter += 1

        y_k_fold = y_preds[k_fold_indices]
        pred_k_fold = pred[0][k_fold_indices]
        cov_matrix_k_fold = cov_matrix[k_fold_indices][:, k_fold_indices]

        loss_k_fold = criterion(
            y_k_fold,
            pred_k_fold,
            cov_matrix_k_fold,
            y_int_mu,
            y_int_mub,
            y_pred,
            x_int,
            lag_mult_pos,
            lag_mult_int,
        )

        k_fold_loss = loss_k_fold.item()
        k_fold_losses.append(k_fold_loss)
        print("k_fold_losses")
        print(k_fold_losses)

    return np.mean(k_fold_losses)

๐Ÿง  Model Architecture

structure_NN.py

CustomLoss

Bases: Module

Custom loss function wrapper supporting multiple modes: raw chi-squared, or extended with constraints.

Parameters

extended_loss : bool
    If True, uses extended loss with positivity and normalization constraints.
num_output_layers : int
    Determines loss mode: 1 for combined ν+ν̄, 2 for separate ν and ν̄ losses.

Methods

forward(pred, data, cov_matrix, small_x_point1, small_x_point2, model, x_int, lag_mult_pos, lag_mult_int)
    Computes the loss.
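A minimal usage sketch, assuming NN_fit.structure_NN is importable. With extended_loss=False the forward pass reduces to raw_loss_fct(pred, data, cov_matrix), so the constraint-related arguments are accepted but unused; the dummy values below only fill the signature, and the covariance is a NumPy array as in the training loops shown in this reference.

```python
import numpy as np
import torch
from NN_fit.structure_NN import CustomLoss  # assumes NN_fit is on the path

criterion = CustomLoss(extended_loss=False, num_output_layers=1)

pred = torch.tensor([10.0, 20.0, 30.0])  # model prediction
data = torch.tensor([11.0, 19.0, 29.0])  # pseudo-data target
cov = np.diag([1.0, 4.0, 9.0])           # cov_matrix is an np.ndarray in the fit loop

dummy = torch.zeros(1)  # fillers for the constraint arguments, unused in raw mode
loss = criterion(pred, data, cov, dummy, dummy, None, dummy, 0.0, 0.0)
print(loss)
```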

Source code in NN_fit/structure_NN.py
class CustomLoss(nn.Module):
    """
    Custom loss function wrapper supporting multiple modes: raw chi-squared, or extended with constraints.

    Parameters
    ----------
    extended_loss : bool
        If True, uses extended loss with positivity and normalization constraints.
    num_output_layers : int
        Determines loss mode: 1 for combined ฮฝ+ฮฝฬ„, 2 for separate ฮฝ and ฮฝฬ„ losses.

    Methods
    -------
    forward(pred, data, cov_matrix, small_x_point1, small_x_point2, model, x_int, lag_mult_pos, lag_mult_int)
        Computes the loss.
    """

    def __init__(self, extended_loss: bool, num_output_layers: int):
        super(CustomLoss, self).__init__()
        self.extended_loss = extended_loss
        self.num_output_layers = num_output_layers

    def forward(
        self,
        pred: torch.Tensor,
        data: torch.Tensor,
        cov_matrix: torch.Tensor,
        small_x_point1: torch.Tensor,
        small_x_point2: torch.Tensor,
        model: torch.Tensor,
        x_int: torch.Tensor,
        lag_mult_pos: float,
        lag_mult_int: float,
    ) -> torch.Tensor:
        """
        Computes the loss function.

        Parameters
        ----------
        pred : torch.Tensor
            Model prediction.
        data : torch.Tensor
            Target data (pseudo-data).
        cov_matrix : np.ndarray or torch.Tensor
            Covariance matrix for data.
        small_x_point1 : torch.Tensor
            Neural prediction before preprocessing (ฮฝ or combined).
        small_x_point2 : torch.Tensor
            Neural prediction before preprocessing (ฮฝฬ„, if using two outputs).
        model : torch.nn.Module
            Reference to the model (used for constraints).
        x_int : torch.Tensor
            x-points for integral constraint.
        lag_mult_pos : float
            Lagrange multiplier for positivity.
        lag_mult_int : float
            Lagrange multiplier for integral constraint.

        Returns
        -------
        torch.Tensor
            Final loss scalar.
        """
        if self.extended_loss:
            if self.num_output_layers == 1:
                loss = complete_loss_fct_comb(
                    pred,
                    data,
                    cov_matrix,
                    small_x_point1,
                    model,
                    x_int,
                    lag_mult_pos,
                    lag_mult_int,
                )
            if self.num_output_layers == 2:
                loss = complete_loss_fct_nu_nub(
                    pred,
                    data,
                    cov_matrix,
                    small_x_point1,
                    small_x_point2,
                    model,
                    x_int,
                    lag_mult_pos,
                    lag_mult_int,
                )

        else:
            loss = raw_loss_fct(pred, data, cov_matrix)
        return loss

forward(pred, data, cov_matrix, small_x_point1, small_x_point2, model, x_int, lag_mult_pos, lag_mult_int)

Computes the loss function.

Parameters

pred : torch.Tensor
    Model prediction.
data : torch.Tensor
    Target data (pseudo-data).
cov_matrix : np.ndarray or torch.Tensor
    Covariance matrix for data.
small_x_point1 : torch.Tensor
    Neural prediction before preprocessing (ν or combined).
small_x_point2 : torch.Tensor
    Neural prediction before preprocessing (ν̄, if using two outputs).
model : torch.nn.Module
    Reference to the model (used for constraints).
x_int : torch.Tensor
    x-points for integral constraint.
lag_mult_pos : float
    Lagrange multiplier for positivity.
lag_mult_int : float
    Lagrange multiplier for integral constraint.

Returns

torch.Tensor Final loss scalar.

Source code in NN_fit/structure_NN.py
def forward(
    self,
    pred: torch.Tensor,
    data: torch.Tensor,
    cov_matrix: torch.Tensor,
    small_x_point1: torch.Tensor,
    small_x_point2: torch.Tensor,
    model: torch.Tensor,
    x_int: torch.Tensor,
    lag_mult_pos: float,
    lag_mult_int: float,
) -> torch.Tensor:
    """
    Computes the loss function.

    Parameters
    ----------
    pred : torch.Tensor
        Model prediction.
    data : torch.Tensor
        Target data (pseudo-data).
    cov_matrix : np.ndarray or torch.Tensor
        Covariance matrix for data.
    small_x_point1 : torch.Tensor
        Neural prediction before preprocessing (ฮฝ or combined).
    small_x_point2 : torch.Tensor
        Neural prediction before preprocessing (ฮฝฬ„, if using two outputs).
    model : torch.nn.Module
        Reference to the model (used for constraints).
    x_int : torch.Tensor
        x-points for integral constraint.
    lag_mult_pos : float
        Lagrange multiplier for positivity.
    lag_mult_int : float
        Lagrange multiplier for integral constraint.

    Returns
    -------
    torch.Tensor
        Final loss scalar.
    """
    if self.extended_loss:
        if self.num_output_layers == 1:
            loss = complete_loss_fct_comb(
                pred,
                data,
                cov_matrix,
                small_x_point1,
                model,
                x_int,
                lag_mult_pos,
                lag_mult_int,
            )
        if self.num_output_layers == 2:
            loss = complete_loss_fct_nu_nub(
                pred,
                data,
                cov_matrix,
                small_x_point1,
                small_x_point2,
                model,
                x_int,
                lag_mult_pos,
                lag_mult_int,
            )

    else:
        loss = raw_loss_fct(pred, data, cov_matrix)
    return loss

CustomPreprocessing

Bases: Module

Applies a parameterized functional preprocessing to the input based on power-law forms.

The form is

f(x) = ฮณ * (1 - x)^ฮฒ * x^(1 - ฮฑ)

Parameters

alpha : float
    Initial value for α parameter.
beta : float
    Initial value for β parameter.
gamma : float
    Initial value for γ parameter.
preproc : bool
    If True, alpha, beta, and gamma are learnable parameters. Otherwise, they are fixed.

Notes

  • Input values are clamped between [1e-6, 1 - 1e-6] for numerical stability.
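The factor is simple enough to reproduce standalone. The sketch below mirrors CustomPreprocessing.forward, including the clamp and the ReLU applied to β so that (1 - x)**β stays well behaved near x = 1:

```python
import torch

def preprocess(x, alpha, beta, gamma):
    # Mirrors CustomPreprocessing.forward: clamp x for numerical stability,
    # clip beta to be non-negative before exponentiation.
    x = torch.clamp(x, 1e-6, 1 - 1e-6)
    beta = torch.relu(torch.as_tensor(beta, dtype=x.dtype))
    return gamma * (1 - x) ** beta * x ** (1 - alpha)

x = torch.linspace(0.0, 1.0, 5).view(-1, 1)
print(preprocess(x, alpha=0.5, beta=3.0, gamma=1.2))
```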
Source code in NN_fit/structure_NN.py
class CustomPreprocessing(nn.Module):
    """
    Applies a parameterized functional preprocessing to the input based on power-law forms.

    The form is:
        f(x) = ฮณ * (1 - x)^ฮฒ * x^(1 - ฮฑ)

    Parameters
    ----------
    alpha : float
        Initial value for ฮฑ parameter.
    beta : float
        Initial value for ฮฒ parameter.
    gamma : float
        Initial value for ฮณ parameter.
    preproc : bool
        If True, alpha, beta, and gamma are learnable parameters. Otherwise, they are fixed.

    Notes
    -----
    - Input values are clamped between [1e-6, 1 - 1e-6] for numerical stability.
    """

    def __init__(
        self,
        alpha: float,
        beta: float,
        gamma: float,
        preproc: bool,
    ):
        super(CustomPreprocessing, self).__init__()

        if preproc:
            self.alpha = nn.Parameter(
                torch.tensor(alpha, dtype=torch.float32, requires_grad=True)
            )
            self.beta = nn.Parameter(
                torch.tensor(beta, dtype=torch.float32, requires_grad=True)
            )
            self.gamma = nn.Parameter(
                torch.tensor(gamma, dtype=torch.float32, requires_grad=True)
            )
        else:
            self.register_buffer("alpha", torch.tensor(alpha, dtype=torch.float32))
            self.register_buffer("beta", torch.tensor(beta, dtype=torch.float32))
            self.register_buffer("gamma", torch.tensor(gamma, dtype=torch.float32))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """
        Applies the preprocessing function.

        Parameters
        ----------
        x : torch.Tensor
            Input tensor of shape (N, 1).

        Returns
        -------
        torch.Tensor
            Preprocessed output of same shape.
        """
        x = torch.clamp(x, 1e-6, 1 - 1e-6)
        beta = torch.nn.functional.relu(self.beta)

        return self.gamma * (1 - x) ** beta * x ** (1 - self.alpha)

forward(x)

Applies the preprocessing function.

Parameters

x : torch.Tensor Input tensor of shape (N, 1).

Returns

torch.Tensor Preprocessed output of same shape.

Source code in NN_fit/structure_NN.py
def forward(self, x: torch.Tensor) -> torch.Tensor:
    """
    Applies the preprocessing function.

    Parameters
    ----------
    x : torch.Tensor
        Input tensor of shape (N, 1).

    Returns
    -------
    torch.Tensor
        Preprocessed output of same shape.
    """
    x = torch.clamp(x, 1e-6, 1 - 1e-6)
    beta = torch.nn.functional.relu(self.beta)

    return self.gamma * (1 - x) ** beta * x ** (1 - self.alpha)

PreprocessedMLP

Bases: Module

A neural network combining a preprocessing layer with an MLP.

Parameters

alpha, beta, gamma : float
    Parameters for the preprocessing function.
activation_function : list of str
    Activation functions for the MLP.
hidden_layers : list of int
    Sizes of hidden layers.
num_input_layers : int
    Number of input features.
num_output_layers : int
    Number of output features.
preproc : bool
    Whether to use preprocessing.

Attributes

preprocessing : CustomPreprocessing
    The preprocessing module applied before the MLP.
mlp : SimplePerceptron
    The neural network model applied to the preprocessed inputs.

Methods

forward(x)
    Applies preprocessing (if enabled) followed by the MLP.
neuralnet(x)
    Returns raw MLP output (no preprocessing).
preproces(x)
    Returns the preprocessing factor only.
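A usage sketch, assuming NN_fit.structure_NN is importable. It exercises all three entry points and checks the documented factorization forward(x) = preproces(x) * neuralnet(x) when preproc=True:

```python
import torch
from NN_fit.structure_NN import PreprocessedMLP  # assumes NN_fit is on the path

model = PreprocessedMLP(
    alpha=0.5, beta=3.0, gamma=1.0,
    activation_function=["tanh", "tanh", "softplus"],  # one name per Linear layer
    hidden_layers=[25, 20],
    num_input_layers=1,
    num_output_layers=1,
    preproc=True,
)

x = torch.linspace(1e-3, 0.9, 10).view(-1, 1)
f_nu = model(x)             # preprocessing factor * MLP output
f_raw = model.neuralnet(x)  # MLP only
f_pre = model.preproces(x)  # preprocessing factor only
assert torch.allclose(f_nu, f_pre * f_raw)
```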

Source code in NN_fit/structure_NN.py
class PreprocessedMLP(nn.Module):
    """
    A neural network combining a preprocessing layer with an MLP.

    Parameters
    ----------
    alpha, beta, gamma : float
        Parameters for the preprocessing function.
    activation_function : list of str
        Activation functions for the MLP.
    hidden_layers : list of int
        Sizes of hidden layers.
    num_input_layers : int
        Number of input features.
    num_output_layers : int
        Number of output features.
    preproc : bool
        Whether to use preprocessing.

    Attributes
    ----------
    preprocessing : CustomPreprocessing
        The preprocessing module applied before the MLP.
    mlp : SimplePerceptron
        The neural network model applied to the preprocessed inputs.

    Methods
    -------
    forward(x)
        Applies preprocessing (if enabled) followed by the MLP.
    neuralnet(x)
        Returns raw MLP output (no preprocessing).
    preproces(x)
        Returns the preprocessing factor only.
    """

    def __init__(
        self,
        alpha: float,
        beta: float,
        gamma: float,
        activation_function: list[str],
        hidden_layers: list[int],
        num_input_layers: int,
        num_output_layers: int,
        preproc: bool,
    ):
        super(PreprocessedMLP, self).__init__()

        self.preprocessing = CustomPreprocessing(alpha, beta, gamma, preproc)
        self.mlp = SimplePerceptron(
            activation_function, num_input_layers, hidden_layers, num_output_layers
        )
        self.preproc = preproc

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """
        Full forward pass through preprocessing and MLP.

        Parameters
        ----------
        x : torch.Tensor
            Input tensor.

        Returns
        -------
        torch.Tensor
            Output of the combined preprocessing and MLP.
        """
        if self.preproc:
            f_preproc = self.preprocessing(x)
            f_NN = self.mlp(x)
            f_nu = f_preproc * f_NN
            return f_nu
        else:
            f_NN = self.mlp(x)
            return f_NN

    def neuralnet(self, x: torch.Tensor) -> torch.Tensor:
        """
        Forward pass through only the MLP (no preprocessing).

        Parameters
        ----------
        x : torch.Tensor

        Returns
        -------
        torch.Tensor
        """
        f_NN = self.mlp(x)
        return f_NN

    def preproces(self, x: torch.Tensor) -> torch.Tensor:
        """
        Returns the preprocessing term ฮณ * (1 - x)^ฮฒ * x^(1 - ฮฑ)

        Parameters
        ----------
        x : torch.Tensor

        Returns
        -------
        torch.Tensor
        """
        f_preproc = self.preprocessing(x)
        return f_preproc

forward(x)

Full forward pass through preprocessing and MLP.

Parameters

x : torch.Tensor Input tensor.

Returns

torch.Tensor Output of the combined preprocessing and MLP.

Source code in NN_fit/structure_NN.py
def forward(self, x: torch.Tensor) -> torch.Tensor:
    """
    Full forward pass through preprocessing and MLP.

    Parameters
    ----------
    x : torch.Tensor
        Input tensor.

    Returns
    -------
    torch.Tensor
        Output of the combined preprocessing and MLP.
    """
    if self.preproc:
        f_preproc = self.preprocessing(x)
        f_NN = self.mlp(x)
        f_nu = f_preproc * f_NN
        return f_nu
    else:
        f_NN = self.mlp(x)
        return f_NN

neuralnet(x)

Forward pass through only the MLP (no preprocessing).

Parameters

x : torch.Tensor

Returns

torch.Tensor

Source code in NN_fit/structure_NN.py
def neuralnet(self, x: torch.Tensor) -> torch.Tensor:
    """
    Forward pass through only the MLP (no preprocessing).

    Parameters
    ----------
    x : torch.Tensor

    Returns
    -------
    torch.Tensor
    """
    f_NN = self.mlp(x)
    return f_NN

preproces(x)

Returns the preprocessing term ฮณ * (1 - x)^ฮฒ * x^(1 - ฮฑ)

Parameters

x : torch.Tensor

Returns

torch.Tensor

Source code in NN_fit/structure_NN.py
def preproces(self, x: torch.Tensor) -> torch.Tensor:
    """
    Returns the preprocessing term ฮณ * (1 - x)^ฮฒ * x^(1 - ฮฑ)

    Parameters
    ----------
    x : torch.Tensor

    Returns
    -------
    torch.Tensor
    """
    f_preproc = self.preprocessing(x)
    return f_preproc

SimplePerceptron

Bases: Module

A feedforward multilayer perceptron (MLP) with configurable activation functions and layer sizes.

Parameters

act_functions : list of str
    List of activation function names (e.g., ['relu', 'relu', 'softplus']) for each layer.
num_input_layers : int
    Number of input features.
hidden_layers : list of int
    List of integers specifying the number of units in each hidden layer.
num_output_layers : int
    Number of output features.

Attributes

layers : nn.Sequential Composed list of linear and activation layers forming the MLP.

Notes

  • Supported activation functions: 'relu', 'softplus', 'tanh', 'sigmoid' (per the activation_map in the constructor).
  • The last activation is applied after the final output layer (see the construction sketch below).
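A construction sketch, assuming NN_fit.structure_NN is importable. Three Linear layers (1 → 25 → 20 → 1) take three activation names; the final softplus keeps the output non-negative, which suits PDF-like quantities:

```python
import torch
from NN_fit.structure_NN import SimplePerceptron  # assumes NN_fit is on the path

net = SimplePerceptron(["tanh", "tanh", "softplus"], 1, [25, 20], 1)
x = torch.rand(8, 1)
print(net(x).shape)  # torch.Size([8, 1])
```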
Source code in NN_fit/structure_NN.py
class SimplePerceptron(torch.nn.Module):
    """
    A feedforward multilayer perceptron (MLP) with configurable activation functions and layer sizes.

    Parameters
    ----------
    act_functions : list of str
        List of activation function names (e.g., ['relu', 'relu', 'softplus']) for each layer.
    num_input_layers : int
        Number of input features.
    hidden_layers : list of int
        List of integers specifying the number of units in each hidden layer.
    num_output_layers : int
        Number of output features.

    Attributes
    ----------
    layers : nn.Sequential
        Composed list of linear and activation layers forming the MLP.

    Notes
    -----
    - Supported activation functions: 'relu', 'softplus', 'tanh', 'sigmoid'.
    - The last activation is applied after the final output layer.
    """

    def __init__(
        self,
        act_functions: list[str],
        num_input_layers: int,
        hidden_layers: list[int],
        num_output_layers: int,
    ):
        super().__init__()

        activation_map = {
            "softplus": nn.Softplus,
            "relu": nn.ReLU,
            "tanh": nn.Tanh,
            "sigmoid": nn.Sigmoid,
        }

        activation_names = act_functions

        act_functions = [activation_map[name]() for name in activation_names]

        layers = []

        layers.append(nn.Linear(num_input_layers, hidden_layers[0]))
        layers.append(act_functions[0])

        for i in range(0, len(hidden_layers) - 1):
            layers.append(nn.Linear(hidden_layers[i], hidden_layers[i + 1]))
            layers.append(act_functions[i])

        layers.append(nn.Linear(hidden_layers[-1], num_output_layers))
        layers.append(act_functions[-1])

        self.layers = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """
        Forward pass through the MLP.

        Parameters
        ----------
        x : torch.Tensor
            Input tensor of shape (batch_size, num_input_layers)

        Returns
        -------
        torch.Tensor
            Output tensor of shape (batch_size, num_output_layers)
        """
        return self.layers(x)

forward(x)

Forward pass through the MLP.

Parameters

x : torch.Tensor Input tensor of shape (batch_size, num_input_layers)

Returns

torch.Tensor Output tensor of shape (batch_size, num_output_layers)

Source code in NN_fit/structure_NN.py
def forward(self, x: torch.Tensor) -> torch.Tensor:
    """
    Forward pass through the MLP.

    Parameters
    ----------
    x : torch.Tensor
        Input tensor of shape (batch_size, num_input_layers)

    Returns
    -------
    torch.Tensor
        Output tensor of shape (batch_size, num_output_layers)
    """
    return self.layers(x)

๐Ÿ“ˆ Plotting and Visualization

plot_comb_pdf_cl.py

plot_diff_level1_comb.py

plot_for_diff_level_1_shifts_nu_nub.py

plot_nu_nub_cl.py


๐Ÿ“ฆ PDF Output

write_all_pdfs_to_lhapdf.py

customize_info_file(template_path, output_path, set_index, flavor, num_members)

Creates a customized LHAPDF .info file from a template by replacing placeholders.

Parameters:

template_path : str
    Path to the .info template file containing placeholders (e.g., SETINDEX, FLAVOR).
output_path : str
    Path to the output .info file to be generated.
set_index : int
    Unique identifier for the PDF set (used to replace "SETINDEX" in the template).
flavor : str
    Flavor content to be listed in the info file (used to replace "FLAVOR"). Can be a single PDG ID (e.g., "12") or a comma-separated list (e.g., "14, -14").
num_members : int
    Number of PDF members or replicas (used to replace the "NumMembers" field).

Notes:

  • The function assumes the template has default "NumMembers: 1000" and replaces that value.
  • All replaced content is written to the specified output path.
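A hedged end-to-end sketch. The template content, file names, and set index below are made up; only the SETINDEX, FLAVOR, and "NumMembers: 1000" placeholders matter to the function:

```python
from NN_fit.write_all_pdfs_to_lhapdf import customize_info_file  # assumed importable

# Hypothetical minimal template containing the expected placeholders.
with open("template.info", "w") as f:
    f.write("SetIndex: SETINDEX\nFlavors: [FLAVOR]\nNumMembers: 1000\n")

customize_info_file(
    template_path="template.info",
    output_path="faser_nu.info",  # hypothetical output name
    set_index=900001,             # hypothetical set index
    flavor="14, -14",             # muon neutrino and antineutrino PDG IDs
    num_members=100,
)
print(open("faser_nu.info").read())
```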
Source code in NN_fit/write_all_pdfs_to_lhapdf.py
def customize_info_file(
    template_path: str, output_path: str, set_index: int, flavor: str, num_members: int
) -> None:
    """
    Creates a customized LHAPDF `.info` file from a template by replacing placeholders.

    Parameters:
    -----------
    template_path : str
        Path to the `.info` template file containing placeholders (e.g., SETINDEX, FLAVOR).
    output_path : str
        Path to the output `.info` file to be generated.
    set_index : int
        Unique identifier for the PDF set (used to replace "SETINDEX" in the template).
    flavor : str
        Flavor content to be listed in the info file (used to replace "FLAVOR").
        Can be a single PDG ID (e.g., "12") or a comma-separated list (e.g., "14, -14").
    num_members : int
        Number of PDF members or replicas (used to replace the "NumMembers" field).

    Notes:
    ------
    - The function assumes the template has default "NumMembers: 1000" and replaces that value.
    - All replaced content is written to the specified output path.
    """
    with open(template_path, "r") as file:
        content = file.read()

    content = content.replace("SETINDEX", str(set_index))
    content = content.replace("FLAVOR", str(flavor))
    content = content.replace("NumMembers: 1000", f"NumMembers: {str(num_members)}")

    with open(output_path, "w") as file:
        file.write(content)

write_lhapdf_grid(xgrid, pdf_dict, path)

Writes a set of neutrino PDFs to a file in LHAPDF grid format (lhagrid1).

This function formats and saves PDF data into a file readable by LHAPDF tools, using the specified x-grid and dictionary of parton distribution functions.

Parameters:

xgrid : array-like
    Array of Bjorken-x values at which PDFs are evaluated.
pdf_dict : dict
    Dictionary mapping particle IDs (PDG codes) to arrays of PDF values. Each value should be an array of length equal to the length of xgrid.
path : str
    Path to the output .dat file where the grid will be written.

Notes:

  • The output format follows LHAPDF's lhagrid1 specification.
  • Each PDF line is duplicated, as required by the LHAPDF grid format.
  • The order of flavors is sorted by PDG code.
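A minimal usage sketch with toy values; the x-grid, PDG codes, and output file name are illustrative:

```python
import numpy as np
from NN_fit.write_all_pdfs_to_lhapdf import write_lhapdf_grid  # assumed importable

xgrid = np.logspace(-5, 0, 10)
pdf_dict = {
    14: np.exp(-xgrid),         # toy nu_mu values, one per x node
    -14: 0.5 * np.exp(-xgrid),  # toy anti-nu_mu values
}
write_lhapdf_grid(xgrid, pdf_dict, "member_0001.dat")
```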
Source code in NN_fit/write_all_pdfs_to_lhapdf.py
def write_lhapdf_grid(
    xgrid: Union[np.ndarray, List[float]],
    pdf_dict: Dict[int, Union[np.ndarray, List[float]]],
    path: str,
) -> None:
    """
    Writes a set of neutrino PDFs to a file in LHAPDF grid format (lhagrid1).

    This function formats and saves PDF data into a file readable by LHAPDF tools,
    using the specified x-grid and dictionary of parton distribution functions.

    Parameters:
    -----------
    xgrid : array-like
        Array of Bjorken-x values at which PDFs are evaluated.
    pdf_dict : dict
        Dictionary mapping particle IDs (PDG codes) to arrays of PDF values.
        Each value should be an array of length equal to the length of `xgrid`.
    path : str
        Path to the output `.dat` file where the grid will be written.

    Notes:
    ------
    - The output format follows LHAPDF's `lhagrid1` specification.
    - Each PDF line is duplicated, as required by the LHAPDF grid format.
    - The order of flavors is sorted by PDG code.
    """
    with open(path, "w") as f:
        f.write("PdfType: replica\n")
        f.write("Format: lhagrid1\n")
        f.write("---\n")

        f.write("  " + "  ".join(f"{val:.8e}" for val in xgrid) + "\n")

        f.write("0.1E+001 0.1E+007\n")

        pids = sorted(pdf_dict.keys())
        f.write(" ".join(str(pid) for pid in pids) + "\n")

        num_x = len(xgrid)
        for i in range(num_x):
            line = " ".join(f"{pdf_dict[pid][i]:.14e}" for pid in pids)
            f.write(line + "\n")
            f.write(line + "\n")
        f.write("---\n")

๐Ÿงช Data and Reading

MC_data_reps.py

generate_MC_replicas(REPLICAS, data, sig_sys, sig_stat, seed, fit_level)

Generates MC data replicas for the NN fit by adding level-1 and then level-2 Gaussian noise to the level-0 (truth) data.

Returns:

tuple : Tuple[List[torch.Tensor], List[torch.Tensor], List[torch.Tensor]]
    Level 0, 1, and 2 data replicas.
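The three levels follow the closure-test convention implied by the source: level 0 is the unfluctuated data, level 1 adds one noise draw (fixed by seed and shared across replicas, unless fit_level == 1, in which case it is redrawn per replica), and level 2 adds an independent draw on top for every replica. A sketch with toy uncertainties, assuming NN_fit.MC_data_reps is importable:

```python
import numpy as np
from NN_fit.MC_data_reps import generate_MC_replicas  # assumed importable

data = np.array([100.0, 80.0, 60.0])
sig_sys = 0.05 * data     # toy systematic uncertainties
sig_stat = np.sqrt(data)  # toy statistical uncertainties

level0, level1, level2 = generate_MC_replicas(
    REPLICAS=5, data=data, sig_sys=sig_sys, sig_stat=sig_stat, seed=42, fit_level=2
)
# level0[i] is identical for all i; level1[i] share one fluctuation;
# level2[i] each add their own second fluctuation on top of level1.
```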

Source code in NN_fit/MC_data_reps.py
def generate_MC_replicas(
    REPLICAS: int,
    data: np.ndarray,
    sig_sys: np.ndarray,
    sig_stat: np.ndarray,
    seed: int,
    fit_level: int,
) -> Tuple[List[torch.Tensor], List[torch.Tensor], List[torch.Tensor]]:
    """Generate level 2 data MC replicas for the NN fit by adding a level 1 and then a level 2 gaussian noise to the data

    Returns:
        tuple: level 0,1 and 2 data
    """

    level0, level1, level2 = [], [], []
    rng_level1 = np.random.default_rng(seed=seed)
    r_sys_1 = rng_level1.normal(0, 1, len(data)) * sig_sys
    r_stat_1 = rng_level1.normal(0, 1, len(data)) * sig_stat

    for _ in range(REPLICAS):
        if fit_level == 1:
            r_sys_1 = np.random.normal(0, 1, len(data)) * sig_sys
            r_stat_1 = np.random.normal(0, 1, len(data)) * sig_stat
            data_level1 = data + r_sys_1 + r_stat_1
        else:
            data_level1 = data + r_sys_1 + r_stat_1
        r_sys_2 = np.random.normal(0, 1, len(data)) * sig_sys
        r_stat_2 = np.random.normal(0, 1, len(data)) * sig_stat

        data_level2 = data_level1 + r_sys_2 + r_stat_2

        level0.append(torch.tensor(data, dtype=torch.float32))
        level1.append(torch.tensor(data_level1, dtype=torch.float32))
        level2.append(torch.tensor(data_level2, dtype=torch.float32))

    return level0, level1, level2

read_faserv_pdf.py

read_pdf(pdf, x_vals, particle, set)

Reads the parton distribution function (PDF) values for a given particle at specified momentum fractions and energy scale using LHAPDF.

Parameters

pdf : str
    Name of the PDF set to load.
x_vals : np.ndarray
    Array of momentum fraction values (x) at which to evaluate the PDF.
particle : int
    Particle ID (PDG code) for which the PDF is evaluated.
set : int
    Specific member or set number within the PDF.

Returns

Tuple[np.ndarray, np.ndarray]
    A tuple containing:
    - pdf_vals: np.ndarray of PDF values normalized by x_vals.
    - x_vals: The input array of momentum fractions.
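A usage sketch, assuming the LHAPDF Python bindings and a locally installed set; the set name below is a placeholder for whichever neutrino PDF set is available:

```python
import numpy as np
from NN_fit.read_faserv_pdf import read_pdf  # assumes NN_fit and lhapdf are installed

x_vals = np.logspace(-5, -0.1, 50)
# "FASERv_set" is a placeholder name; member 0 is typically the central member.
pdf_vals, x_vals = read_pdf("FASERv_set", x_vals, particle=14, set=0)
```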

Source code in NN_fit/read_faserv_pdf.py
def read_pdf(
    pdf: str, x_vals: np.ndarray, particle: int, set: int
) -> Tuple[np.ndarray, np.ndarray]:
    """
    Reads the parton distribution function (PDF) values for a given particle
    at specified momentum fractions and energy scale using LHAPDF.

    Parameters
    ----------
    pdf : str
        Name of the PDF set to load.
    x_vals : np.ndarray
        Array of momentum fraction values (x) at which to evaluate the PDF.
    particle : int
        Particle ID (PDG code) for which the PDF is evaluated.
    set : int
        Specific member or set number within the PDF.

    Returns
    -------
    Tuple[np.ndarray, np.ndarray]
        A tuple containing:
        - pdf_vals: np.ndarray of PDF values normalized by x_vals.
        - x_vals: The input array of momentum fractions.
    """
    pid = particle
    Q2 = 10  # fixed scale Q^2 at which the PDF is evaluated
    pdf = lhapdf.mkPDF(pdf, set)
    pdf_vals = [pdf.xfxQ2(pid, x, Q2) for x in x_vals]
    pdf_vals = np.array(pdf_vals)
    pdf_vals /= x_vals
    return pdf_vals, x_vals

read_fk_table.py

get_fk_table(filename, parent_dir)

Loads a FastKernel (FK) table and corresponding x_alpha nodes from text files, converts them to PyTorch tensors, and reshapes x_alpha to a column vector.

Parameters

filename : str
    Name of the FK table file to load (relative to parent_dir).
parent_dir : str
    Path to the directory containing the FK table file and relative location of the x_alpha file.

Returns

Tuple[torch.Tensor, torch.Tensor]
    - x_alpha: Tensor of shape (N, 1) containing x_alpha grid nodes.
    - fk_table: Tensor containing the FK table data.
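A sketch of how the loaded objects are used downstream (file and directory names are placeholders). The convolution mirrors the y_preds line in perform_fit: the PDF evaluated on the x_alpha nodes is folded with the FK table and scaled by the bin widths.

```python
import torch
from NN_fit.read_fk_table import get_fk_table  # assumed importable

x_alpha, fk_table = get_fk_table("fk_table_numu.dat", "path/to/fk_tables")

f = torch.ones(x_alpha.shape[0])           # toy PDF values on the grid nodes
binwidths = torch.ones(fk_table.shape[0])  # toy bin widths, one per observable bin
y_pred = torch.matmul(fk_table, f) * binwidths
```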

Source code in NN_fit/read_fk_table.py
def get_fk_table(filename: str, parent_dir: str) -> Tuple[torch.Tensor, torch.Tensor]:
    """
    Loads a FastKernel (FK) table and corresponding x_alpha nodes from text files,
    converts them to PyTorch tensors, and reshapes x_alpha to a column vector.

    Parameters
    ----------
    filename : str
        Name of the FK table file to load (relative to parent_dir).
    parent_dir : str
        Path to the directory containing the FK table file and relative location
        of the x_alpha file.

    Returns
    -------
    Tuple[torch.Tensor, torch.Tensor]
        - x_alpha: Tensor of shape (N, 1) containing x_alpha grid nodes.
        - fk_table: Tensor containing the FK table data.
    """
    file_path = os.path.join(parent_dir, filename)
    fk_table = np.loadtxt(file_path, delimiter=None)

    file_path = os.path.join(parent_dir, "../../../Data/gridnodes/x_alpha.dat")
    x_alpha = np.loadtxt(file_path, delimiter=None)

    x_alpha = torch.tensor(x_alpha, dtype=torch.float32).view(-1, 1)
    fk_table = torch.tensor(fk_table, dtype=torch.float32)

    return x_alpha, fk_table

help_read_files.py


๐Ÿ“Š Post-Fit Analysis

postfit_criteria.py

Postfit

Source code in NN_fit/postfit_criteria.py
class Postfit:
    def __init__(self) -> None:
        pass

    # def compute_arc_length(self, model):
    #     npoints = 199  # 200 intervals
    #     seg_min = [1e-6, 1e-4, 1e-2]
    #     seg_max = [1e-4, 1e-2, 1.0]
    #     res = 0
    #     for a, b in zip(seg_min, seg_max):
    #         eps = (b - a) / npoints
    #         ixgrid = np.linspace(a, b, npoints, endpoint=False)
    #         ixgrid = torch.tensor(ixgrid, dtype=torch.float32).view(-1, 1)

    #         pdf_vals_grid = model(ixgrid)
    #         pdf_vals_grid = pdf_vals_grid.detach().numpy().flatten()
    #         ixgrid = ixgrid.detach().numpy().flatten()

    #         fdiff = np.diff(pdf_vals_grid) / eps
    #         res += integrate.simpson(np.sqrt(1 + np.square(fdiff)), x=ixgrid[1:])
    #     return res

    def apply_postfit_criteria(
        self,
        chi_squares: List[float],
        N_event_pred: np.ndarray,
        neutrino_pdfs: np.ndarray,
        pred: np.ndarray,
    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
        """
        Applies post-fit criteria to filter out replicas with chi-squared values
        that deviate significantly from the mean (more than 4 standard deviations).

        Parameters
        ----------
        chi_squares : List[float]
            List of chi-squared values for each replica.
        N_event_pred : np.ndarray
            Array of predicted event yields for each replica.
        neutrino_pdfs : np.ndarray
            Array of predicted neutrino PDFs for each replica.
        pred : np.ndarray
            Array of original pseudo-data predictions for each replica.

        Returns
        -------
        Tuple[np.ndarray, np.ndarray, np.ndarray]
            Filtered arrays of neutrino_pdfs, N_event_pred, and pred with outlier replicas removed.

        Notes
        -----
        Replicas whose chi-squared differ from the mean by more than 4 standard deviations
        are considered outliers and removed.
        """
        sig_chi_squares = np.std(chi_squares)
        mean_chi_squares = np.mean(chi_squares)

        indices_to_remove = []
        for i in range(len(chi_squares)):
            diff_chi_squares = abs(chi_squares[i] - mean_chi_squares)
            if diff_chi_squares > 4 * sig_chi_squares:
                indices_to_remove.append(i)

        if len(indices_to_remove) > 0:
            indices_to_remove = np.array(indices_to_remove)
            # Delete along the replica axis so 2-D arrays keep their bin structure.
            N_event_pred = np.delete(N_event_pred, indices_to_remove, axis=0)
            neutrino_pdfs = np.delete(neutrino_pdfs, indices_to_remove, axis=0)
            pred = np.delete(pred, indices_to_remove, axis=0)

        return neutrino_pdfs, N_event_pred, pred

apply_postfit_criteria(chi_squares, N_event_pred, neutrino_pdfs, pred)

Applies post-fit criteria to filter out replicas with chi-squared values that deviate significantly from the mean (more than 4 standard deviations).

Parameters

chi_squares : List[float]
    List of chi-squared values for each replica.
N_event_pred : np.ndarray
    Array of predicted event yields for each replica.
neutrino_pdfs : np.ndarray
    Array of predicted neutrino PDFs for each replica.
pred : np.ndarray
    Array of original pseudo-data predictions for each replica.

Returns

Tuple[np.ndarray, np.ndarray, np.ndarray]
    Filtered arrays of neutrino_pdfs, N_event_pred, and pred with outlier replicas removed.

Notes

Replicas whose chi-squared differ from the mean by more than 4 standard deviations are considered outliers and removed.
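The 4-sigma criterion is easy to verify standalone; in this toy example one replica out of 100 is an obvious outlier and is the only one flagged:

```python
import numpy as np

rng = np.random.default_rng(0)
chi_squares = np.append(rng.normal(1.0, 0.05, 99), 25.0)  # one clear outlier

mean, sig = chi_squares.mean(), chi_squares.std()
keep = np.abs(chi_squares - mean) <= 4 * sig  # the post-fit 4-sigma criterion
print(keep.sum())  # 99 replicas survive; the outlier is removed
```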

Source code in NN_fit/postfit_criteria.py
def apply_postfit_criteria(
    self,
    chi_squares: List[float],
    N_event_pred: np.ndarray,
    neutrino_pdfs: np.ndarray,
    pred: np.ndarray,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
    """
    Applies post-fit criteria to filter out replicas with chi-squared values
    that deviate significantly from the mean (more than 4 standard deviations).

    Parameters
    ----------
    chi_squares : List[float]
        List of chi-squared values for each replica.
    N_event_pred : np.ndarray
        Array of predicted event yields for each replica.
    neutrino_pdfs : np.ndarray
        Array of predicted neutrino PDFs for each replica.
    pred : np.ndarray
        Array of original pseudo-data predictions for each replica.

    Returns
    -------
    Tuple[np.ndarray, np.ndarray, np.ndarray]
        Filtered arrays of neutrino_pdfs, N_event_pred, and pred with outlier replicas removed.

    Notes
    -----
    Replicas whose chi-squared values differ from the mean by more than
    4 standard deviations are considered outliers and removed.
    """
    sig_chi_squares = np.std(chi_squares)
    mean_chi_squares = np.mean(chi_squares)

    indices_to_remove = []
    for i in range(len(chi_squares)):
        diff_chi_squares = abs(chi_squares[i] - mean_chi_squares)
        if diff_chi_squares > 4 * sig_chi_squares:
            indices_to_remove.append(i)

    if len(indices_to_remove) > 0:
        indices_to_remove = np.array(indices_to_remove)
        # axis=0 removes whole replicas; without it np.delete would
        # flatten the arrays
        N_event_pred = np.delete(N_event_pred, indices_to_remove, axis=0)
        neutrino_pdfs = np.delete(neutrino_pdfs, indices_to_remove, axis=0)
        pred = np.delete(pred, indices_to_remove, axis=0)

    return neutrino_pdfs, N_event_pred, pred

postfit_measures.py

Measures

Source code in NN_fit/postfit_measures.py
class Measures:
    def __init__(
        self,
        cov_matrix: torch.Tensor,
        pdf: np.ndarray,
        N_event_pred: np.ndarray,
    ) -> None:
        """
        Initialize the Measures class with a covariance matrix, reference PDF,
        and predicted events.

        Parameters
        ----------
        cov_matrix : torch.Tensor
            Covariance matrix used in chi-squared calculations.
        pdf : np.ndarray
            Reference PDF array.
        N_event_pred : np.ndarray
            Predicted event yields array, shape (num_replicas, num_bins).
        """
        self.cov_matrix = cov_matrix
        self.pdf = pdf
        self.N_event_pred = N_event_pred

    def compute_delta_chi(
        self,
        level0: Union[np.ndarray, torch.Tensor],
        N_event_pred: np.ndarray,
        data_level1: torch.Tensor,
        x_vals: Union[np.ndarray, torch.Tensor],
    ) -> torch.Tensor:
        """
        Compute the relative change in chi-squared between a baseline theory prediction
        and the fit prediction.

        Parameters
        ----------
        level0 : Union[np.ndarray, torch.Tensor]
            Baseline theory prediction.
        N_event_pred : np.ndarray
            Predicted events for all replicas.
        data_level1 : torch.Tensor
            Observed data level 1 tensor, shape (num_bins, 1).
        x_vals : Union[np.ndarray, torch.Tensor]
            x-values (unused in this function).

        Returns
        -------
        torch.Tensor
            Relative change in chi-squared (delta chi).
        """
        level0 = torch.tensor(level0, dtype=torch.float32)
        data_level1 = data_level1.view(-1, 1)
        diff = level0 - data_level1

        diffcov = torch.matmul(self.cov_matrix, diff)
        chi_theory = torch.dot(diff.view(-1), diffcov.view(-1))

        mean_N_events = np.mean(N_event_pred, axis=0)
        mean_N_events = torch.tensor(mean_N_events, dtype=torch.float32)
        # compare against the full data vector (flattened), not only its first bin
        diff = mean_N_events - data_level1.view(-1)

        diffcov = torch.matmul(self.cov_matrix, diff)
        chi_fit = torch.dot(diff.view(-1), diffcov.view(-1))

        delta_chi = (chi_fit - chi_theory) / chi_theory

        return delta_chi

    def compute_phi(
        self,
        data: Union[np.ndarray, torch.Tensor],
        chi_squares: List[float],
    ) -> float:
        """
        Compute the phi metric as the difference between average chi-square over replicas
        and chi-square of the mean prediction.

        Parameters
        ----------
        data : Union[np.ndarray, torch.Tensor]
            Observed data points.
        chi_squares : List[float]
            List of chi-square losses per replica.

        Returns
        -------
        float
            The phi metric.
        """
        num_reps = self.N_event_pred.shape[0]
        data = torch.tensor(data, dtype=torch.float32)
        chis = []

        for i in range(num_reps):
            N_event_rep = torch.tensor(self.N_event_pred[i], dtype=torch.float32)

            diff = N_event_rep - data
            diffcov = torch.matmul(self.cov_matrix, diff)
            loss = torch.dot(diff.view(-1), diffcov.view(-1))
            chis.append(loss)
        mean_N_event_fits = np.mean(self.N_event_pred, axis=0)

        # data = torch.tensor(data, dtype=torch.float32)
        mean_N_event_fits = torch.tensor(mean_N_event_fits, dtype=torch.float32)

        diff = mean_N_event_fits - data
        diffcov = torch.matmul(self.cov_matrix, diff)
        mean = torch.dot(diff.view(-1), diffcov.view(-1))

        phi_chi = np.mean(chis) - mean
        return phi_chi

    def compute_accuracy(
        self,
        x_alphas: np.ndarray,
        neutrino_pdf: np.ndarray,
        pdf: str,
        n: float,
        pdf_set: str,
        pid: int,
    ) -> float:
        """
        Compute the accuracy metric as the fraction of bins where the predicted neutrino PDF
        agrees with the reference PDF within n standard deviations.

        Parameters
        ----------
        x_alphas : np.ndarray
            Input x-values (not used directly in this function).
        neutrino_pdf : np.ndarray
            Array of predicted neutrino PDFs (shape: replicas x bins).
        pdf : str
            Path or identifier for the reference PDF file.
        n : float
            Number of standard deviations for the acceptance criterion.
        pdf_set : str
            Identifier of the PDF set.
        pid : int
            Particle ID for PDF retrieval.

        Returns
        -------
        float
            Fraction of bins where predicted PDF agrees within n std deviations.
        """
        mean_neutrino_pdf = np.mean(neutrino_pdf, axis=0)
        std_pdfs = np.std(neutrino_pdf, axis=0)
        distances = []

        arr = np.logspace(-5, 0, 1000)

        log_vals = np.logspace(np.log10(0.02), np.log10(0.8), 100)
        lin_vals = np.linspace(0.1, 0.8, 10)
        log_indices = np.searchsorted(arr, log_vals)
        lin_indices = np.searchsorted(arr, lin_vals)
        indices = np.concatenate((log_indices, lin_indices))

        faser_pdf, x_faser = read_pdf(pdf, arr, pid, pdf_set)
        faser_pdf = faser_pdf.flatten() * 1.16186e-09
        faser_pdf = faser_pdf[indices]
        mean_neutrino_pdf = mean_neutrino_pdf[indices]

        std_pdfs = std_pdfs[indices]

        for i in range(len(indices)):
            if abs(mean_neutrino_pdf[i] - faser_pdf[i]) < n * std_pdfs[i]:
                distances.append(1)

        distance = len(distances) / len(indices)

        return distance

    def compute_bias_to_variance(
        self,
        level0: Union[np.ndarray, torch.Tensor],
        level2: np.ndarray,
        N_event_pred: np.ndarray,
        REPLICAS: int,
    ) -> torch.Tensor:
        """
        Compute the ratio of bias to variance in the PDF fits.

        Parameters
        ----------
        level0 : Union[np.ndarray, torch.Tensor]
            Baseline theory prediction.
        level2 : np.ndarray
            Array of predictions at level 2, shape (REPLICAS, num_bins).
        N_event_pred : np.ndarray
            Predicted events for all replicas.
        REPLICAS : int
            Number of replicas.

        Returns
        -------
        torch.Tensor
            Ratio of bias to variance.
        """
        mean_N_events = np.mean(N_event_pred, axis=0)
        mean_N_events = torch.tensor(mean_N_events, dtype=torch.float32).view(-1)

        level0 = torch.tensor(level0, dtype=torch.float32).view(-1)

        diff = mean_N_events - level0
        diffcov = torch.matmul(self.cov_matrix, diff)
        bias = torch.dot(diff.view(-1), diffcov.view(-1))

        chi_square_level2 = 0

        for j in range(REPLICAS):
            diff = mean_N_events - level2[j]
            diff = diff.float()
            diffcov = torch.matmul(self.cov_matrix, diff)
            chi_square = torch.dot(diff.view(-1), diffcov.view(-1))

            chi_square_level2 += chi_square

        variance = chi_square_level2 / REPLICAS

        ratio = bias / variance
        return ratio

__init__(cov_matrix, pdf, N_event_pred)

Initialize the Measures class with a covariance matrix, reference PDF, and predicted events.

Parameters

cov_matrix : torch.Tensor
    Covariance matrix used in chi-squared calculations.
pdf : np.ndarray
    Reference PDF array.
N_event_pred : np.ndarray
    Predicted event yields array, shape (num_replicas, num_bins).
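A hypothetical construction, with illustrative shapes (note that, as in write_data below, the matrix handed around is the inverted covariance, so diffᵀ·cov_matrix·diff is a chi-squared):

import numpy as np
import torch

from NN_fit.postfit_measures import Measures  # assumed import path

cov_matrix = torch.eye(8)                      # inverse covariance, (num_bins, num_bins)
pdf = np.ones(8)                               # reference PDF on the fit grid
N_event_pred = np.random.rand(100, 8) * 50.0   # (num_replicas, num_bins)

measures = Measures(cov_matrix, pdf, N_event_pred)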

compute_accuracy(x_alphas, neutrino_pdf, pdf, n, pdf_set, pid)

Compute the accuracy metric as the fraction of bins where the predicted neutrino PDF agrees with the reference PDF within n standard deviations.

Parameters

x_alphas : np.ndarray
    Input x-values (not used directly in this function).
neutrino_pdf : np.ndarray
    Array of predicted neutrino PDFs (shape: replicas x bins).
pdf : str
    Path or identifier for the reference PDF file.
n : float
    Number of standard deviations for the acceptance criterion.
pdf_set : str
    Identifier of the PDF set.
pid : int
    Particle ID for PDF retrieval.

Returns

float
    Fraction of bins where the predicted PDF agrees within n standard deviations.
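Stripped of the grid bookkeeping and flux normalization, the metric is simply the fraction of sampled points where |mean − reference| < n·std; a minimal sketch with made-up arrays:

import numpy as np

neutrino_pdf = np.random.rand(50, 110)  # (replicas, grid points), made-up values
reference = np.full(110, 0.5)           # stand-in for the reference PDF values
n = 1.0

mean_pdf = neutrino_pdf.mean(axis=0)
std_pdf = neutrino_pdf.std(axis=0)
accuracy = np.mean(np.abs(mean_pdf - reference) < n * std_pdf)  # fraction in [0, 1]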

Source code in NN_fit/postfit_measures.py
def compute_accuracy(
    self,
    x_alphas: np.ndarray,
    neutrino_pdf: np.ndarray,
    pdf: str,
    n: float,
    pdf_set: str,
    pid: int,
) -> float:
    """
    Compute the accuracy metric as the fraction of bins where the predicted neutrino PDF
    agrees with the reference PDF within n standard deviations.

    Parameters
    ----------
    x_alphas : np.ndarray
        Input x-values (not used directly in this function).
    neutrino_pdf : np.ndarray
        Array of predicted neutrino PDFs (shape: replicas x bins).
    pdf : str
        Path or identifier for the reference PDF file.
    n : float
        Number of standard deviations for the acceptance criterion.
    pdf_set : str
        Identifier of the PDF set.
    pid : int
        Particle ID for PDF retrieval.

    Returns
    -------
    float
        Fraction of bins where predicted PDF agrees within n std deviations.
    """
    mean_neutrino_pdf = np.mean(neutrino_pdf, axis=0)
    std_pdfs = np.std(neutrino_pdf, axis=0)
    distances = []

    arr = np.logspace(-5, 0, 1000)

    log_vals = np.logspace(np.log10(0.02), np.log10(0.8), 100)
    lin_vals = np.linspace(0.1, 0.8, 10)
    log_indices = np.searchsorted(arr, log_vals)
    lin_indices = np.searchsorted(arr, lin_vals)
    indices = np.concatenate((log_indices, lin_indices))

    faser_pdf, x_faser = read_pdf(pdf, arr, pid, pdf_set)
    faser_pdf = faser_pdf.flatten() * 1.16186e-09
    faser_pdf = faser_pdf[indices]
    mean_neutrino_pdf = mean_neutrino_pdf[indices]

    std_pdfs = std_pdfs[indices]

    for i in range(len(indices)):
        if abs(mean_neutrino_pdf[i] - faser_pdf[i]) < n * std_pdfs[i]:
            distances.append(1)

    distance = len(distances) / len(indices)

    return distance

compute_bias_to_variance(level0, level2, N_event_pred, REPLICAS)

Compute the ratio of bias to variance in the PDF fits.

Parameters

level0 : Union[np.ndarray, torch.Tensor]
    Baseline theory prediction.
level2 : np.ndarray
    Array of predictions at level 2, shape (REPLICAS, num_bins).
N_event_pred : np.ndarray
    Predicted events for all replicas.
REPLICAS : int
    Number of replicas.

Returns

torch.Tensor
    Ratio of bias to variance.
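In symbols: bias = (⟨N⟩ − T₀)ᵀ C (⟨N⟩ − T₀), with ⟨N⟩ the replica-averaged prediction and T₀ the level-0 theory, while variance = (1/R) Σⱼ (⟨N⟩ − L₂⁽ʲ⁾)ᵀ C (⟨N⟩ − L₂⁽ʲ⁾) over the R level-2 replicas L₂⁽ʲ⁾, where C is the stored (inverse) covariance; the function returns bias/variance. A ratio close to one is commonly read as the replica spread faithfully covering the deviation from the underlying truth.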

Source code in NN_fit/postfit_measures.py
def compute_bias_to_variance(
    self,
    level0: Union[np.ndarray, torch.Tensor],
    level2: np.ndarray,
    N_event_pred: np.ndarray,
    REPLICAS: int,
) -> torch.Tensor:
    """
    Compute the ratio of bias to variance in the PDF fits.

    Parameters
    ----------
    level0 : Union[np.ndarray, torch.Tensor]
        Baseline theory prediction.
    level2 : np.ndarray
        Array of predictions at level 2, shape (REPLICAS, num_bins).
    N_event_pred : np.ndarray
        Predicted events for all replicas.
    REPLICAS : int
        Number of replicas.

    Returns
    -------
    torch.Tensor
        Ratio of bias to variance.
    """
    mean_N_events = np.mean(N_event_pred, axis=0)
    mean_N_events = torch.tensor(mean_N_events, dtype=torch.float32).view(-1)

    level0 = torch.tensor(level0, dtype=torch.float32).view(-1)

    diff = mean_N_events - level0
    diffcov = torch.matmul(self.cov_matrix, diff)
    bias = torch.dot(diff.view(-1), diffcov.view(-1))

    chi_square_level2 = 0

    for j in range(REPLICAS):
        diff = mean_N_events - level2[j]
        diff = diff.float()
        diffcov = torch.matmul(self.cov_matrix, diff)
        chi_square = torch.dot(diff.view(-1), diffcov.view(-1))

        chi_square_level2 += chi_square

    variance = chi_square_level2 / REPLICAS

    ratio = bias / variance
    return ratio

compute_delta_chi(level0, N_event_pred, data_level1, x_vals)

Compute the relative change in chi-squared between a baseline theory prediction and the fit prediction.

Parameters

level0 : Union[np.ndarray, torch.Tensor]
    Baseline theory prediction.
N_event_pred : np.ndarray
    Predicted events for all replicas.
data_level1 : torch.Tensor
    Observed data level 1 tensor, shape (num_bins, 1).
x_vals : Union[np.ndarray, torch.Tensor]
    x-values (unused in this function).

Returns

torch.Tensor
    Relative change in chi-squared (delta chi).
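In symbols, with C the (inverse) covariance used throughout this class: χ²_theory = (T₀ − D₁)ᵀ C (T₀ − D₁) for the baseline theory T₀ against the level-1 data D₁, χ²_fit is the same expression with T₀ replaced by the replica-averaged prediction ⟨N⟩, and the function returns Δχ² = (χ²_fit − χ²_theory) / χ²_theory. Negative values indicate the fit describes the level-1 data better than the baseline theory.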

Source code in NN_fit/postfit_measures.py
def compute_delta_chi(
    self,
    level0: Union[np.ndarray, torch.Tensor],
    N_event_pred: np.ndarray,
    data_level1: torch.Tensor,
    x_vals: Union[np.ndarray, torch.Tensor],
) -> torch.Tensor:
    """
    Compute the relative change in chi-squared between a baseline theory prediction
    and the fit prediction.

    Parameters
    ----------
    level0 : Union[np.ndarray, torch.Tensor]
        Baseline theory prediction.
    N_event_pred : np.ndarray
        Predicted events for all replicas.
    data_level1 : torch.Tensor
        Observed data level 1 tensor, shape (num_bins, 1).
    x_vals : Union[np.ndarray, torch.Tensor]
        x-values (unused in this function).

    Returns
    -------
    torch.Tensor
        Relative change in chi-squared (delta chi).
    """
    level0 = torch.tensor(level0, dtype=torch.float32)
    data_level1 = data_level1.view(-1, 1)
    diff = level0 - data_level1

    diffcov = torch.matmul(self.cov_matrix, diff)
    chi_theory = torch.dot(diff.view(-1), diffcov.view(-1))

    mean_N_events = np.mean(N_event_pred, axis=0)
    mean_N_events = torch.tensor(mean_N_events, dtype=torch.float32)
    # compare against the full data vector (flattened), not only its first bin
    diff = mean_N_events - data_level1.view(-1)

    diffcov = torch.matmul(self.cov_matrix, diff)
    chi_fit = torch.dot(diff.view(-1), diffcov.view(-1))

    delta_chi = (chi_fit - chi_theory) / chi_theory

    return delta_chi

compute_phi(data, chi_squares)

Compute the phi metric as the difference between average chi-square over replicas and chi-square of the mean prediction.

Parameters

data : Union[np.ndarray, torch.Tensor]
    Observed data points.
chi_squares : List[float]
    List of chi-square losses per replica.

Returns

float
    The phi metric.
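In symbols: the function returns ⟨χ²[N⁽ᵏ⁾]⟩ − χ²[⟨N⟩], the replica-averaged chi-squared minus the chi-squared of the replica-averaged prediction, both evaluated against the same data with the stored (inverse) covariance. Note that the chi_squares argument is not used; the per-replica chi-squareds are recomputed from self.N_event_pred. (In the closure-test literature, φ is usually quoted as the square root of this difference.)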

Source code in NN_fit/postfit_measures.py
def compute_phi(
    self,
    data: Union[np.ndarray, torch.Tensor],
    chi_squares: List[float],
) -> float:
    """
    Compute the phi metric as the difference between average chi-square over replicas
    and chi-square of the mean prediction.

    Parameters
    ----------
    data : Union[np.ndarray, torch.Tensor]
        Observed data points.
    chi_squares : List[float]
        List of chi-square losses per replica.

    Returns
    -------
    float
        The phi metric.
    """
    num_reps = self.N_event_pred.shape[0]
    data = torch.tensor(data, dtype=torch.float32)
    chis = []

    for i in range(num_reps):
        N_event_rep = torch.tensor(self.N_event_pred[i], dtype=torch.float32)

        diff = N_event_rep - data
        diffcov = torch.matmul(self.cov_matrix, diff)
        loss = torch.dot(diff.view(-1), diffcov.view(-1))
        chis.append(loss)
    mean_N_event_fits = np.mean(self.N_event_pred, axis=0)

    # data = torch.tensor(data, dtype=torch.float32)
    mean_N_event_fits = torch.tensor(mean_N_event_fits, dtype=torch.float32)

    diff = mean_N_event_fits - data
    diffcov = torch.matmul(self.cov_matrix, diff)
    mean = torch.dot(diff.view(-1), diffcov.view(-1))

    phi_chi = np.mean(chis) - mean
    return phi_chi

pull.py

compute_pull(mean_pdf1, mean_pdf2, error_pdf1, error_pdf2)

Computes the pull between two neutrino PDFs.

Parameters

mean_pdf1 : np.ndarray
    Mean neutrino PDF 1.
mean_pdf2 : np.ndarray
    Mean neutrino PDF 2.
error_pdf1 : np.ndarray
    Standard deviation of neutrino PDF 1.
error_pdf2 : np.ndarray
    Standard deviation of neutrino PDF 2.

Returns

np.ndarray
    Pull per grid point.
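In symbols, with the two uncertainties combined in quadrature: pull(x) = |f₁(x) − f₂(x)| / √(σ₁(x)² + σ₂(x)²). The plus sign under the square root matters: a minus sign would produce NaNs wherever σ₂ > σ₁.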

Source code in NN_fit/pull.py
def compute_pull(mean_pdf1, mean_pdf2, error_pdf1, error_pdf2):
    """Computes the pull between two neutrino PDFs.

    Args:
        mean_pdf1 (np.ndarray): mean neutrino PDF 1
        mean_pdf2 (np.ndarray): mean neutrino PDF 2
        error_pdf1 (np.ndarray): std of neutrino PDF 1
        error_pdf2 (np.ndarray): std of neutrino PDF 2

    Returns:
        np.ndarray: pull per grid point
    """
    # uncertainties combine in quadrature; a minus sign under the square root
    # would produce NaNs wherever error_pdf2 > error_pdf1
    pull = np.abs(mean_pdf1 - mean_pdf2) / np.sqrt(error_pdf1**2 + error_pdf2**2)
    return pull

๐Ÿงฌ Miscellaneous Modules

form_loss_fct.py

complete_loss_fct_comb(pred, data, cov_matrix, f, int_point_nu, x_int, lag_mult_pos, lag_mult_int)

Extended chi-squared loss for combined neutrino + antineutrino prediction, with constraints.

Parameters

pred : torch.Tensor
    Model predictions (shape: N).
data : torch.Tensor
    Observed pseudo-data (shape: N).
cov_matrix : torch.Tensor
    Covariance matrix for the data (shape: N x N).
f : torch.Tensor
    Raw NN output without preprocessing (shape: N).
int_point_nu : torch.Tensor
    Integral constraint vector (shape: N).
x_int : torch.Tensor
    x-values for integral constraint (shape: N).
lag_mult_pos : float
    Lagrange multiplier for positivity constraint.
lag_mult_int : float
    Lagrange multiplier for integral normalization.

Returns

torch.Tensor
    Total loss.
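In symbols: L = (1/N) (P − D)ᵀ C (P − D) + λ_pos · |Σᵢ min(fᵢ, 0)| + λ_int · |Σᵢ f_int,ᵢ · x_int,ᵢ|, where P are the predictions, D the pseudo-data, and C the (inverse) covariance; the positivity term penalizes negative raw network outputs and the integral term sums the constraint points weighted by x_int.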

Source code in NN_fit/form_loss_fct.py
def complete_loss_fct_comb(
    pred: torch.Tensor,
    data: torch.Tensor,
    cov_matrix: torch.Tensor,
    f: torch.Tensor,
    int_point_nu: torch.Tensor,
    x_int: torch.Tensor,
    lag_mult_pos: float,
    lag_mult_int: float,
) -> torch.Tensor:
    """
    Extended chi-squared loss for combined neutrino + antineutrino prediction, with constraints.

    Parameters
    ----------
    pred : torch.Tensor
        Model predictions (shape: N).
    data : torch.Tensor
        Observed pseudo-data (shape: N).
    cov_matrix : torch.Tensor
        Covariance matrix for the data (shape: N x N).
    f : torch.Tensor
        Raw NN output without preprocessing (shape: N).
    int_point_nu : torch.Tensor
        Integral constraint vector (shape: N).
    x_int : torch.Tensor
        x-values for integral constraint (shape: N).
    lag_mult_pos : float
        Lagrange multiplier for positivity constraint.
    lag_mult_int : float
        Lagrange multiplier for integral normalization.

    Returns
    -------
    torch.Tensor
        Total loss.
    """
    diff = pred - data
    diffcov = torch.matmul(cov_matrix, diff)

    # zero out the positive outputs; the remaining (negative) part is
    # penalized below to enforce positivity
    f = torch.where(f > 0, 0, f)

    loss = (
        (1 / pred.size(0)) * torch.dot(diff.view(-1), diffcov.view(-1))
        + abs(torch.sum(f)) * lag_mult_pos
        + abs(torch.sum(int_point_nu * x_int)) * lag_mult_int
    )

    return loss

complete_loss_fct_nu_nub(pred, data, cov_matrix, f, int_point_nu, int_point_nub, x_int, lag_mult_pos, lag_mult_int)

Extended chi-squared loss for separate neutrino and antineutrino predictions, with constraints.

Parameters

pred : torch.Tensor
    Model predictions for observed data (shape: N).
data : torch.Tensor
    Observed pseudo-data (shape: N).
cov_matrix : torch.Tensor
    Covariance matrix for the data (shape: N x N).
f : torch.Tensor
    Raw (non-preprocessed) neural network outputs (shape: N x 2), with:
    - f[:, 0]: neutrino component (ν)
    - f[:, 1]: antineutrino component (ν̄)
int_point_nu : torch.Tensor
    Integral constraint values for ν (shape: N).
int_point_nub : torch.Tensor
    Integral constraint values for ν̄ (shape: N).
x_int : torch.Tensor
    x-values for integral evaluation (shape: N).
lag_mult_pos : float
    Lagrange multiplier for enforcing positivity.
lag_mult_int : float
    Lagrange multiplier for enforcing normalization via integral.

Returns

torch.Tensor
    Total loss including chi-squared term, positivity penalty, and integral constraint.

Source code in NN_fit/form_loss_fct.py
def complete_loss_fct_nu_nub(
    pred: torch.Tensor,
    data: torch.Tensor,
    cov_matrix: torch.Tensor,
    f: torch.Tensor,
    int_point_nu: torch.Tensor,
    int_point_nub: torch.Tensor,
    x_int: torch.Tensor,
    lag_mult_pos: float,
    lag_mult_int: float,
) -> torch.Tensor:
    """
    Extended chi-squared loss for separate neutrino and antineutrino predictions, with constraints.

    Parameters
    ----------
    pred : torch.Tensor
        Model predictions for observed data (shape: N).
    data : torch.Tensor
        Observed pseudo-data (shape: N).
    cov_matrix : torch.Tensor
        Covariance matrix for the data (shape: N x N).
    f : torch.Tensor
        Raw (non-preprocessed) neural network outputs (shape: N x 2), with:
            - f[:, 0]: neutrino component (ฮฝ)
            - f[:, 1]: antineutrino component (ฮฝฬ„)
    int_point_nu : torch.Tensor
        Integral constraint values for ฮฝ (shape: N).
    int_point_nub : torch.Tensor
        Integral constraint values for ฮฝฬ„ (shape: N).
    x_int : torch.Tensor
        x-values for integral evaluation (shape: N).
    lag_mult_pos : float
        Lagrange multiplier for enforcing positivity.
    lag_mult_int : float
        Lagrange multiplier for enforcing normalization via integral.

    Returns
    -------
    torch.Tensor
        Total loss including chi-squared term, positivity penalty, and integral constraint.
    """
    diff = pred - data
    diffcov = torch.matmul(cov_matrix, diff)

    fnu = f[:, 0]
    fnub = f[:, 1]
    # keep only the negative parts so the positivity penalty below drives the
    # raw outputs positive (mirrors complete_loss_fct_comb)
    fnu = torch.where(fnu > 0, torch.tensor(0.0), fnu)
    fnub = torch.where(fnub > 0, torch.tensor(0.0), fnub)

    loss = (
        (1 / pred.size(0)) * torch.dot(diff.view(-1), diffcov.view(-1))
        + abs(torch.sum(fnu)) * lag_mult_pos
        + abs(torch.sum(fnub)) * lag_mult_pos
        + abs(torch.sum(int_point_nu * x_int)) * lag_mult_int
        + abs(torch.sum(int_point_nub * x_int)) * lag_mult_int
    )

    return loss

raw_loss_fct(pred, data, cov_matrix)

Standard chi-squared loss without any constraints.

Parameters

pred : torch.Tensor
    Model predictions (shape: N).
data : torch.Tensor
    Observed pseudo-data (shape: N).
cov_matrix : torch.Tensor
    Covariance matrix (shape: N x N).

Returns

torch.Tensor
    Chi-squared loss.

Source code in NN_fit/form_loss_fct.py
def raw_loss_fct(
    pred: torch.Tensor, data: torch.Tensor, cov_matrix: torch.Tensor
) -> torch.Tensor:
    """
    Standard chi-squared loss without any constraints.

    Parameters
    ----------
    pred : torch.Tensor
        Model predictions (shape: N).
    data : torch.Tensor
        Observed pseudo-data (shape: N).
    cov_matrix : torch.Tensor
        Covariance matrix (shape: N x N).

    Returns
    -------
    torch.Tensor
        Chi-squared loss.
    """
    diff = pred - data
    diffcov = torch.matmul(cov_matrix, diff)

    loss = (1 / pred.size(0)) * torch.dot(diff.view(-1), diffcov.view(-1))

    return loss

generate_data.py

aggregate_entries_with_indices(fk_tables, data, binwidths, low_bin, high_bin, threshold)

Aggregates FK table rows and data bins until a minimum threshold of events is reached.

This function rebins cross section data, corresponding FK table rows, and bin widths such that each new bin has at least a specified number of events (threshold). This is often necessary for achieving meaningful statistical analysis in low-statistics bins.

Parameters

fk_tables : torch.Tensor
    The FastKernel table with shape (n_bins, n_x).
data : np.ndarray
    Event data per bin (e.g., cross section × bin width).
binwidths : np.ndarray
    Width of each original bin.
low_bin : np.ndarray
    Lower edges of the original bins.
high_bin : np.ndarray
    Upper edges of the original bins.
threshold : float
    Minimum number of events required to form a new rebinned bin.

Returns

rebin_data : list of float
    Aggregated event counts after rebinning.
rebin_fk_table_mu : torch.Tensor
    Rebinned FK table rows (shape: new_n_bins × n_x).
rebin_binwidhts : np.ndarray
    Bin widths corresponding to the rebinned bins.
rebin_low_bin : np.ndarray
    Lower edges of the rebinned bins.
rebin_high_bin : np.ndarray
    Upper edges of the rebinned bins.

Notes

  • Remaining data after the last full threshold bin is added to the final bin.
  • Bin widths are recomputed using weighted averages to ensure consistency.
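A toy version of the rebinning rule (made-up numbers, threshold = 10): counts are accumulated left to right, a bin is closed as soon as the running sum reaches the threshold, and any leftover tail is folded into the last closed bin:

import numpy as np

data = np.array([5.0, 3.0, 10.0, 2.0, 1.0])
threshold = 10.0

rebinned, current, start = [], 0.0, 0
for i, value in enumerate(data):
    current += value
    if current >= threshold:
        rebinned.append(data[start : i + 1].sum())
        start, current = i + 1, 0.0
if current > 0:
    rebinned[-1] += data[start:].sum()  # leftover tail -> last bin

print(rebinned)  # [21.0]: bins 1-3 close at 18 events, the trailing 3 events are folded in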
Source code in NN_fit/generate_data.py
def aggregate_entries_with_indices(
    fk_tables: torch.Tensor,
    data: np.ndarray,
    binwidths: np.ndarray,
    low_bin: np.ndarray,
    high_bin: np.ndarray,
    threshold: float,
) -> Tuple[
    List[float],  # rebin_data
    torch.Tensor,  # rebin_fk_table_mu
    np.ndarray,  # rebin_binwidhts
    np.ndarray,  # rebin_low_bin
    np.ndarray,  # rebin_high_bin
]:
    """
    Aggregates FK table rows and data bins until a minimum threshold of events is reached.

    This function rebins cross section data, corresponding FK table rows, and bin widths
    such that each new bin has at least a specified number of events (threshold). This
    is often necessary for achieving meaningful statistical analysis in low-statistics bins.

    Parameters
    ----------
    fk_tables : torch.Tensor
        The FastKernel table with shape (n_bins, n_x).
    data : np.ndarray
        Event data per bin (e.g., cross section × bin width).
    binwidths : np.ndarray
        Width of each original bin.
    low_bin : np.ndarray
        Lower edges of the original bins.
    high_bin : np.ndarray
        Upper edges of the original bins.
    threshold : float
        Minimum number of events required to form a new rebinned bin.

    Returns
    -------
    rebin_data : list of float
        Aggregated event counts after rebinning.
    rebin_fk_table_mu : torch.Tensor
        Rebinned FK table rows (shape: new_n_bins × n_x).
    rebin_binwidhts : np.ndarray
        Bin widths corresponding to rebinned bins.
    rebin_low_bin : np.ndarray
        Lower edges of the rebinned bins.
    rebin_high_bin : np.ndarray
        Upper edges of the rebinned bins.

    Notes
    -----
    - Remaining data after the last full threshold bin is added to the final bin.
    - Bin widths are recomputed using weighted averages to ensure consistency.
    """
    (
        rebin_data,
        rebin_fk_table_mu,
        rebin_binwidhts,
        rebin_low_bin,
        rebin_high_bin,
    ) = (
        [],
        [],
        [],
        [],
        [],
    )
    current_sum = 0
    start_idx = 0
    raw_data = data / binwidths

    for i, value in enumerate(data):
        current_sum += value

        if current_sum >= threshold:
            rebin_data.append(sum(data[start_idx : i + 1]))
            # effective bin width: total events divided by total rate, i.e. an
            # event-weighted average of the merged bin widths
            sum_binwidth = np.sum(data[start_idx : i + 1]) / np.sum(
                raw_data[start_idx : i + 1]
            )

            rebin_binwidhts.append(sum_binwidth)

            rebin_low_bin.append(low_bin[i])
            rebin_high_bin.append(high_bin[i])
            summed_column_mu = torch.sum(fk_tables[start_idx : i + 1, :], axis=0)
            summed_column_mu = summed_column_mu.unsqueeze(0)
            rebin_fk_table_mu.append(summed_column_mu)

            previous_idx = start_idx
            start_idx = i + 1
            current_sum = 0

    # fold any trailing bins that never reached the threshold into the last
    # rebinned bin (assumes at least one bin reached the threshold above)
    if current_sum > 0:
        rebin_data[-1] += sum(data[start_idx:])

        sum_binwidth = np.sum(data[previous_idx:]) / np.sum(raw_data[previous_idx:])

        rebin_binwidhts[-1] = sum_binwidth

        rebin_fk_table_mu[-1] += torch.sum(fk_tables[start_idx:, :], axis=0).unsqueeze(
            0
        )
        rebin_low_bin[-1] = low_bin[previous_idx]
        rebin_high_bin[-1] = high_bin[-1]

    rebin_fk_table_mu = torch.cat(rebin_fk_table_mu, dim=0)

    data = np.array(data)
    rebin_binwidhts = np.array(rebin_binwidhts)
    rebin_low_bin = np.array(rebin_low_bin)

    return (
        rebin_data,
        rebin_fk_table_mu,
        rebin_binwidhts,
        rebin_low_bin,
        rebin_high_bin,
    )

compute_pseudo_data(filename_fk_mub_n, filename_fk_mub_p, filename_fk_mu_n, filename_fk_mu_p, filename_binsize, pid, pdf_name, pdf_set)

Computes pseudo-data for neutrino and anti-neutrino scattering using FK tables and PDFs.

This function processes FK tables (FastKernel weight tables) and convolves them with parton distribution functions (PDFs) to produce synthetic ("pseudo") data for both neutrinos (mu) and anti-neutrinos (mub). It also calculates associated statistical errors.

Parameters

filename_fk_mub_n : str
    Filename for the anti-neutrino FK table (neutron target).
filename_fk_mub_p : str
    Filename for the anti-neutrino FK table (proton target).
filename_fk_mu_n : str
    Filename for the neutrino FK table (neutron target).
filename_fk_mu_p : str
    Filename for the neutrino FK table (proton target).
filename_binsize : str
    Filename containing binning information (low, high bin edges, and widths).
pid : int
    PDG ID of the relevant parton (e.g., 12, 14, 16 for neutrino flavors).
pdf_name : str
    Name of the PDF set (e.g., "CT18", "NNPDF4.0").
pdf_set : int
    Index of the replica or member within the PDF set.

Returns

data_mu : np.ndarray
    Pseudo-data for neutrino cross sections, binned.
data_mub : np.ndarray
    Pseudo-data for anti-neutrino cross sections, binned.
error_mu : np.ndarray
    Statistical uncertainty (sqrt(N)) for neutrino data.
error_mub : np.ndarray
    Statistical uncertainty (sqrt(N)) for anti-neutrino data.
fk_tables_mu : torch.Tensor
    Final combined FK table for neutrinos.
fk_tables_mub : torch.Tensor
    Final combined FK table for anti-neutrinos.
low_bin : np.ndarray
    Lower bin edges of the energy bins.
high_bin : np.ndarray
    Upper bin edges of the energy bins.
binwidths_mu : np.ndarray
    Widths of bins used for neutrino integration.
binwidths_mub : np.ndarray
    Widths of bins used for anti-neutrino integration (same as binwidths_mu).

Notes

  • This function uses hard-coded weights: 59.56% neutron and 40.44% proton contributions.
  • Any negative or zero values in the pseudo-data are replaced with small positive values (0.1) to avoid numerical issues.
  • Requires FK tables and bin sizes to be precomputed and available as text files.
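At its core the pseudo-data is the FK convolution Nᵢ = Σ_α FKᵢα f(x_α) · Δᵢ with Poisson errors √Nᵢ; a minimal sketch with random stand-ins for the FK table and PDF:

import numpy as np
import torch

fk_table = torch.rand(6, 20)   # (n_bins, n_x), stand-in FK table
pdf_vals = torch.rand(20, 1)   # PDF evaluated on the x_alpha grid
binwidths = torch.rand(6)      # bin widths

events = (torch.matmul(fk_table, pdf_vals).view(-1) * binwidths).numpy()
errors = np.sqrt(events)       # statistical (Poisson) uncertainty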
Source code in NN_fit/generate_data.py
def compute_pseudo_data(
    filename_fk_mub_n: str,
    filename_fk_mub_p: str,
    filename_fk_mu_n: str,
    filename_fk_mu_p: str,
    filename_binsize: str,
    pid: int,
    pdf_name: str,
    pdf_set: int,
) -> Tuple[
    np.ndarray,  # data_mu
    np.ndarray,  # data_mub
    np.ndarray,  # error_mu
    np.ndarray,  # error_mub
    torch.Tensor,  # fk_tables_mu
    torch.Tensor,  # fk_tables_mub
    np.ndarray,  # low_bin
    np.ndarray,  # high_bin
    np.ndarray,  # binwidths_mu
    np.ndarray,  # binwidths_mub
]:
    """
    Computes pseudo-data for neutrino and anti-neutrino scattering using FK tables and PDFs.

    This function processes FK tables (FastKernel weight tables) and convolves them with
    parton distribution functions (PDFs) to produce synthetic ("pseudo") data for both
    neutrinos (mu) and anti-neutrinos (mub). It also calculates associated statistical errors.

    Parameters
    ----------
    filename_fk_mub_n : str
        Filename for the anti-neutrino FK table (neutron target).
    filename_fk_mub_p : str
        Filename for the anti-neutrino FK table (proton target).
    filename_fk_mu_n : str
        Filename for the neutrino FK table (neutron target).
    filename_fk_mu_p : str
        Filename for the neutrino FK table (proton target).
    filename_binsize : str
        Filename containing binning information (low, high bin edges, and widths).
    pid : int
        PDG ID of the relevant parton (e.g., 12, 14, 16 for neutrino flavors).
    pdf_name : str
        Name of the PDF set (e.g., "CT18", "NNPDF4.0").
    pdf_set : int
        Index of the replica or member within the PDF set.

    Returns
    -------
    data_mu : np.ndarray
        Pseudo-data for neutrino cross sections, binned.
    data_mub : np.ndarray
        Pseudo-data for anti-neutrino cross sections, binned.
    error_mu : np.ndarray
        Statistical uncertainty (sqrt(N)) for neutrino data.
    error_mub : np.ndarray
        Statistical uncertainty (sqrt(N)) for anti-neutrino data.
    fk_tables_mu : torch.Tensor
        Final combined FK table for neutrinos.
    fk_tables_mub : torch.Tensor
        Final combined FK table for anti-neutrinos.
    low_bin : np.ndarray
        Lower bin edges of the energy bins.
    high_bin : np.ndarray
        Upper bin edges of the energy bins.
    binwidths_mu : np.ndarray
        Widths of bins used for neutrino integration.
    binwidths_mub : np.ndarray
        Widths of bins used for anti-neutrino integration (same as `binwidths_mu`).

    Notes
    -----
    - This function uses hard-coded weights: 59.56% neutron and 40.44% proton contributions.
    - Any negative or zero values in the pseudo-data are replaced with small positive values
      (0.1) to avoid numerical issues.
    - Requires FK tables and bin sizes to be precomputed and available as text files.
    """
    x_alphas, fk_tables_mu_n = get_fk_table(
        filename=filename_fk_mu_n, parent_dir=parent_dir
    )

    x_alphas, fk_tables_mu_p = get_fk_table(
        filename=filename_fk_mu_p, parent_dir=parent_dir
    )

    fk_tables_mu = fk_tables_mu_n * 0.5956284 + fk_tables_mu_p * 0.4043716

    x_alphas, fk_tables_mub_n = get_fk_table(
        filename=filename_fk_mub_n, parent_dir=parent_dir
    )

    x_alphas, fk_tables_mub_p = get_fk_table(
        filename=filename_fk_mub_p, parent_dir=parent_dir
    )
    fk_tables_mub = fk_tables_mub_n * 0.5956284 + fk_tables_mub_p * 0.4043716

    file_path = os.path.join(parent_dir, filename_binsize)
    low_bin, high_bin, binwidths_mu = np.loadtxt(f"{file_path}", unpack=True)
    binwidths_mub = binwidths_mu

    faser_pdf, x = read_pdf(pdf_name, x_alphas.detach().numpy().flatten(), pid, pdf_set)
    faser_pdf = torch.tensor(faser_pdf, dtype=torch.float32)
    data_mu = (
        (torch.matmul(fk_tables_mu, faser_pdf) * binwidths_mu)
        .detach()
        .numpy()
        .flatten()
    )

    faser_pdf, x = read_pdf(
        pdf_name, x_alphas.detach().numpy().flatten(), -pid, pdf_set
    )
    faser_pdf = torch.tensor(faser_pdf, dtype=torch.float32)
    data_mub = (
        (torch.matmul(fk_tables_mub, faser_pdf) * binwidths_mub)
        .detach()
        .numpy()
        .flatten()
    )
    data_mu = np.array(data_mu)
    data_mub = np.array(data_mub)
    # clip negative yields to zero, then replace zeros with a small positive
    # value (0.1) to avoid numerical issues downstream
    data_mu = np.where(data_mu < 0, 0, data_mu)
    data_mub = np.where(data_mub < 0, 0, data_mub)
    data_mu = np.where(data_mu == 0, 0.1, data_mu)
    data_mub = np.where(data_mub == 0, 0.1, data_mub)

    error_mu = np.sqrt(data_mu)
    error_mub = np.sqrt(data_mub)

    return (
        data_mu,
        data_mub,
        error_mu,
        error_mub,
        fk_tables_mu,
        fk_tables_mub,
        low_bin,
        high_bin,
        binwidths_mu,
        binwidths_mub,
    )

write_data(filename_fk_mub_n, filename_fk_mub_p, filename_fk_mu_n, filename_fk_mu_p, filename_binsize, pid, pdf_name, pdf_set, filename_to_store_events, filename_to_store_stat_error, filename_to_store_sys_error, filename_to_store_cov_matrix, min_num_events, observable, combine_nu_nub_data, multiplication_factor_sys_error)

Computes pseudo-data from FK tables and PDFs, optionally rebins them, and writes the results to disk.

This function is the main pipeline to produce and store pseudo-experimental data, its statistical and systematic uncertainties, covariance matrix, binning information, and FastKernel tables, all ready for use in PDF fits or phenomenology studies.

Parameters

filename_fk_mub_n : str
    FK table for anti-neutrino interactions on neutrons.
filename_fk_mub_p : str
    FK table for anti-neutrino interactions on protons.
filename_fk_mu_n : str
    FK table for neutrino interactions on neutrons.
filename_fk_mu_p : str
    FK table for neutrino interactions on protons.
filename_binsize : str
    Filename for bin edges and widths (low, high, width).
pid : int
    PDG ID of the target parton species.
pdf_name : str
    Name of the LHAPDF set to use.
pdf_set : int
    Index of the PDF replica or member.
filename_to_store_events : str
    Base filename for storing rebinned event data.
filename_to_store_stat_error : str
    Base filename for storing statistical uncertainties.
filename_to_store_sys_error : str
    Base filename for storing systematic uncertainties.
filename_to_store_cov_matrix : str
    Base filename for storing the inverse of the covariance matrix.
min_num_events : int
    Minimum number of events per bin in the rebinned dataset.
observable : str
    Observable label (e.g., "energy", "pt") used in output filenames.
combine_nu_nub_data : bool
    If True, neutrino and anti-neutrino data are summed into one dataset.
multiplication_factor_sys_error : float
    Factor to multiply event counts for estimating systematic uncertainties.

Returns

None
    Writes output files directly to disk.

Output Files

../../../Data/data/:
    - Re-binned event counts (combined, mu, mub)

../../../Data/uncertainties/:
    - Statistical and systematic uncertainties
    - Covariance matrix (inverted, diagonal only)

../../../Data/binning/:
    - Re-binned bin edges and widths (mu, mub, or combined)

../../../Data/fastkernel/:
    - Re-binned FK tables

Notes

  • Assumes FK and bin files are already precomputed and exist in the expected format.
  • The covariance matrix is stored in inverted form (1/ฯƒยฒ on the diagonal).
  • Output filenames are automatically labeled with PID and threshold settings.
  • Handles both the case where ν and ν̄ data are stored separately and the case where they are combined.
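A hypothetical invocation (every filename below is made up; they must point at existing FK-table and binning text files in the expected format):

write_data(
    filename_fk_mub_n="FK_Enu_nub_n.txt",
    filename_fk_mub_p="FK_Enu_nub_p.txt",
    filename_fk_mu_n="FK_Enu_nu_n.txt",
    filename_fk_mu_p="FK_Enu_nu_p.txt",
    filename_binsize="FK_Enu_binsize.txt",
    pid=14,                                # muon neutrino
    pdf_name="NNPDF4.0",                   # any installed LHAPDF set
    pdf_set=0,
    filename_to_store_events="events",
    filename_to_store_stat_error="stat_error",
    filename_to_store_sys_error="sys_error",
    filename_to_store_cov_matrix="cov_matrix",
    min_num_events=25,
    observable="Enu",
    combine_nu_nub_data=True,
    multiplication_factor_sys_error=0.05,
)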
Source code in NN_fit/generate_data.py
def write_data(
    filename_fk_mub_n: str,
    filename_fk_mub_p: str,
    filename_fk_mu_n: str,
    filename_fk_mu_p: str,
    filename_binsize: str,
    pid: int,
    pdf_name: str,
    pdf_set: int,
    filename_to_store_events: str,
    filename_to_store_stat_error: str,
    filename_to_store_sys_error: str,
    filename_to_store_cov_matrix: str,
    min_num_events: int,
    observable: str,
    combine_nu_nub_data: bool,
    multiplication_factor_sys_error: float,
) -> None:
    """
    Computes pseudo-data from FK tables and PDFs, optionally rebins them, and writes the results to disk.

    This function is the main pipeline to produce and store pseudo-experimental data, its
    statistical and systematic uncertainties, covariance matrix, binning information, and
    FastKernel tables, all ready for use in PDF fits or phenomenology studies.

    Parameters
    ----------
    filename_fk_mub_n : str
        FK table for anti-neutrino interactions on neutrons.
    filename_fk_mub_p : str
        FK table for anti-neutrino interactions on protons.
    filename_fk_mu_n : str
        FK table for neutrino interactions on neutrons.
    filename_fk_mu_p : str
        FK table for neutrino interactions on protons.
    filename_binsize : str
        Filename for bin edges and widths (low, high, width).
    pid : int
        PDG ID of the target parton species.
    pdf_name : str
        Name of the LHAPDF set to use.
    pdf_set : int
        Index of the PDF replica or member.
    filename_to_store_events : str
        Base filename for storing rebinned event data.
    filename_to_store_stat_error : str
        Base filename for storing statistical uncertainties.
    filename_to_store_sys_error : str
        Base filename for storing systematic uncertainties.
    filename_to_store_cov_matrix : str
        Base filename for storing the inverse of the covariance matrix.
    min_num_events : int
        Minimum number of events per bin in the rebinned dataset.
    observable : str
        Observable label (e.g., "energy", "pt") used in output filenames.
    combine_nu_nub_data : bool
        If True, neutrino and anti-neutrino data are summed into one dataset.
    multiplication_factor_sys_error : float
        Factor to multiply event counts for estimating systematic uncertainties.

    Returns
    -------
    None
        Writes output files directly to disk.

    Output Files
    ------------
    ../../../Data/data/:
        - Re-binned event counts (combined, mu, mub)

    ../../../Data/uncertainties/:
        - Statistical and systematic uncertainties
        - Covariance matrix (inverted, diagonal only)

    ../../../Data/binning/:
        - Re-binned bin edges and widths (mu, mub, or combined)

    ../../../Data/fastkernel/:
        - Re-binned FK tables

    Notes
    -----
    - Assumes FK and bin files are already precomputed and exist in the expected format.
    - The covariance matrix is stored in inverted form (1/ฯƒยฒ on the diagonal).
    - Output filenames are automatically labeled with PID and threshold settings.
    - Handles both the case where ν and ν̄ data are stored separately and the case where they are combined.
    """
    (
        data_mu,
        data_mub,
        error_mu,
        error_mub,
        fk_tables_mu,
        fk_tables_mub,
        low_bin,
        high_bin,
        binwidths_mu,
        binwidths_mub,
    ) = compute_pseudo_data(
        filename_fk_mub_n,
        filename_fk_mub_p,
        filename_fk_mu_n,
        filename_fk_mu_p,
        filename_binsize,
        pid,
        pdf_name,
        pdf_set,
    )

    if combine_nu_nub_data:
        data = data_mu + data_mub
        binwidths = binwidths_mu
        fk_tables = fk_tables_mu + fk_tables_mub
        (
            data,
            fk_tables,
            binwidths,
            low_bin,
            high_bin,
        ) = aggregate_entries_with_indices(
            fk_tables, data, binwidths, low_bin, high_bin, min_num_events
        )
        stack_binning = np.column_stack((low_bin, high_bin, binwidths))

        error_stat = np.sqrt(data)
        error_sys = np.array(data) * multiplication_factor_sys_error

        np.savetxt(
            f"../../../Data/uncertainties/{filename_to_store_stat_error}_comb_min_{min_num_events}_events_{pid}",
            error_stat,
        )

        np.savetxt(
            f"../../../Data/data/{filename_to_store_events}_comb_min_{min_num_events}_events_{pid}",
            data,
        )
        cov_matrix = np.diag(error_sys**2 + error_stat**2)
        cov_matrix = np.linalg.inv(cov_matrix)
        np.savetxt(
            f"../../../Data/uncertainties/{filename_to_store_cov_matrix}_comb_min_{min_num_events}_events_{pid}",
            cov_matrix,
        )
        np.savetxt(
            f"../../../Data/uncertainties/{filename_to_store_sys_error}_comb_min_{min_num_events}_events_{pid}",
            error_sys,
        )
        np.savetxt(
            f"../../../Data/binning/FK_{observable}_binsize_nu_min_{min_num_events}_events_{pid}",
            stack_binning,
        )
        np.savetxt(
            f"../../../Data/binning/FK_{observable}_binsize_nub_min_{min_num_events}_events_{pid}",
            stack_binning,
        )
        np.savetxt(
            f"../../../Data/fastkernel/FK_{observable}_comb_min_{min_num_events}_events_{pid}",
            fk_tables,
        )

    else:
        (
            data_mu,
            fk_tables_mu,
            binwidths_mu,
            low_bin_mu,
            high_bin_mu,
        ) = aggregate_entries_with_indices(
            fk_tables_mu, data_mu, binwidths_mu, low_bin, high_bin, min_num_events
        )

        (
            data_mub,
            fk_tables_mub,
            binwidths_mub,
            low_bin_mub,
            high_bin_mub,
        ) = aggregate_entries_with_indices(
            fk_tables_mub, data_mub, binwidths_mub, low_bin, high_bin, min_num_events
        )
        error_stat_nu = np.sqrt(data_mu)
        error_stat_nub = np.sqrt(data_mub)
        error_sys_nu = np.array(data_mu) * multiplication_factor_sys_error
        error_sys_nub = np.array(data_mub) * multiplication_factor_sys_error
        stacked_data = np.hstack((data_mu, data_mub))
        error_tot_nu = error_stat_nu**2 + error_sys_nu**2
        error_tot_nub = error_stat_nub**2 + error_sys_nub**2
        stacked_error = np.hstack((error_tot_nu, error_tot_nub))
        error_stat_tot = np.hstack((error_stat_nu, error_stat_nub))
        error_sys_tot = np.hstack((error_sys_nu, error_sys_nub))
        np.savetxt(
            f"../../../Data/data/{filename_to_store_events}_nu_min_{min_num_events}_events_{pid}",
            data_mu,
        )
        np.savetxt(
            f"../../../Data/data/{filename_to_store_events}_nub_min_{min_num_events}_events_{-pid}",
            data_mub,
        )
        np.savetxt(
            f"../../../Data/data/{filename_to_store_events}_comb_min_{min_num_events}_events_{pid}",
            stacked_data,
        )
        np.savetxt(
            f"../../../Data/uncertainties/{filename_to_store_stat_error}_comb_min_{min_num_events}_events_{pid}",
            error_stat_tot,
        )
        np.savetxt(
            f"../../../Data/uncertainties/{filename_to_store_sys_error}_comb_min_{min_num_events}_events_{pid}",
            error_sys_tot,
        )

        cov_matrix = np.diag(stacked_error)
        cov_matrix = np.linalg.inv(cov_matrix)
        np.savetxt(
            f"../../../Data/uncertainties/{filename_to_store_cov_matrix}_comb_min_{min_num_events}_events_{pid}",
            cov_matrix,
        )

        np.savetxt(
            f"../../../Data/fastkernel/FK_{observable}_mu_min_{min_num_events}_events_{pid}",
            fk_tables_mu,
        )
        np.savetxt(
            f"../../../Data/fastkernel/FK_{observable}_mub_min_{min_num_events}_events_{-pid}",
            fk_tables_mub,
        )

        stack_binning_mu = np.column_stack((low_bin_mu, high_bin_mu, binwidths_mu))
        np.savetxt(
            f"../../../Data/binning/FK_{observable}_binsize_mu_min_{min_num_events}_events_{pid}",
            stack_binning_mu,
        )

        stack_binning_mub = np.column_stack((low_bin_mub, high_bin_mub, binwidths_mub))
        np.savetxt(
            f"../../../Data/binning/FK_{observable}_binsize_mub_min_{min_num_events}_events_{-pid}",
            stack_binning_mub,
        )

    print("The data has been written to the Data directory")

data.yaml, fit_settings.yaml, aall_fits/, runcards/, etc.

These are configuration and data files and are not documented here directly; they can be described in a separate "Configuration Guide" if needed.


๐Ÿ“ All modules are automatically documented from Python docstrings using mkdocstrings. Type hints, function signatures, and class hierarchies are included where available.