# taps
## GradExpander

Bases: `Function`

A custom autograd function that scales the gradient during the backward pass.

This function allows you to define a custom forward and backward pass for a PyTorch operation. The forward pass simply returns the input tensor, while the backward pass scales the gradient by a specified factor `alpha`.
Methods:

- `forward(ctx, x, alpha: float = 1)`
- `backward(ctx, grad_x)`
Source code in `CTRAIN/bound/taps.py`, lines 294–335.
### backward(ctx, grad_x)

*staticmethod*

Performs the backward pass for the custom autograd function.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `ctx` | | The context object that can be used to stash information for backward computation. | *required* |
| `grad_x` | | The gradient of the loss with respect to the output of the forward pass. | *required* |
Returns:

| Type | Description |
|---|---|
| `Tuple[Tensor, None]` | A tuple containing the gradient of the loss with respect to the input of the forward pass, and `None` (as there is no gradient with respect to the second input, `alpha`). |
Source code in `CTRAIN/bound/taps.py`, lines 323–335.
### forward(ctx, x, alpha=1)

*staticmethod*

Forward pass for the custom operation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `ctx` | | The context object that can be used to stash information for backward computation. | *required* |
| `x` | | The input tensor. | *required* |
| `alpha` | `float` | A scaling factor. Defaults to 1. | `1` |
Returns:

| Type | Description |
|---|---|
| `Tensor` | The input tensor, unchanged. |
Source code in `CTRAIN/bound/taps.py`, lines 306–321.
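The behaviour described above can be sketched as a small stand-alone autograd function. `GradScale` below is a hypothetical re-implementation for illustration only, not CTRAIN's actual `GradExpander`:

```python
import torch

class GradScale(torch.autograd.Function):
    """Illustrative stand-in for GradExpander: identity forward,
    gradient scaled by alpha in the backward pass."""

    @staticmethod
    def forward(ctx, x, alpha=1.0):
        ctx.alpha = alpha
        # view_as returns a tensor sharing storage with x, so we can
        # implement an identity forward without autograd complaints.
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_x):
        # Scale the incoming gradient; alpha itself gets no gradient.
        return ctx.alpha * grad_x, None

x = torch.ones(3, requires_grad=True)
GradScale.apply(x, 0.5).sum().backward()
print(x.grad)  # tensor([0.5000, 0.5000, 0.5000]) instead of all ones
```

Because the forward pass is the identity, the only effect is on the backward pass: gradients flowing through the node are multiplied by `alpha`.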
## RectifiedLinearGradientLink

Bases: `Function`

RectifiedLinearGradientLink is a custom autograd function that establishes a rectified linear gradient link between the IBP bounds of the feature extractor (`lb`, `ub`) and the PGD bounds (`x_adv`) of the classifier. Note that this is not a valid gradient with respect to the forward function.
Attributes:

| Name | Type | Description |
|---|---|---|
| `c` | `float` | A constant used to determine the slope. |
| `tol` | `float` | A tolerance value to avoid division by zero. |
Methods:

- `forward(ctx, lb, ub, x, c: float, tol: float)`
- `backward(ctx, grad_x)`
Source code in `CTRAIN/bound/taps.py`, lines 233–292.
### backward(ctx, grad_x)

*staticmethod*

Computes the gradient of the loss with respect to the input bounds (`lb`, `ub`).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `ctx` | | Context object containing saved tensors and constants. | *required* |
| `grad_x` | | Gradient of the loss with respect to the output of the forward function. | *required* |
Returns:

| Type | Description |
|---|---|
| `Tuple[Tensor, Tensor, None, None, None]` | Gradients with respect to `lb` and `ub`, and `None` for the other inputs. |
Source code in `CTRAIN/bound/taps.py`, lines 268–292.
### forward(ctx, lb, ub, x, c, tol)

*staticmethod*

Saves the input tensors and constants for backward computation and returns `x`.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `ctx` | | Context object to save information for backward computation. | *required* |
| `lb` | | Lower bound tensor. | *required* |
| `ub` | | Upper bound tensor. | *required* |
| `x` | | Input tensor. | *required* |
| `c` | `float` | A constant used to determine the slope. | *required* |
| `tol` | `float` | A tolerance value to avoid division by zero. | *required* |
Returns:

| Type | Description |
|---|---|
| `Tensor` | The input tensor `x`. |
Source code in `CTRAIN/bound/taps.py`, lines 248–267.
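The interface above (pass `x` through forward, route gradients to `lb` and `ub` in backward) can be sketched with a stand-alone autograd function. The slope rule below is a plausible illustration only; CTRAIN's actual rectified slope may differ:

```python
import torch

class LinearGradLink(torch.autograd.Function):
    """Sketch of a bound-to-point gradient link with the documented
    interface (lb, ub, x, c, tol). Illustrative slope, not CTRAIN's."""

    @staticmethod
    def forward(ctx, lb, ub, x, c, tol):
        ctx.save_for_backward(lb, ub, x)
        ctx.c, ctx.tol = c, tol
        return x.view_as(x)  # forward just passes x through

    @staticmethod
    def backward(ctx, grad_x):
        lb, ub, x = ctx.saved_tensors
        width = (ub - lb).clamp(min=ctx.tol)  # tol avoids division by zero
        # Split the gradient between the bounds according to where x
        # sits inside [lb, ub], rectified into [0, 1] via c.
        w_ub = ((x - lb) / (ctx.c * width)).clamp(0.0, 1.0)
        w_lb = 1.0 - w_ub
        # Gradients for lb and ub; None for x, c, tol, matching the
        # documented Tuple[Tensor, Tensor, None, None, None] return.
        return w_lb * grad_x, w_ub * grad_x, None, None, None

lb = torch.zeros(2, requires_grad=True)
ub = torch.ones(2, requires_grad=True)
x = torch.tensor([0.25, 0.75])
LinearGradLink.apply(lb, ub, x, 1.0, 1e-5).sum().backward()
```

The key property this preserves is that gradients reach the bound tensors (and through them the feature extractor) even though the forward output is the adversarial point itself.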
## _get_bound_estimation_from_pts(block, pts, dim_to_estimate, C=None)

Estimate bounds for specified dimensions from given adversarial examples.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `block` | `BoundedModule` | The neural network block for which to estimate bounds. | *required* |
| `pts` | `Tensor` | Tensor of adversarial examples of shape `(batch_size, num_pivotal, *shape_in[1:])`. | *required* |
| `dim_to_estimate` | `Tensor` | Tensor indicating the dimensions to estimate, of shape `(batch_size, num_dims, dim_size)`. | *required* |
| `C` | `Tensor` | Matrix specifying the correct class for bound margin calculation. Must be provided. | `None` |
Returns:

| Name | Type | Description |
|---|---|---|
| `estimated_bounds` | `Tensor` | Estimated bounds tensor of shape `(batch_size, num_pivotal)` if `C` is `None`, otherwise of shape `(batch_size, n_class)`. |
Source code in `CTRAIN/bound/taps.py`, lines 196–231.
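The underlying idea can be illustrated with a generic helper (hypothetical names, not CTRAIN's code): evaluate the block at each candidate point and keep the elementwise worst case over the points:

```python
import torch

def bounds_from_points(block, pts):
    """Hypothetical sketch: estimate output bounds of `block` by
    evaluating it at candidate points of shape
    (batch, num_pts, *in_shape) and taking the elementwise maximum."""
    b, k = pts.shape[:2]
    out = block(pts.flatten(0, 1))   # evaluate all points: (b * k, out_dim)
    out = out.view(b, k, -1)
    # Each candidate point yields an achievable output, so the max over
    # points is a sound (under-)estimate of the true worst case.
    return out.max(dim=1).values     # (b, out_dim)

lin = torch.nn.Linear(2, 3)
pts = torch.randn(4, 5, 2)           # 4 examples, 5 candidate points each
est = bounds_from_points(lin, pts)
```

In CTRAIN the points come from PGD and the outputs are margins selected via the `C` matrix, but the max-over-points structure is the same.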
## _get_pivotal_points(block, input_lb, input_ub, pgd_steps, pgd_restarts, pgd_step_size, pgd_decay_factor, pgd_decay_checkpoints, n_classes, C=None)

Estimate pivotal points for the classifier block using Projected Gradient Descent (PGD).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `block` | `BoundedModule` | The neural network block for which to estimate pivotal points. | *required* |
| `input_lb` | `Tensor` | Lower bound of the input to the network block. | *required* |
| `input_ub` | `Tensor` | Upper bound of the input to the network block. | *required* |
| `pgd_steps` | `int` | Number of PGD steps to perform. | *required* |
| `pgd_restarts` | `int` | Number of PGD restarts to perform. | *required* |
| `pgd_step_size` | `float` | Step size for PGD. | *required* |
| `pgd_decay_factor` | `float` | Decay factor for the PGD step size. | *required* |
| `pgd_decay_checkpoints` | `list of int` | Checkpoints at which to decay the PGD step size. | *required* |
| `n_classes` | `int` | Number of classes in the classification task. | *required* |
| `C` | `Tensor` | Matrix specifying the correct class for bound margin calculation. Must be provided. | `None` |
Returns:

| Type | Description |
|---|---|
| `list of torch.Tensor` | List containing the concatenated pivotal-points tensor. |
Source code in `CTRAIN/bound/taps.py`, lines 94–126.
## _get_pivotal_points_one_batch(block, lb, ub, pgd_steps, pgd_restarts, pgd_step_size, pgd_decay_factor, pgd_decay_checkpoints, C, n_classes, device='cuda')

Estimate pivotal points for a single batch using Projected Gradient Descent (PGD).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `block` | `BoundedModule` | The neural network block for which to estimate pivotal points. | *required* |
| `lb` | `Tensor` | Lower bound of the input. | *required* |
| `ub` | `Tensor` | Upper bound of the input. | *required* |
| `pgd_steps` | `int` | Number of PGD steps. | *required* |
| `pgd_restarts` | `int` | Number of PGD restarts. | *required* |
| `pgd_step_size` | `float` | Step size for PGD. | *required* |
| `pgd_decay_factor` | `float` | Decay factor for the step size. | *required* |
| `pgd_decay_checkpoints` | `list` | Checkpoints for step-size decay. | *required* |
| `C` | `Tensor` | Matrix specifying the correct class for bound margin calculation. Must be provided. | *required* |
| `n_classes` | `int` | Number of classes. | *required* |
| `device` | `str` | Device to perform computations on. Default is `'cuda'`. | `'cuda'` |
Returns:

| Type | Description |
|---|---|
| `Tensor` | Adversarial examples per class for the whole batch. |
Source code in `CTRAIN/bound/taps.py`, lines 128–193.
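The PGD loop both functions describe (signed gradient ascent inside the input box, step-size decay at checkpoints, random restarts) can be sketched generically. This is a hypothetical helper mirroring the parameters above, not CTRAIN's implementation:

```python
import torch

def pgd_maximize(f, lb, ub, steps=20, restarts=1, step_size=0.2,
                 decay_factor=0.2, decay_checkpoints=(5, 7)):
    """Maximize a per-example objective f within the box [lb, ub]
    (here for 2-D inputs of shape (batch, dim), for simplicity)."""
    best_x = best_val = None
    for _ in range(restarts):
        x = lb + (ub - lb) * torch.rand_like(lb)  # random start in the box
        lr = step_size * (ub - lb)                # step relative to box width
        for i in range(steps):
            if i in decay_checkpoints:
                lr = lr * decay_factor            # decay step at checkpoints
            x = x.detach().requires_grad_(True)
            (g,) = torch.autograd.grad(f(x).sum(), x)
            x = torch.clamp(x + lr * g.sign(), lb, ub)  # ascend and project
        x = x.detach()
        val = f(x)                                # per-example objective
        if best_val is None:
            best_x, best_val = x, val
        else:
            better = val > best_val               # keep best restart per example
            best_val = torch.where(better, val, best_val)
            best_x = torch.where(better.view(-1, 1), x, best_x)
    return best_x

# Maximizing a sum inside [0, 1]^2 drives each coordinate to its upper bound.
lb, ub = torch.zeros(3, 2), torch.ones(3, 2)
best = pgd_maximize(lambda x: x.sum(dim=1), lb, ub)
```

In CTRAIN the objective would be a per-class margin of the classifier block, producing one pivotal point per class rather than a single maximizer.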
## bound_taps(original_model, hardened_model, bounded_blocks, data, target, n_classes, ptb, device='cuda', pgd_steps=20, pgd_restarts=1, pgd_step_size=0.2, pgd_decay_factor=0.2, pgd_decay_checkpoints=(5, 7), gradient_link_thresh=0.5, gradient_link_tolerance=1e-05, propagation='IBP', sabr_args=None)

Compute the bounds of the model's output using the TAPS method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `original_model` | `Module` | The original neural network model. | *required* |
| `hardened_model` | `BoundedModule` | The auto_LiRPA model. | *required* |
| `bounded_blocks` | `list` | List of bounded blocks of the model. | *required* |
| `data` | `Tensor` | The input data tensor. | *required* |
| `target` | `Tensor` | The target labels tensor. | *required* |
| `n_classes` | `int` | The number of classes for classification. | *required* |
| `ptb` | `PerturbationLpNorm` | The perturbation object defining the perturbation set. | *required* |
| `device` | `str` | The device to run the computation on. Default is `'cuda'`. | `'cuda'` |
| `pgd_steps` | `int` | The number of steps for the PGD attack. Default is 20. | `20` |
| `pgd_restarts` | `int` | The number of restarts for the PGD attack. Default is 1. | `1` |
| `pgd_step_size` | `float` | The step size for the PGD attack. Default is 0.2. | `0.2` |
| `pgd_decay_factor` | `float` | The decay factor for the PGD attack. Default is 0.2. | `0.2` |
| `pgd_decay_checkpoints` | `tuple` | The decay checkpoints for the PGD attack. Default is (5, 7). | `(5, 7)` |
| `gradient_link_thresh` | `float` | The threshold for gradient linking. Default is 0.5. | `0.5` |
| `gradient_link_tolerance` | `float` | The tolerance for gradient linking. Default is 1e-05. | `1e-05` |
| `propagation` | `str` | The propagation method to use (`'IBP'` or `'SABR'`). Default is `'IBP'`. | `'IBP'` |
| `sabr_args` | `dict` | The arguments for the SABR method. Default is `None`. | `None` |
Returns:

| Name | Type | Description |
|---|---|---|
| `taps_bound` | `Tuple[Tensor, Tensor]` | The TAPS bounds of the model's output. |
Source code in `CTRAIN/bound/taps.py`, lines 11–90.