# taps

## GradExpander

Bases: `Function`
A custom autograd function that scales the gradient during the backward pass.

This function allows you to define a custom forward and backward pass for a PyTorch operation. The forward pass simply returns the input tensor, while the backward pass scales the gradient by a specified factor `alpha`.
Methods:

Name | Description
---|---
`forward(ctx, x, alpha: float = 1)` | Forward pass for the custom operation.
`backward(ctx, grad_x)` | Performs the backward pass, scaling the gradient by `alpha`.
Source code in `CTRAIN/bound/taps.py`, lines 294-335.
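The collapsed source listing is not reproduced here. Based solely on the behavior described above, a minimal sketch of such a gradient-scaling function could look as follows (the actual CTRAIN implementation may differ in details):

```python
import torch
from torch.autograd import Function

class GradExpander(Function):
    """Identity in the forward pass; scales the gradient by alpha in the backward pass."""

    @staticmethod
    def forward(ctx, x, alpha: float = 1):
        # Stash the scaling factor for use in the backward pass.
        ctx.alpha = alpha
        return x

    @staticmethod
    def backward(ctx, grad_x):
        # Scale the incoming gradient; alpha is a plain float, so it gets no gradient.
        return ctx.alpha * grad_x, None
```

As with any `torch.autograd.Function`, it is invoked via `GradExpander.apply(x, alpha)` rather than being called directly.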
### backward(ctx, grad_x)

*staticmethod*
Performs the backward pass for the custom autograd function.
Parameters:

Name | Type | Description | Default
---|---|---|---
`ctx` | | The context object that can be used to stash information for backward computation. | required
`grad_x` | | The gradient of the loss with respect to the output of the forward pass. | required
Returns:

Type | Description
---|---
`Tuple[Tensor, None]` | A tuple containing the gradient of the loss with respect to the input of the forward pass and `None` (as there is no gradient with respect to the second input).
Source code in `CTRAIN/bound/taps.py`, lines 323-335.
### forward(ctx, x, alpha=1)

*staticmethod*
Forward pass for the custom operation.
Parameters:

Name | Type | Description | Default
---|---|---|---
`ctx` | | The context object that can be used to stash information for backward computation. | required
`x` | | The input tensor. | required
`alpha` | `float` | A scaling factor. Defaults to 1. | `1`
Returns:

Type | Description
---|---
`Tensor` | The input tensor.
Source code in `CTRAIN/bound/taps.py`, lines 306-321.
## RectifiedLinearGradientLink

Bases: `Function`

RectifiedLinearGradientLink is a custom autograd function that establishes a rectified linear gradient link between the IBP bounds of the feature extractor (`lb`, `ub`) and the PGD bounds (`x_adv`) of the classifier. Note that the resulting backward pass is not a valid gradient of the forward function.
Attributes:

Name | Type | Description
---|---|---
`c` | `float` | A constant used to determine the slope.
`tol` | `float` | A tolerance value to avoid division by zero.

Methods:

Name | Description
---|---
`forward(ctx, lb, ub, x, c: float, tol: float)` | Saves the input tensors and constants for backward computation.
`backward(ctx, grad_x)` | Computes the gradient of the loss with respect to the input bounds (lb, ub).
Source code in `CTRAIN/bound/taps.py`, lines 233-292.
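Since the source listing is collapsed here, the following hypothetical sketch illustrates only the general pattern: the forward pass returns `x` unchanged while saving the bounds, and the backward pass reroutes the incoming gradient onto `lb` and `ub` with a rectified linear slope. The slope formula below is an assumed placeholder, not the actual CTRAIN computation:

```python
import torch
from torch.autograd import Function

class RectifiedLinearGradientLink(Function):
    """Illustrative pattern only; the slope computation is a placeholder."""

    @staticmethod
    def forward(ctx, lb, ub, x, c, tol):
        ctx.save_for_backward(lb, ub, x)
        ctx.c, ctx.tol = c, tol
        return x

    @staticmethod
    def backward(ctx, grad_x):
        lb, ub, x = ctx.saved_tensors
        # Placeholder: ramp the slope linearly with the relative position of x
        # inside [lb, ub]; c shifts where the ramp starts, tol avoids division by zero.
        pos = (x - lb) / (ub - lb + ctx.tol)  # relative position in [0, 1]
        slope_ub = torch.clamp((pos - ctx.c) / (1 - ctx.c + ctx.tol), min=0, max=1)
        slope_lb = torch.clamp((1 - pos - ctx.c) / (1 - ctx.c + ctx.tol), min=0, max=1)
        # Gradients flow to lb and ub only; x, c, and tol receive None.
        return slope_lb * grad_x, slope_ub * grad_x, None, None, None
```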
### backward(ctx, grad_x)

*staticmethod*
Computes the gradient of the loss with respect to the input bounds (lb, ub).
Parameters:

Name | Type | Description | Default
---|---|---|---
`ctx` | | Context object containing saved tensors and constants. | required
`grad_x` | | Gradient of the loss with respect to the output of the forward function. | required
Returns:

Type | Description
---|---
`Tuple[Tensor, Tensor, None, None, None]` | Gradients with respect to lb, ub, and None for the other inputs.
Source code in `CTRAIN/bound/taps.py`, lines 268-292.
### forward(ctx, lb, ub, x, c, tol)

*staticmethod*
Saves the input tensors and constants for backward computation.
Parameters:

Name | Type | Description | Default
---|---|---|---
`ctx` | | Context object to save information for backward computation. | required
`lb` | | Lower bound tensor. | required
`ub` | | Upper bound tensor. | required
`x` | | Input tensor. | required
`c` | `float` | A constant used to determine the slope. | required
`tol` | `float` | A tolerance value to avoid division by zero. | required
Returns:

Type | Description
---|---
`Tensor` | The input tensor `x`.
Source code in `CTRAIN/bound/taps.py`, lines 248-267.
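A brief usage sketch: the link sits between the feature extractor's IBP bounds and the classifier input. The tensor shapes and values below are illustrative; the threshold 0.5 and tolerance 1e-5 mirror the `gradient_link_thresh` and `gradient_link_tolerance` defaults of `bound_taps`:

```python
import torch
from CTRAIN.bound.taps import RectifiedLinearGradientLink

# Illustrative bounds for an 8-sample batch of 128-dim features.
lb = torch.zeros(8, 128, requires_grad=True)
ub = torch.ones(8, 128, requires_grad=True)
x_adv = 0.75 * ub.detach() + 0.25 * lb.detach()  # some point inside [lb, ub]

# Values pass through unchanged, but gradients are rerouted onto lb and ub.
x_linked = RectifiedLinearGradientLink.apply(lb, ub, x_adv, 0.5, 1e-5)
x_linked.sum().backward()  # populates lb.grad and ub.grad; x_adv gets no gradient
```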
## bound_taps(original_model, hardened_model, bounded_blocks, data, target, n_classes, ptb, device='cuda', pgd_steps=20, pgd_restarts=1, pgd_step_size=0.2, pgd_decay_factor=0.2, pgd_decay_checkpoints=(5, 7), gradient_link_thresh=0.5, gradient_link_tolerance=1e-05, propagation='IBP', sabr_args=None)
Compute the bounds of the model's output using the TAPS method.
Parameters:

Name | Type | Description | Default
---|---|---|---
`original_model` | `Module` | The original neural network model. | required
`hardened_model` | `BoundedModule` | The auto_LiRPA model. | required
`bounded_blocks` | `list` | List of bounded blocks of the model. | required
`data` | `Tensor` | The input data tensor. | required
`target` | `Tensor` | The target labels tensor. | required
`n_classes` | `int` | The number of classes for classification. | required
`ptb` | `PerturbationLpNorm` | The perturbation object defining the perturbation set. | required
`device` | `str` | The device to run the computation on. | `'cuda'`
`pgd_steps` | `int` | The number of steps for the PGD attack. | `20`
`pgd_restarts` | `int` | The number of restarts for the PGD attack. | `1`
`pgd_step_size` | `float` | The step size for the PGD attack. | `0.2`
`pgd_decay_factor` | `float` | The decay factor for the PGD attack. | `0.2`
`pgd_decay_checkpoints` | `tuple` | The decay checkpoints for the PGD attack. | `(5, 7)`
`gradient_link_thresh` | `float` | The threshold for gradient linking. | `0.5`
`gradient_link_tolerance` | `float` | The tolerance for gradient linking. | `1e-05`
`propagation` | `str` | The propagation method to use ('IBP' or 'SABR'). | `'IBP'`
`sabr_args` | `dict` | The arguments for the SABR method. | `None`
Returns:

Name | Type | Description
---|---|---
`taps_bound` | `Tuple[Tensor, Tensor]` | The TAPS bounds of the model's output.
Source code in `CTRAIN/bound/taps.py`, lines 11-90.
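A sketch of how `bound_taps` might be invoked. The model, its split into a feature extractor and classifier head, and the construction of `bounded_blocks` are assumptions for illustration; the way CTRAIN actually builds these blocks may differ:

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, PerturbationLpNorm
from CTRAIN.bound.taps import bound_taps

# Illustrative model split into a feature extractor and a classifier head.
feature_extractor = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Flatten()
)
classifier = nn.Linear(16 * 28 * 28, 10)
original_model = nn.Sequential(feature_extractor, classifier)

data = torch.rand(8, 1, 28, 28)
target = torch.randint(0, 10, (8,))
ptb = PerturbationLpNorm(norm=float('inf'), eps=0.1)

# Assumption: the hardened model wraps the full network, while bounded_blocks
# wraps each stage separately so that TAPS can propagate IBP bounds through
# the extractor and run the PGD estimation on the classifier head.
hardened_model = BoundedModule(original_model, torch.empty_like(data))
bounded_blocks = [
    BoundedModule(feature_extractor, torch.empty_like(data)),
    BoundedModule(classifier, torch.empty(8, 16 * 28 * 28)),
]

taps_bound = bound_taps(
    original_model, hardened_model, bounded_blocks,
    data, target, n_classes=10, ptb=ptb, device='cpu',
)
```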