pgd
`pgd_attack(model, data, target, x_L, x_U, restarts=1, step_size=0.2, n_steps=200, early_stopping=True, device='cuda', decay_factor=0.1, decay_checkpoints=())`
Performs the Projected Gradient Descent (PGD) attack on the given model and data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model` | Module | The neural network model to attack. | *required* |
| `data` | Tensor | The input data to perturb. | *required* |
| `target` | Tensor | The target labels for the input data. | *required* |
| `x_L` | Tensor | The lower bound of the input data. | *required* |
| `x_U` | Tensor | The upper bound of the input data. | *required* |
| `restarts` | int | The number of random restarts. | `1` |
| `step_size` | float | The step size for each gradient update. | `0.2` |
| `n_steps` | int | The number of steps for the attack. | `200` |
| `early_stopping` | bool | Whether to stop early if adversarial examples are found. | `True` |
| `device` | str | The device to perform the attack on. | `'cuda'` |
| `decay_factor` | float | The factor by which to decay the step size at each checkpoint. | `0.1` |
| `decay_checkpoints` | tuple | The checkpoints (step indices) at which to decay the step size. | `()` |
Returns:
| Type | Description |
|---|---|
| `torch.Tensor` | The generated adversarial examples. |
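Example:

A minimal usage sketch. The import path is assumed from the source location noted below, and the toy model, batch, and `eps` value are placeholders for a real classifier and data loader; `x_L` and `x_U` are built as `data ± eps` clipped to `[0, 1]`, giving a standard L-infinity perturbation set as described in the parameter table.

```python
import torch
import torch.nn as nn

from CTRAIN.attacks.pgd import pgd_attack  # assumed import path, based on CTRAIN/attacks/pgd.py

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Stand-ins for a trained classifier and a batch from a data loader.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).to(device)
data = torch.rand(8, 1, 28, 28, device=device)
target = torch.randint(0, 10, (8,), device=device)

# Perturbation bounds: an L-infinity ball of radius eps around the inputs,
# clipped to the valid [0, 1] input range.
eps = 0.3
x_L = (data - eps).clamp(0.0, 1.0)
x_U = (data + eps).clamp(0.0, 1.0)

adv_examples = pgd_attack(
    model, data, target, x_L, x_U,
    restarts=3,
    step_size=0.2,
    n_steps=200,
    early_stopping=True,
    device=device,
    decay_factor=0.1,
    decay_checkpoints=(100, 150),  # step size is multiplied by decay_factor at steps 100 and 150
)

print(adv_examples.shape)  # same shape as `data`, constrained to [x_L, x_U]
```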
Source code in `CTRAIN/attacks/pgd.py`