Issue Running CW Attack with PyTorch #1239

Open

christymarc opened this issue Oct 26, 2023 · 0 comments
I am working with PyTorch 1.13.1 and Python 3.10.12.

When using the CleverHans CW attack for PyTorch, the attack script runs into three errors.

  1. On line 108 of the attack's .py file:
const = x.new_ones(len(x), 1) * initial_const

The following error comes up:

TypeError: new_ones(): argument 'dtype' must be torch.dtype, not int

To solve this, I assumed the 1 was supposed to denote a dimension of the tensor rather than a dtype, so I wrapped the size arguments in an extra set of parentheses:

const = x.new_ones((len(x), 1)) * initial_const
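
For reference, here is a minimal standalone check of the two calls (the batch shape and constant below are stand-ins, not values from the attack):

import torch

x = torch.rand(4, 3, 32, 32)  # stand-in for a batch of inputs
initial_const = 1e-2

# Original call: on my version the second positional 1 gets parsed as a dtype,
# raising "TypeError: new_ones(): argument 'dtype' must be torch.dtype, not int".
# const = x.new_ones(len(x), 1) * initial_const

# Fixed call: pass the shape as a single tuple so (len(x), 1) is the size.
const = x.new_ones((len(x), 1)) * initial_const
print(const.shape)  # torch.Size([4, 1]), i.e. one constant per input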
  2. On line 134:
optimizer = torch.optim.Adam([modifier], lr=lr)

I get the following error:

ValueError: can't optimize a non-leaf Tensor

This is due to line 123:

modifier = torch.zeros_like(x, requires_grad=True)

This returns a non-leaf tensor. I suspect a version update may have changed this function so that it no longer returns a leaf tensor, but regardless, I fixed it using PyTorch's torch.zeros function, since the documentation for torch.zeros_like describes the two as equivalent when given the corresponding parameters. The edited line looks like this:

modifier = torch.zeros(x.size(), requires_grad=True, dtype=x.dtype, layout=x.layout, device=x.device)
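
If it is useful, another version I believe is equivalent (just a sketch, assuming the only problem is that modifier ends up as a non-leaf) is to create the zeros first and then enable gradients in place, which guarantees a leaf tensor with no graph history:

import torch

x = torch.rand(4, 3, 32, 32, requires_grad=True)  # stand-in input
lr = 5e-3

# detach() drops any history and requires_grad_(True) then makes it a leaf that
# tracks gradients; zeros_like copies x's dtype, layout, and device.
modifier = torch.zeros_like(x).detach().requires_grad_(True)
optimizer = torch.optim.Adam([modifier], lr=lr)  # no "non-leaf Tensor" error here
print(modifier.is_leaf)  # True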
  3. On line 155:
loss.backward()

I received this error:

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

To solve this, I just specified retain_graph=True:

loss.backward(retain_graph=True)

But I'm not sure if this is the most efficient fix...
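
If I understand the error correctly, it comes from a piece of the graph that is built once outside the optimization loop and then reused on every iteration. A minimal sketch of that pattern (hypothetical tensors, not the actual attack code):

import torch

x = torch.rand(4, 3, 32, 32, requires_grad=True)  # input that still tracks gradients
modifier = torch.zeros_like(x).detach().requires_grad_(True)
optimizer = torch.optim.Adam([modifier], lr=5e-3)

# Built once outside the loop; it becomes part of every iteration's graph.
x_transformed = torch.tanh(x)

for _ in range(3):
    optimizer.zero_grad()
    loss = ((x_transformed + modifier) ** 2).sum()
    # loss.backward()  # fails on the second iteration with the error above
    loss.backward(retain_graph=True)  # the workaround I used
    optimizer.step()

# If x (or x_transformed) is detached before the loop, nothing outside the loop
# needs its graph kept alive and a plain loss.backward() works.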

While the solutions I implemented make the attack script run and seemingly work, I am not sure whether I am missing something or have unknowingly changed the code's functionality in some way, so I would really appreciate any feedback and/or guidance.
