I have a tensor p of shape (B, 3, N) in PyTorch:
# 2 batches, 3 channels (x, y, z), 5 points
p = torch.rand(2, 3, 5, requires_grad=True)
"""
p: tensor([[[0.8365, 0.0505, 0.4208, 0.7465, 0.6843],
[0.9922, 0.2684, 0.6898, 0.3983, 0.4227],
[0.3188, 0.2471, 0.9552, 0.5181, 0.6877]],
[[0.1079, 0.7694, 0.2194, 0.7801, 0.8043],
[0.8554, 0.3505, 0.4622, 0.0339, 0.7909],
[0.5806, 0.7593, 0.0193, 0.5191, 0.1589]]], requires_grad=True)
"""
Then I have another tensor z_shift of shape [B, 1]:
z_shift = torch.tensor([[1.0], [10.0]], requires_grad=True)
"""
z_shift: tensor([[1.],
[10.]], requires_grad=True)
"""
I want to apply each batch's z shift to all of that batch's points, leaving x and y unchanged:
"""
p: tensor([[[0.8365, 0.0505, 0.4208, 0.7465, 0.6843],
[0.9922, 0.2684, 0.6898, 0.3983, 0.4227],
[1.3188, 1.2471, 1.9552, 1.5181, 1.6877]],
[[0.1079, 0.7694, 0.2194, 0.7801, 0.8043],
[0.8554, 0.3505, 0.4622, 0.0339, 0.7909],
[10.5806, 10.7593, 10.0193, 10.5191, 10.1589]]])
"""
I managed to do this with:

p[:, 2, :] += z_shift

which works when requires_grad=False, but it fails inside the forward of my nn.Module (which, as I understand it, corresponds to the requires_grad=True case):
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
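One way around this error (a sketch of a possible fix, not from the question itself) is to avoid the in-place write entirely: build a per-channel shift of shape (B, 3, 1) that is zero for x and y and equals z_shift for z, then add it out-of-place and let broadcasting apply it across all N points. The names zeros, shift, and p_shifted below are mine, chosen for illustration.

```python
import torch

# Same setup as in the question: 2 batches, 3 channels (x, y, z), 5 points.
p = torch.rand(2, 3, 5, requires_grad=True)
z_shift = torch.tensor([[1.0], [10.0]], requires_grad=True)

# Zero shift for the x and y channels, z_shift for the z channel.
zeros = torch.zeros_like(z_shift)                    # shape (B, 1)
shift = torch.stack([zeros, zeros, z_shift], dim=1)  # shape (B, 3, 1)

# Out-of-place add; broadcasts (B, 3, 1) over the N points.
# No leaf tensor is modified in place, so autograd is happy.
p_shifted = p + shift
p_shifted.sum().backward()  # gradients flow to both p and z_shift
```

An alternative that keeps the original indexing style is to clone first (`q = p.clone(); q[:, 2, :] += z_shift`): the clone is not a leaf, so the in-place update on its view is allowed and remains differentiable.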