This is my understanding of how the internal training process of nn.MultiheadAttention works. Let's ignore positional encoding and focus only on the Q case.
batch = 1, num_heads = 2, seq_len = 5, problem_dim = 4.
word_embedding = [5, 4]
q_weight = [4, 4]
Q = word_embedding * q_weight   (matrix product, so Q has shape [5, 4])
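For concreteness, here is a minimal sketch of that Q projection with the shapes above (the random values are just placeholders I'm assuming for illustration):

import torch

seq_len, problem_dim = 5, 4
word_embedding = torch.randn(seq_len, problem_dim)   # [5, 4]
q_weight = torch.randn(problem_dim, problem_dim)     # [4, 4]
Q = word_embedding @ q_weight                        # matrix product -> [5, 4]
print(Q.shape)  # torch.Size([5, 4])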
Consider:
import torch
import torch.nn as nn

class MultiHeadAttentionModel(nn.Module):
    def __init__(self, problem_dim, num_heads):
        super().__init__()
        self.multihead_attn = nn.MultiheadAttention(embed_dim=problem_dim, num_heads=num_heads, batch_first=True)

    def forward(self, query, key, value):
        attn_output, attn_output_weights = self.multihead_attn(query, key, value)
        return attn_output, attn_output_weights

# problem_dim = 4, num_heads = 2; Q, K, V assumed to be [batch=1, seq_len=5, problem_dim=4] tensors
model = MultiHeadAttentionModel(problem_dim=problem_dim, num_heads=num_heads)
model.eval()  # <---------------- forward pass
attn_output, attn_output_weights = model(Q, K, V)
attn_output.sum().backward()  # <--------------- training (backward pass); reduced to a scalar so backward() can run
final_linear_weight = model.multihead_attn.out_proj.weight
Now there is the final linear transformation. Ignoring scaling: output = softmax(Q.dot(K_trans)).dot(V).dot(final_linear_weight.T)
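To make that concrete, here is a rough sketch of this simplified formula (single head, no 1/sqrt(d_k) scaling, no bias, random stand-in tensors). It is only my reading of the math above, not exactly what nn.MultiheadAttention does internally, since the real module also splits Q, K, V across heads:

import torch
import torch.nn.functional as F

seq_len, problem_dim = 5, 4
Q = torch.randn(seq_len, problem_dim)                        # word_embedding @ q_weight
K = torch.randn(seq_len, problem_dim)
V = torch.randn(seq_len, problem_dim)
final_linear_weight = torch.randn(problem_dim, problem_dim)  # stand-in for out_proj.weight

attn_weights = F.softmax(Q @ K.T, dim=-1)                    # softmax(Q.dot(K_trans)), scaling ignored
output = attn_weights @ V @ final_linear_weight.T            # final linear transform (nn.Linear applies x @ W.T)
print(output.shape)  # torch.Size([5, 4])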
My question is: is final_linear_weight the only weight that gets learned during the training phase?