I am trying to train an RNN model to classify sentences into 4 classes, but it doesn't seem to work. I tried to overfit 4 examples (blue line), which worked, but even as few as 8 examples (red line) do not work, let alone the whole dataset.
I tried different learning rates and different values of hidden_size and embedding_size, but it doesn't seem to help. What am I missing? I know that if a model can't overfit a small batch, it usually means capacity should be increased, but in this case increasing capacity has no effect.
class RNN(nn.Module):
    def __init__(self, embedding_size=256, hidden_size=128, num_classes=4):
        super().__init__()
        # padding_idx=0 keeps the pad token's embedding at zero
        self.embedding = nn.Embedding(len(vocab), embedding_size, padding_idx=0)
        self.rnn = nn.RNN(embedding_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: [batch_size, sequence_length]
        x = self.embedding(x)   # [batch_size, sequence_length, embedding_size]
        _, h_n = self.rnn(x)    # h_n: [1, batch_size, hidden_size]
        h_n = h_n.squeeze(0)    # [batch_size, hidden_size]
        out = self.fc(h_n)      # [batch_size, num_classes]
        return out
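As a sanity check on the shapes above, here is a minimal standalone snippet (with the same hypothetical sizes, but random inputs instead of embeddings) confirming that the `h_n` returned by `nn.RNN` is exactly the output at the last time step:

```python
import torch
import torch.nn as nn

# Single-layer, unidirectional RNN matching the model's dimensions
rnn = nn.RNN(input_size=256, hidden_size=128, batch_first=True)

x = torch.randn(32, 10, 256)  # [batch_size, seq_len, embedding_size]
output, h_n = rnn(x)

print(output.shape)  # torch.Size([32, 10, 128]) - hidden state at every step
print(h_n.shape)     # torch.Size([1, 32, 128]) - final hidden state only
# For a single-layer, unidirectional RNN, h_n equals the last time step:
print(torch.allclose(output[:, -1, :], h_n.squeeze(0)))  # True
```

Note that with right-padded inputs, this "last time step" includes the pad positions, which is worth keeping in mind when debugging.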
The input data is tokenized sentences, padded with 0 to the length of the longest sentence in the batch, so one sample looks like: [278495441321120,0,0]. The data comes from the AG_NEWS dataset in torchtext.
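For reference, a batch like that can be produced with `torch.nn.utils.rnn.pad_sequence`. The snippet below is a sketch of a collate function under that assumption; the token ids and labels are made up for illustration:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def collate_batch(batch):
    # batch: list of (label, token_id_list) pairs
    labels, token_lists = zip(*batch)
    # Pad each sequence with 0 to the longest sequence in this batch,
    # matching padding_idx=0 in the embedding layer
    texts = pad_sequence([torch.tensor(t) for t in token_lists],
                         batch_first=True, padding_value=0)
    return torch.tensor(labels), texts

# Hypothetical mini-batch of three tokenized sentences
batch = [(2, [278, 495, 44]), (0, [13, 2]), (3, [7])]
labels, texts = collate_batch(batch)
print(texts)
# tensor([[278, 495,  44],
#         [ 13,   2,   0],
#         [  7,   0,   0]])
```

This would typically be passed to the `DataLoader` via its `collate_fn` argument.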
Training code:
model = RNN().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=LR)

model.train()
for epoch in range(NUM_EPOCHS):
    epoch_losses = []
    correct_predictions = 0
    total_samples = 0
    for batch_idx, (labels, texts) in enumerate(train_loader):
        # Move the batch to the same device as the model
        labels, texts = labels.to(device), texts.to(device)
        scores = model(texts)
        loss = criterion(scores, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        epoch_losses.append(loss.item())
        correct_predictions += (scores.max(1).indices == labels).sum().item()
        total_samples += labels.size(0)
    epoch_avg_loss = sum(epoch_losses) / len(epoch_losses)
    # Divide by the number of samples seen this epoch,
    # not by the size of the last batch
    epoch_avg_accuracy = correct_predictions / total_samples