I'm trying to understand how progress bars built with tqdm actually work. I have some code that looks like this:
import torch
import torchvision
from torchvision import transforms

import data_setup, engine  # local modules from my project

print(f"torch version: {torch.__version__}")
print(f"torchvision version: {torchvision.__version__}")

load_data()
manual_transforms = transforms.Compose([])
train_dataloader, test_dataloader, class_names = data_setup.create_dataloaders()

# Then, inside the main function, I call the train function that lives in `engine.py`
def main():
    results = engine.train(model=model,
                           train_dataloader=train_dataloader,
                           test_dataloader=test_dataloader,
                           optimizer=optimizer,
                           loss_fn=loss_fn,
                           epochs=5,
                           device=device)
The engine.train() function contains the loop for epoch in tqdm(range(epochs)):, which then trains on every batch so the training progress can be visualized. But on every step tqdm takes, the script also prints these statements again:
print(f"torch version: {torch.__version__}")
print(f"torchvision version: {torchvision.__version__}")
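For reference, the tqdm part on its own can be sketched like this (a minimal, standalone sketch; the loop body here is a placeholder, not my real training code):

```python
from tqdm.auto import tqdm  # auto picks a notebook or console progress bar

epochs = 5
completed = []
for epoch in tqdm(range(epochs)):  # tqdm wraps the iterable and advances the bar each iteration
    completed.append(epoch)        # placeholder for training/testing one epoch
```

Nothing in this loop prints the version strings itself, which is why the repeated output confuses me.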
Finally, my question is: why does this happen? How does the main function get access to these module-level statements, and how can I avoid everything being printed on every loop?
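For context, here is a self-contained sketch of one mechanism I suspect (an assumption on my part, not confirmed): with num_workers > 0, PyTorch's DataLoader starts worker processes, and under the spawn start method (the default on Windows and macOS) each worker re-imports the main script, so every module-level print runs again. Code under an if __name__ == "__main__": guard runs only in the parent. Plain multiprocessing stands in for the DataLoader workers here:

```python
import multiprocessing as mp

# Module-level code: runs in the parent process AND again in every
# spawned worker, because spawn re-imports this script in each child.
print("module-level print")

def square(x):
    return x * x

if __name__ == "__main__":
    # Guarded code: runs only when the script is executed directly,
    # never when a worker re-imports it.
    mp.set_start_method("spawn", force=True)
    with mp.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # printed once, by the parent
```

Run as a script, "module-level print" appears once for the parent plus once per spawned worker, while the guarded line appears only once. If that is what's happening in my case, the fix would presumably be moving the prints and the setup calls under the guard or into main().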