visdom is a flexible visualization tool developed by Meta (Facebook). Similar to TensorBoard, the TensorFlow visualization tool developed by Google, visdom is a visualization tool for PyTorch (with better performance than tensorboardX). visdom supports torch, numpy, and other data types. The client and the server communicate non-blockingly via Tornado.

Installation

Installing visdom is very simple; just run the following command:

pip install visdom

Starting visdom

visdom can be started in either of the following two ways:

visdom
python -m visdom.server

After starting, visdom runs on port 8097 of the local machine by default, but you can change the port, add a username and password, and so on, as follows:

VISDOM_USERNAME=jinzhongxu VISDOM_PASSWORD=112233 VISDOM_USE_ENV_CREDENTIALS=1 /usr/local/miniconda/bin/visdom -base_url /visdom -enable_login -port 7908

Note that to set a username and password, you first need to create a COOKIE_SECRET file in the .visdom directory and write some random value into it; then run the command above. The command above configures a username and password for visdom and also specifies the port and the base URL, so the access address becomes http://localhost:7908/visdom.
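
For reference, here is a minimal sketch (assuming the config directory is ~/.visdom in the user's home) that creates the COOKIE_SECRET file and fills it with a random value:

import os
import secrets

# assumed default visdom config directory: ~/.visdom
cookie_dir = os.path.expanduser("~/.visdom")
os.makedirs(cookie_dir, exist_ok=True)

# write a random hex string into COOKIE_SECRET
with open(os.path.join(cookie_dir, "COOKIE_SECRET"), "w") as f:
    f.write(secrets.token_hex(32))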

More options can be listed with the following command:

visdom -h

Using visdom

import time

import numpy as np
import visdom

If no environment name is specified, the default environment "main" is created; it is recommended to specify your own environment name each time you plot.

env = "Tutorial"
viz = visdom.Visdom(env=env)
# output: Setting up a new session...
# viz.delete_env(env=env)

Single image

img = viz.image(
    np.random.randn(3, 256, 256), opts={"title": "image", "caption": "caption"}
)
for i in range(10):
    viz.image(np.random.randn(3, 256, 256), win=img)
    time.sleep(0.2)

Multiple images

imgs = viz.images(
    np.random.randn(20, 3, 64, 64),
    opts=dict(title="images", caption="captions", nrow=5),
)
for i in range(10):
    viz.images(np.random.randn(20, 3, 64, 64), win=imgs)
    time.sleep(0.2)

Displaying images in batches

import torch
from torchvision import datasets, transforms
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST(
        "/home/jinzhongxu/Dataset/",
        train=True,
        download=True,
        transform=transforms.Compose([transforms.ToTensor()]),
    ),
    batch_size=128,
    shuffle=True,
)
sample = next(iter(train_loader))
viz.images(sample[0], nrow=16, win="mnist", opts=dict(title="mnist", caption="mnist"))

Text

text = viz.text("Hello World!")
strForOut = "This is a string for you to print!"
out = ""
for i in range(len(strForOut)):
    out += strForOut[i]
    viz.text(out, win=text)
    time.sleep(0.2)

Line chart

x = 0
name = ["acc", "loss", "loss2"]
for i in range(50):
    y = np.random.randint(5, size=(1, 3))
    viz.line(
        Y=y,
        X=np.ones(y.shape) * x,
        win="line",
        opts=dict(
            legend=name,
            title="line",
            width=300,
            height=300,
            xlabel="Time",
            ylabel="Volume",
        ),
        update=None if x == 0 else "append",
    )
    time.sleep(0.2)
    x += 1

Scatter plot

colors = np.random.randint(0, 255, (3, 3))
win = viz.scatter(
    X=np.random.rand(255, 2),
    Y=np.random.randint(1, 4, (255)),
    opts=dict(
        markersize=5,
        markercolor=colors,
        legend=["1", "2", "3"],
        markersymbol="cross-thin-open",
    ),
)
colors = np.random.randint(0, 255, (3, 3))
win = viz.scatter(
    X=np.random.rand(255, 3),
    Y=np.random.randint(1, 4, (255)),
    opts=dict(
        markersize=5,
        markercolor=colors,
        legend=["1", "2", "3"],
        markersymbol="cross-thin-open",
    ),
)
legend = list("123")
Sca = viz.scatter(
    X=np.array([[0, 0]]),
    Y=np.array([1]),
    opts=dict(
        markersize=5,
        legend=legend,
    ),
)
for i in range(20):
    X = np.random.rand(1, 2)
    Y = np.random.randint(1, 4, 1)
    viz.scatter(
        X=X,
        Y=Y,
        win=Sca,
        update="append",
        name=legend[Y[0] - 1],
        opts=dict(
            markersize=5,
        ),
    )
    time.sleep(0.5)

Deep learning

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
class MLP(nn.Module):
    """
    A simple multilayer perceptron built from linear layers and ReLU activations.
    """

    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 10),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.model(x)
        return x
batch_size = 128
learning_rate = 0.01
epochs = 50
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST(
        "/home/jinzhongxu/Dataset/",
        train=True,
        download=True,
        transform=transforms.Compose(
            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
        ),
    ),
    batch_size=batch_size,
    shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST(
        "/home/jinzhongxu/Dataset/",
        train=False,
        transform=transforms.Compose(
            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
        ),
    ),
    batch_size=batch_size,
    shuffle=True,
)
viz.line([0.0], [0.0], win="train loss", opts=dict(title="train_losses"))
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
net = MLP().to(device)
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
criterial = nn.CrossEntropyLoss().to(device)
for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28 * 28)
        data, target = data.to(device), target.to(device)
        logits = net(data)
        loss = criterial(logits, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if batch_idx % 100 == 0:
            print(
                "Train Epoch: {} [{} / {} ({:.0f}%)] \t Loss: {:.6f}".format(
                    epoch,
                    batch_idx * len(data),
                    len(train_loader.dataset),
                    100.0 * batch_idx / len(train_loader),
                    loss.item(),
                )
            )
    # evaluate on the test set at the end of each epoch and plot the loss in visdom
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        data, target = data.to(device), target.to(device)
        logits = net(data)
        test_loss += criterial(logits, target).item()
        pred = logits.argmax(dim=1)
        correct += pred.eq(target).float().sum().item()
    test_loss /= len(test_loader.dataset)
    viz.line([test_loss], [epoch], win="train loss", update="append")
    print(
        "\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n".format(
            test_loss,
            correct,
            len(test_loader.dataset),
            100.0 * correct / len(test_loader.dataset),
        )
    )

Saving results

By default, visdom keeps its results in memory. To persist them, click Manage Environment on the web page and then save; by default the results are stored in the current user's .visdom directory.

You can also save from code, as follows:

viz.save(env)

The saved results are stored as JSON files. The next time you open visdom, you can select the environment in the web interface to view them.
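
Besides viz.save, the Python client can also log plotting calls to a file and replay them later; below is a minimal sketch, assuming your visdom version provides the log_to_filename argument and the replay_log method:

import visdom

# record every plotting call to a log file while drawing
viz_log = visdom.Visdom(env="Tutorial", log_to_filename="tutorial.visdom.log")
viz_log.text("Hello, persistent world!")

# later (or on another machine), replay the logged calls onto a running server
viz_replay = visdom.Visdom(env="Tutorial")
viz_replay.replay_log("tutorial.visdom.log")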

For more information about visdom, see the official GitHub repository.

Offline installation

Offline installation requires downloading all dependency packages in advance, as well as the css/js/fonts files that visdom needs at runtime.

To download the dependencies, you first need to know which packages visdom depends on and what their version requirements are. You can check them as follows:

$ pipdeptree -p visdom
visdom==0.1.8.9
- jsonpatch [required: Any, installed: 1.32]
- jsonpointer [required: >=1.9, installed: 2.3]
- numpy [required: >=1.8, installed: 1.22.4]
- pillow [required: Any, installed: 9.2.0]
- pyzmq [required: Any, installed: 23.2.0]
- requests [required: Any, installed: 2.27.1]
- certifi [required: >=2017.4.17, installed: 2022.6.15]
- charset-normalizer [required: ~=2.0.0, installed: 2.0.4]
- idna [required: >=2.5,<4, installed: 3.3]
- urllib3 [required: >=1.21.1,<1.27, installed: 1.26.9]
- scipy [required: Any, installed: 1.8.1]
- numpy [required: >=1.17.3,<1.25.0, installed: 1.22.4]
- six [required: Any, installed: 1.16.0]
- torchfile [required: Any, installed: 0.1.0]
- tornado [required: Any, installed: 6.1]
- websocket-client [required: Any, installed: 1.3.3]

Then, download the static folder that contains the css/js/fonts files. The recommended approach is to first install and run visdom once on a machine with internet access, and then copy out the static folder generated at runtime. By default, the static files are stored under <python-environment>/lib/python3.x/site-packages/visdom. Finally, place the copied static folder into the visdom installation directory on the offline machine.
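
To find out where the static folder should go on the target machine, you can print the installation directory of the visdom package (the exact path depends on your environment):

import os
import visdom

# the runtime static files live under <package directory>/static
print(os.path.dirname(visdom.__file__))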
