A PyTorch Implementation of Self-Attention Building Blocks for Computer Vision
Author: AI Summer
Translated by: ronghuaiyang
A very handy git repository that packages a comprehensive set of self-attention building blocks for computer vision, ready to call directly, so there is no need to reinvent the wheel.
Repository: https://github.com/The-AI-Summer/self-attention-cv
Self-attention mechanisms for computer vision, implemented in PyTorch with einsum and einops. The focus is on self-attention modules for computer vision.
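For intuition, here is a minimal, self-contained sketch (not the library's code) of single-head scaled dot-product attention written with torch.einsum, the style this package is built on:
import torch

def sdp_attention(q, k, v):
    # q, k, v: [batch, tokens, dim]
    scale = q.size(-1) ** -0.5
    scores = torch.einsum('b i d, b j d -> b i j', q, k) * scale  # pairwise similarities
    attn = torch.softmax(scores, dim=-1)  # attention weights per query token
    return torch.einsum('b i j, b j d -> b i d', attn, v)  # weighted sum of values

x = torch.rand(16, 10, 64)
y = sdp_attention(x, x, x)  # [16, 10, 64]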
Install with pip
$ pip install self-attention-cv
In case you don't have a GPU, it is best to pre-install PyTorch in your environment.
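For example (the exact command depends on your platform; see pytorch.org for the build matching your system):
$ pip install torch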
Related articles
How Attention works in Deep Learning
How Transformers work in deep learning and NLP
How the Vision Transformer (ViT) works in 10 minutes: an image is worth 16x16 words
Understanding einsum for Deep learning: implement a transformer with multi-head self-attention from scratch
How Positional Embeddings work in Self-Attention
Code examples
Multi-head attention
import torch
from self_attention_cv import MultiHeadSelfAttention
model = MultiHeadSelfAttention(dim=64)
x = torch.rand(16, 10, 64) # [batch, tokens, dim]
mask = torch.zeros(10, 10) # tokens X tokens
mask[5:8, 5:8] = 1
y = model(x, mask)
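The output y has the same shape as the input, [16, 10, 64]; the mask is a binary tokens × tokens matrix that controls which token pairs are allowed to interact in the attention scores.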
Axial attention
import torch
from self_attention_cv import AxialAttentionBlock
model = AxialAttentionBlock(in_channels=256, dim=64, heads=8)
x = torch.rand(1, 256, 64, 64)  # [batch, channels, height, width]
y = model(x)
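Axial attention (Wang et al., 2020, listed in the references) factorizes full 2D self-attention into two consecutive 1D attention passes, one along the height axis and one along the width axis, reducing the cost from O((H·W)²) to O(H·W·(H+W)).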
Vanilla Transformer Encoder
import torch
from self_attention_cv import TransformerEncoder
model = TransformerEncoder(dim=64, blocks=6, heads=8)
x = torch.rand(16, 10, 64) # [batch, tokens, dim]
mask = torch.zeros(10, 10) # tokens X tokens
mask[5:8, 5:8] = 1
y = model(x, mask)
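Each encoder block combines multi-head self-attention with a position-wise feed-forward network, residual connections, and layer normalization. The following is a generic illustrative sketch of one such block in plain PyTorch, not the library's exact implementation:
import torch
import torch.nn as nn

class EncoderBlockSketch(nn.Module):
    # a generic post-norm transformer encoder block (illustrative, hypothetical class)
    def __init__(self, dim=64, heads=8, dim_linear_block=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim_linear_block), nn.GELU(),
            nn.Linear(dim_linear_block, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        x = self.norm1(x + self.attn(x, x, x)[0])  # self-attention + residual, then norm
        return self.norm2(x + self.mlp(x))  # feed-forward + residual, then norm

block = EncoderBlockSketch()
out = block(torch.rand(16, 10, 64))  # [16, 10, 64]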
Vision Transformer with a ResNet50 backbone for image classification
import torch
from self_attention_cv import ViT, ResNet50ViT
model1 = ResNet50ViT(img_dim=128, pretrained_resnet=False,
                     blocks=6, num_classes=10,
                     dim_linear_block=256, dim=256)
# or
model2 = ViT(img_dim=256, in_channels=3, patch_dim=16, num_classes=10, dim=512)
x = torch.rand(2, 3, 256, 256)
y = model2(x) # [2,10]
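With img_dim=256 and patch_dim=16, the image is cut into (256/16)² = 256 non-overlapping patches; each patch is flattened and linearly projected into a 512-dimensional token, and the classification head maps the encoded sequence to the 10 output logits, hence the [2, 10] result.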
A reimplementation of Unet with a Vision Transformer encoder
import torch
from self_attention_cv.transunet import TransUnet
a = torch.rand(2, 3, 128, 128)
model = TransUnet(in_channels=3, img_dim=128, vit_blocks=8,
                  vit_dim_linear_mhsa_block=512, classes=5)
y = model(a) # [2, 5, 128, 128]
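The output [2, 5, 128, 128] contains per-pixel scores over the 5 classes at the full 128 × 128 input resolution, as expected from a Unet-style segmentation model.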
Bottleneck Attention block
import torch
from self_attention_cv.bottleneck_transformer import BottleneckBlock
inp = torch.rand(1, 512, 32, 32)
bottleneck_block = BottleneckBlock(in_channels=512, fmap_size=(32, 32),
                                   heads=4, out_channels=1024, pooling=True)
y = bottleneck_block(inp)
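As proposed in the BoTNet paper (Srinivas et al., 2021, listed in the references), this block replaces the spatial 3×3 convolution of a ResNet bottleneck with multi-head self-attention over the feature map.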
Available positional embeddings
1D Positional Embeddings
import torch
from self_attention_cv.pos_embeddings import AbsPosEmb1D, RelPosEmb1D
model = AbsPosEmb1D(tokens=20, dim_head=64)
# [batch, heads, tokens, dim_head]
q = torch.rand(2, 3, 20, 64)
y1 = model(q)
model = RelPosEmb1D(tokens=20, dim_head=64, heads=3)
q = torch.rand(2, 3, 20, 64)
y2 = model(q)
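Conceptually, the absolute variant learns one embedding per position index, while the relative variant indexes its embeddings by the signed distance between query and key positions, making the resulting scores depend on relative rather than absolute placement.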
2D Positional Embeddings
import torch
from self_attention_cv.pos_embeddings import RelPosEmb2D
dim = 32 # spatial dim of the feat map
model = RelPosEmb2D(
    feat_map_size=(dim, dim),
    dim_head=128)
q = torch.rand(2, 4, dim*dim, 128)
y = model(q)
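Here the dim × dim spatial positions of the feature map are flattened into dim*dim query tokens; the relative embedding scores each query-key pair according to its 2D offset on the feature map, with the height and width offsets typically handled separately.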
References
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. arXiv preprint arXiv:1706.03762.
Wang, H., Zhu, Y., Green, B., Adam, H., Yuille, A., & Chen, L. C. (2020, August). Axial-DeepLab: Stand-alone axial-attention for panoptic segmentation. In European Conference on Computer Vision (pp. 108-126). Springer, Cham.
Srinivas, A., Lin, T. Y., Parmar, N., Shlens, J., Abbeel, P., & Vaswani, A. (2021). Bottleneck Transformers for Visual Recognition. arXiv preprint arXiv:2101.11605.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., ... & Houlsby, N. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
Original English text: https://github.com/The-AI-Summer/self-attention-cv