Major Open-Source Release! FAIR Releases the Self-Supervised Training Library VISSL!

机器学习算法工程师

2,643 words · about a 6-minute read


2021-02-02 21:59



Aside from Transformer applications, the hottest topic in computer vision right now is self-supervised learning. Facebook AI has just open-sourced VISSL, a PyTorch-based self-supervised learning library that implements state-of-the-art self-supervised methods and ships with detailed tutorials:

https://github.com/facebookresearch/vissl
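
To give a concrete sense of what these self-supervised methods optimize, here is a minimal PyTorch sketch of the NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss used by SimCLR, one of the methods VISSL implements. This is a generic illustration written for this post, not VISSL's own implementation; the batch layout and temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss for two augmented views.

    z1, z2: (N, D) embeddings of the same N images under two augmentations.
    Illustrative sketch only, not VISSL's implementation.
    """
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # (2N, 2N) similarities
    sim.fill_diagonal_(float("-inf"))                   # drop self-similarity
    # For row i, the positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy usage: random embeddings standing in for two views of an 8-image batch.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```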

An official FAIR release, VISSL is built to push self-supervised learning in computer vision forward. Its highlights:

  • Reproducible implementation of SOTA in Self-Supervision: all existing SOTA self-supervised methods are implemented - SwAV, SimCLR, MoCo (v2), PIRL, NPID, NPID++, DeepClusterV2, ClusterFit, RotNet, Jigsaw. Supervised training is also supported.

  • Benchmark suite: a variety of benchmark tasks including linear image classification (places205, imagenet1k, voc07), full finetuning, semi-supervised benchmarks, nearest-neighbor benchmarks, and object detection (Pascal VOC and COCO).

  • Ease of use: a simple YAML configuration system based on Hydra (a minimal sketch of this config-override pattern follows the list).

  • Modular: Easy to design new tasks and reuse the existing components from other tasks (objective functions, model trunk and heads, data transforms, etc.). The modular components are simple drop-in replacements in yaml config files.

  • Scalability: easy to train models on 1 GPU, multiple GPUs, or multiple nodes. Several components for large-scale training are provided as simple config-file plugs: activation checkpointing, ZeRO, FP16, LARC, a stateful data sampler, a data class to handle invalid images, large model backbones like RegNets, etc.

  • Model Zoo: Over 60 pre-trained self-supervised model weights.
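
To make the Hydra/YAML point above concrete, here is a small self-contained sketch using OmegaConf (the config library underneath Hydra) that mimics the "override a base YAML config with dotted key=value pairs" pattern. The keys here (MODEL.TRUNK, OPTIMIZER.*) are invented for illustration and are not VISSL's actual config schema; check the VISSL docs for the real names.

```python
# Minimal sketch of the Hydra/OmegaConf override pattern that a YAML-driven
# config system like VISSL's builds on. Keys below are illustrative only.
from omegaconf import OmegaConf

base_yaml = """
MODEL:
  TRUNK: resnet50
  HEAD: mlp
OPTIMIZER:
  name: sgd
  lr: 0.3
  num_epochs: 100
"""

base_cfg = OmegaConf.create(base_yaml)

# Drop-in replacements expressed as dotted key=value overrides, the same
# style Hydra accepts on the command line.
overrides = OmegaConf.from_dotlist(["MODEL.TRUNK=regnet_y_16gf", "OPTIMIZER.lr=0.6"])
cfg = OmegaConf.merge(base_cfg, overrides)

print(OmegaConf.to_yaml(cfg))  # merged config: new trunk and learning rate
```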

The full Model Zoo (it really is extremely detailed) is here:
https://github.com/facebookresearch/vissl/blob/master/MODEL_ZOO.md
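
A common downstream pattern with such a model zoo is to load a self-supervised ResNet-50 checkpoint into a standard torchvision model and reuse the trunk as a frozen feature extractor. The sketch below is generic PyTorch, not a VISSL API call; it assumes the checkpoint has already been converted to torchvision's state_dict key layout, and "converted_rn50.pth" is a placeholder file name.

```python
# Hypothetical sketch: load a self-supervised ResNet-50 checkpoint that has
# already been converted to torchvision's state_dict layout, then use the
# trunk as a feature extractor. "converted_rn50.pth" is a placeholder.
import torch
import torchvision

model = torchvision.models.resnet50()
state_dict = torch.load("converted_rn50.pth", map_location="cpu")

# Self-supervised checkpoints usually carry no classifier head, so allow
# missing fc.* keys instead of failing.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)

model.fc = torch.nn.Identity()   # expose the 2048-d pooled features
model.eval()

with torch.no_grad():
    feats = model(torch.randn(1, 3, 224, 224))
print(feats.shape)  # torch.Size([1, 2048])
```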


VISSL also comes with detailed documentation and tutorials:

Get started with VISSL by trying one of the Colab tutorial notebooks.

  • Train SimCLR on 1-gpu

  • Extracting Features from a pretrained model

  • Benchmark task: Full finetuning on ImageNet-1K

  • Benchmark task: Linear image classification on ImageNet-1K (the general protocol is sketched after this list)

  • Large scale training (fp16, LARC, ZeRO)
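
The linear-classification benchmark in the tutorial above boils down to freezing the pretrained trunk and training only a linear classifier on top. Below is a generic PyTorch sketch of that protocol, not the tutorial notebook's actual code; the hyperparameters and the toy batch are placeholders.

```python
# Generic sketch of the linear-evaluation protocol: freeze a pretrained trunk,
# train only a linear head. Hyperparameters and the toy batch are placeholders.
import torch
import torch.nn as nn
import torchvision

trunk = torchvision.models.resnet50()   # assume pretrained weights are loaded here
trunk.fc = nn.Identity()                # expose 2048-d pooled features
for p in trunk.parameters():
    p.requires_grad = False             # frozen trunk: only the head is trained
trunk.eval()

head = nn.Linear(2048, 1000)            # 1000 classes for ImageNet-1K
optimizer = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    with torch.no_grad():               # no gradients through the frozen trunk
        feats = trunk(images)
    loss = criterion(head(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch standing in for an ImageNet-1K dataloader.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 1000, (4,))))
```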


In fact, besides VISSL, SenseTime had earlier open-sourced a self-supervised training library of its own in the OpenMMLab (mmcv) family, OpenSelfSup:

https://github.com/open-mmlab/OpenSelfSup

It likewise already implements many self-supervised learning methods.

Which of OpenSelfSup and VISSL is more usable will take hands-on testing to find out, but each is clearly tied to its maintainer's object detection library - mmdetection and detectron2, respectively. Either way, here's hoping this area keeps growing!

·················END·················






 



