The Most Comprehensive List of Survey Papers in Machine Learning!
Machine Learning Lab
10,053 words in total, about a 21-minute read
· 2020-10-12
Author: kaiyuan
From: NewBeeNLP
I also noticed that the original repo did not include many NLP-related surveys, so I added some articles I found worthwhile and reorganized the updated list in AI-Surveys[1].
- ml-surveys: https://github.com/eugeneyan/ml-surveys
- AI-Surveys: https://github.com/KaiyuanGao/AI-Surveys
Natural Language Processing
- Deep learning: Recent Trends in Deep Learning Based Natural Language Processing[2]
- Text classification: Deep Learning Based Text Classification: A Comprehensive Review[3]
- Text generation: Survey of the SOTA in Natural Language Generation: Core tasks, applications and evaluation[4]
- Text generation: Neural Language Generation: Formulation, Methods, and Evaluation[5]
- Transfer learning: Exploring Transfer Learning with T5: the Text-To-Text Transfer Transformer[6] (Paper[7])
- Transfer learning: Neural Transfer Learning for Natural Language Processing[8]
- Knowledge graphs: A Survey on Knowledge Graphs: Representation, Acquisition and Applications[9]
- Named entity recognition: A Survey on Deep Learning for Named Entity Recognition[10]
- Relation extraction: More Data, More Relations, More Context and More Openness: A Review and Outlook for Relation Extraction[11]
- Sentiment analysis: Deep Learning for Sentiment Analysis: A Survey[12]
- Aspect-based sentiment analysis: Deep Learning for Aspect-Level Sentiment Classification: Survey, Vision, and Challenges[13]
- Text matching: Neural Network Models for Paraphrase Identification, Semantic Textual Similarity, Natural Language Inference, and Question Answering[14]
- Reading comprehension: Neural Reading Comprehension And Beyond[15]
- Reading comprehension: Neural Machine Reading Comprehension: Methods and Trends[16]
- Machine translation: Neural Machine Translation: A Review[17]
- Machine translation: A Survey of Domain Adaptation for Neural Machine Translation[18]
- Pre-trained models: Pre-trained Models for Natural Language Processing: A Survey[19]
- Attention mechanisms: An Attentive Survey of Attention Models[20]
- Attention mechanisms: An Introductory Survey on Attention Mechanisms in NLP Problems[21]
- Attention mechanisms: Attention in Natural Language Processing[22]
- BERT: A Primer in BERTology: What we know about how BERT works[23]
- Beyond Accuracy: Behavioral Testing of NLP Models with CheckList[24]
- Evaluation of Text Generation: A Survey[25]
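Several of the surveys above ([20]–[22]) center on the attention mechanism. As a quick refresher, the scaled dot-product attention popularized by the Transformer can be sketched in a few lines of NumPy (a minimal illustration written for this list, not code taken from any of the papers):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention.

    Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v).
    Returns one weighted sum of value rows per query.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # query-key similarities, scaled
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row is a distribution over keys
    return weights @ V                              # convex combination of value rows
```

Each output row is a convex combination of the rows of V, with weights given by how strongly the query attends to each key.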
Recommender Systems
- Recommender systems survey[26]
- Deep Learning based Recommender System: A Survey and New Perspectives[27]
- Are We Really Making Progress? A Worrying Analysis of Neural Recommendation Approaches[28]
- A Survey of Serendipity in Recommender Systems[29]
- Diversity in Recommender Systems – A survey[30]
- A Survey of Explanations in Recommender Systems[31]
Deep Learning
- A State-of-the-Art Survey on Deep Learning Theory and Architectures[32]
- Knowledge distillation: Knowledge Distillation: A Survey[33]
- Model compression: Compression of Deep Learning Models for Text: A Survey[34]
- Transfer learning: A Survey on Deep Transfer Learning[35]
- Neural architecture search: A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions[36]
- Neural architecture search: Neural Architecture Search: A Survey[37]
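For readers new to knowledge distillation ([33]): the core idea is to train a small student model to match a large teacher's temperature-softened output distribution. A minimal sketch of the standard Hinton-style soft-target loss (an illustration only; the function and argument names are chosen here, not drawn from the survey):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields a softer distribution."""
    z = logits / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence from student to teacher at temperature T.

    The T**2 factor keeps gradient magnitudes comparable across temperatures,
    as in the original distillation formulation.
    """
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student's softened predictions
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

In practice this term is combined with the ordinary cross-entropy against the hard labels, weighted by a mixing coefficient.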
Computer Vision
- Object detection: Object Detection in 20 Years[38]
- Adversarial attacks: Threat of Adversarial Attacks on Deep Learning in Computer Vision[39]
- Autonomous driving: Computer Vision for Autonomous Vehicles: Problems, Datasets and State of the Art[40]
Reinforcement Learning
- A Brief Survey of Deep Reinforcement Learning[41]
- Transfer Learning for Reinforcement Learning Domains[42]
- Review of Deep Reinforcement Learning Methods and Applications in Economics[43]
Embeddings
- Graphs: A Comprehensive Survey of Graph Embedding: Problems, Techniques and Applications[44]
- Text: From Word to Sense Embeddings: A Survey on Vector Representations of Meaning[45]
- Text: Diachronic Word Embeddings and Semantic Shifts[46]
- Text: Word Embeddings: A Survey[47]
- A Survey on Contextual Embeddings[48]
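The embedding surveys above ([44]–[48]) all lean on the same basic operation: comparing vectors by cosine similarity. A minimal sketch (illustrative only):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

For word or sentence embeddings, values near 1 indicate similar meaning, near 0 unrelated meaning, and negative values opposing directions in the vector space.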
Meta-learning & Few-shot Learning
- A Survey on Knowledge Graphs: Representation, Acquisition and Applications[49]
- Meta-learning for Few-shot Natural Language Processing: A Survey[50]
- Learning from Few Samples: A Survey[51]
- Meta-Learning in Neural Networks: A Survey[52]
- A Comprehensive Overview and Survey of Recent Advances in Meta-Learning[53]
- Baby steps towards few-shot learning with multiple semantics[54]
- Meta-Learning: A Survey[55]
- A Perspective View And Survey Of Meta-learning[56]
Transfer Learning
- A Survey on Transfer Learning[57]
References
[1]AI-Surveys: https://github.com/KaiyuanGao/AI-Surveys
[2]Recent Trends in Deep Learning Based Natural Language Processing: https://arxiv.org/pdf/1708.02709.pdf
[3]Deep Learning Based Text Classification: A Comprehensive Review: https://arxiv.org/pdf/2004.03705
[4]Survey of the SOTA in Natural Language Generation: Core tasks, applications and evaluation: https://www.jair.org/index.php/jair/article/view/11173/26378
[5]Neural Language Generation: Formulation, Methods, and Evaluation: https://arxiv.org/pdf/2007.15780.pdf
[6]Exploring Transfer Learning with T5: the Text-To-Text Transfer Transformer: https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html
[7]Paper: https://arxiv.org/abs/1910.10683
[8]Neural Transfer Learning for Natural Language Processing: https://aran.library.nuigalway.ie/handle/10379/15463
[9]A Survey on Knowledge Graphs: Representation, Acquisition and Applications: https://arxiv.org/abs/2002.00388
[10]A Survey on Deep Learning for Named Entity Recognition: https://arxiv.org/abs/1812.09449
[11]More Data, More Relations, More Context and More Openness: A Review and Outlook for Relation Extraction: https://arxiv.org/abs/2004.03186
[12]Deep Learning for Sentiment Analysis : A Survey: https://arxiv.org/abs/1801.07883
[13]Deep Learning for Aspect-Level Sentiment Classification: Survey, Vision, and Challenges: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8726353
[14]Neural Network Models for Paraphrase Identification, Semantic Textual Similarity, Natural Language Inference, and Question Answering: https://www.aclweb.org/anthology/C18-1328/
[15]Neural Reading Comprehension And Beyond: https://stacks.stanford.edu/file/druid:gd576xb1833/thesis-augmented.pdf
[16]Neural Machine Reading Comprehension: Methods and Trends: https://arxiv.org/abs/1907.01118
[17]Neural Machine Translation: A Review: https://arxiv.org/abs/1912.02047
[18]A Survey of Domain Adaptation for Neural Machine Translation: https://www.aclweb.org/anthology/C18-1111.pdf
[19]Pre-trained Models for Natural Language Processing: A Survey: https://arxiv.org/abs/2003.08271
[20]An Attentive Survey of Attention Models: https://arxiv.org/pdf/1904.02874.pdf
[21]An Introductory Survey on Attention Mechanisms in NLP Problems: https://arxiv.org/abs/1811.05544
[22]Attention in Natural Language Processing: https://arxiv.org/abs/1902.02181
[23]A Primer in BERTology: What we know about how BERT works: https://arxiv.org/pdf/2002.12327.pdf
[24]Beyond Accuracy: Behavioral Testing of NLP Models with CheckList: https://arxiv.org/pdf/2005.04118.pdf
[25]Evaluation of Text Generation: A Survey: https://arxiv.org/pdf/2006.14799.pdf
[26]Recommender systems survey: http://irntez.ir/wp-content/uploads/2016/12/sciencedirec.pdf
[27]Deep Learning based Recommender System: A Survey and New Perspectives: https://arxiv.org/pdf/1707.07435.pdf
[28]Are We Really Making Progress? A Worrying Analysis of Neural Recommendation Approaches: https://arxiv.org/pdf/1907.06902.pdf
[29]A Survey of Serendipity in Recommender Systems: https://www.researchgate.net/publication/306075233_A_Survey_of_Serendipity_in_Recommender_Systems
[30]Diversity in Recommender Systems – A survey: https://papers-gamma.link/static/memory/pdfs/153-Kunaver_Diversity_in_Recommender_Systems_2017.pdf
[31]A Survey of Explanations in Recommender Systems: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.418.9237&rep=rep1&type=pdf
[32]A State-of-the-Art Survey on Deep Learning Theory and Architectures: https://www.mdpi.com/2079-9292/8/3/292/htm
[33]Knowledge Distillation: A Survey: https://arxiv.org/pdf/2006.05525.pdf
[34]Compression of Deep Learning Models for Text: A Survey: https://arxiv.org/pdf/2008.05221.pdf
[35]A Survey on Deep Transfer Learning: https://arxiv.org/pdf/1808.01974.pdf
[36]A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions: https://arxiv.org/abs/2006.02903
[37]Neural Architecture Search: A Survey: https://arxiv.org/abs/1808.05377
[38]Object Detection in 20 Years: https://arxiv.org/pdf/1905.05055.pdf
[39]Threat of Adversarial Attacks on Deep Learning in Computer Vision: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8294186
[40]Computer Vision for Autonomous Vehicles: Problems, Datasets and State of the Art: https://arxiv.org/pdf/1704.05519.pdf
[41]A Brief Survey of Deep Reinforcement Learning: https://arxiv.org/pdf/1708.05866.pdf
[42]Transfer Learning for Reinforcement Learning Domains: http://www.jmlr.org/papers/volume10/taylor09a/taylor09a.pdf
[43]Review of Deep Reinforcement Learning Methods and Applications in Economics: https://arxiv.org/pdf/2004.01509.pdf
[44]A Comprehensive Survey of Graph Embedding: Problems, Techniques and Applications: https://arxiv.org/pdf/1709.07604
[45]From Word to Sense Embeddings: A Survey on Vector Representations of Meaning: https://www.jair.org/index.php/jair/article/view/11259/26454
[46]Diachronic Word Embeddings and Semantic Shifts: https://arxiv.org/pdf/1806.03537.pdf
[47]Word Embeddings: A Survey: https://arxiv.org/abs/1901.09069
[48]A Survey on Contextual Embeddings: https://arxiv.org/abs/2003.07278
[49]A Survey on Knowledge Graphs: Representation, Acquisition and Applications: https://arxiv.org/abs/2002.00388
[50]Meta-learning for Few-shot Natural Language Processing: A Survey: https://arxiv.org/abs/2007.09604
[51]Learning from Few Samples: A Survey: https://arxiv.org/abs/2007.15484
[52]Meta-Learning in Neural Networks: A Survey: https://arxiv.org/abs/2004.05439
[53]A Comprehensive Overview and Survey of Recent Advances in Meta-Learning: https://arxiv.org/abs/2004.11149
[54]Baby steps towards few-shot learning with multiple semantics: https://arxiv.org/abs/1906.01905
[55]Meta-Learning: A Survey: https://arxiv.org/abs/1810.03548
[56]A Perspective View And Survey Of Meta-learning: https://www.researchgate.net/publication/2375370_A_Perspective_View_And_Survey_Of_Meta-Learning
[57]A Survey on Transfer Learning: http://202.120.39.19:40222/wp-content/uploads/2018/03/A-Survey-on-Transfer-Learning.pdf