Database Expert C. Mohan: The Past and Present of Artificial Intelligence


2021-04-06 13:15

Author: C. Mohan

This article is about 1,700 words; suggested reading time: 5 minutes.
C. Mohan, Distinguished Visiting Professor at Tsinghua University's School of Software, walks you through the past and present of artificial intelligence.



The first question I'd like to discuss is: what is artificial intelligence (AI)? There is more to AI than machine learning or deep learning. AI is the broader concept: machine learning is one part of it, neural networks are a sub-area of machine learning, and deep learning is a still smaller area within neural networks.

What is AI? One view is that AI is code, plus possibly special-purpose hardware. Where will AI be found? Wherever there is software. From DARPA's viewpoint, AI is a programmed ability to process information. On a notional intelligence scale, an ideal AI shows high capability in perceiving (rich, complex and subtle information) and reasoning (to plan and to decide), and a relatively high ability to learn within an environment, but no ability to abstract.

AI has been hyped in recent years; it is really an umbrella term for a set of related technologies. In the development of AI, some significant people and places cannot be ignored. Ten ACM Turing Award winners in total have been honored for AI work, from Marvin Minsky in 1969, for his central role in creating, shaping, promoting and advancing the field of AI, to Yoshua Bengio, Geoffrey Hinton and Yann LeCun in 2018, who were jointly awarded for conceptual and engineering breakthroughs that made deep neural networks a critical component of computing.

To date there have been three waves of AI. AI was born at the 1956 Dartmouth Summer Workshop. The first decade of AI focused mainly on heuristic search for problem solving, syntactic computational linguistics, and checker-playing programs. From 1965 to 1990, knowledge-based systems (expert systems, whose goal was to match or even exceed human experts) began to boom. In this first wave, engineers simply created sets of rules to represent knowledge in well-defined domains: the structure of the knowledge was defined by humans, and the specifics were explored by the machine. AI 1.0 focused on the use of knowledge in problem solving. An intelligent system must learn from experience, use vast amounts of knowledge, tolerate error and ambiguity, respond in real time, and communicate with humans in natural language. Search compensates for lack of knowledge (e.g. puzzles), and knowledge compensates for lack of search (e.g. F = ma). Traditional sources of knowledge include formal knowledge learnt from schools, books and manuals, and informal knowledge such as heuristics obtained from people.
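The first-wave idea that search can compensate for a lack of knowledge can be sketched as a greedy best-first search over a toy puzzle. The puzzle (reach a target number via +1 or *2 moves), the heuristic, and all function names below are illustrative assumptions, not from the talk.

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: always expand the state with the
    lowest heuristic estimate h(state) first."""
    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None  # goal unreachable

# Toy puzzle: reach 10 from 1, moves are "+1" or "*2".
path = best_first_search(
    1, 10,
    neighbors=lambda n: [n + 1, n * 2],
    h=lambda n: abs(10 - n),  # heuristic: numeric distance to goal
)
```

With almost no domain knowledge encoded, the heuristic alone steers the search toward the goal, which is exactly the trade-off the talk describes.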

Major breakthroughs in AI in the 20th century were enabled by brute force, heuristics, human coding of rules and knowledge, and simple machine learning (pattern recognition), such as the world-champion chess machine IBM Deep Blue.

Coming to AI 2.0: people in India, Africa and South America, increasingly resembling China in economic status and lifestyle, are pursuing greater power and influence in the world. This wave has focused on areas such as facial recognition and AI chips. AI has succeeded far beyond past expectations; this success rests on the availability of vast amounts of data for training machine learning algorithms, and on special-purpose hardware. The impact of AI is pervasive across application scenarios such as self-driving cars, industrial and household robots, and voice-based assistants.

Expert automation and augmentation software has emerged. Compared with AI 1.0, AI 2.0 extracts and uses knowledge from new, data-driven knowledge sources. Data-driven science has become the fourth paradigm alongside experiment, theory and simulation. It has also created the next generation of AI systems, data-driven AI systems, which can mine knowledge from previously unavailable data sources. What's more, automatic discovery of new knowledge has become reality through machine learning and deep learning frameworks. In this wave, engineers create statistical models for specific problem domains and train them on big data. AI 2.0 achieves a high level of perceiving and learning, though comparatively low abstracting and reasoning. These systems are known for excellent nuanced classification and prediction capabilities, but have no contextual capability and minimal reasoning ability.

In the next part, I will introduce the concept of artificial neural networks. It started in the 1950s with a simplified three-layer structure: an input layer, hidden layers that process hierarchical features, and an output layer. For character or object recognition, such a network starts by decomposing the input into different feature maps that perform local analysis over the whole input space; then, through repeated convolutions and subsampling, it ends in fully-connected layers that perform global analysis. Driven by the triangle of data, algorithms and compute, deep learning has iterated rapidly; in the last few years, error rates in image recognition have even dropped below human level. Nevertheless, this kind of AI is narrow AI, which works incredibly well only on single problems: language translation, speech transcription, language processing and visual recognition. The challenges of the second wave are also obvious: a trivial targeted distortion may cause a completely different result, and internet trolls have led AI chatbots to behave offensively.
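The convolution, subsampling and fully-connected stages described above can be sketched in a few lines of NumPy. The toy input size, the random kernel and weights, and the function names are illustrative assumptions, not any specific architecture.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a single channel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def subsample(fmap, size=2):
    """2x2 max-pooling (subsampling) of a feature map."""
    h, w = fmap.shape
    return (fmap[:h - h % size, :w - w % size]
            .reshape(h // size, size, w // size, size)
            .max(axis=(1, 3)))

# input layer -> convolution (local analysis) -> subsampling
# -> fully-connected layer (global analysis)
image = np.random.rand(8, 8)              # toy "character" input
kernel = np.random.rand(3, 3)             # one learned feature detector
features = subsample(conv2d(image, kernel))
weights = np.random.rand(features.size)   # fully-connected weights
score = weights @ features.ravel()        # one output unit
```

A real network stacks many such feature maps and layers and learns the kernels and weights by gradient descent; this sketch only shows the data flow.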

Bosch AI CON 2021 focused on several topics. The first I'd like to address is AIoT product development: data-driven engineering provides a workable logic for product upgrades, following a self-reinforcing cycle around a product consisting of data collection, machine learning and then development. The second is AI models for physical products. To be honest, AI applications still face tricky challenges today, such as the large quantities of the "right" training data needed, the "curse of dimensionality" (data demand often grows exponentially with model size), and others that deserve attention: huge expense, missing explainability, and under-utilization of existing domain knowledge.
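The "curse of dimensionality" mentioned above can be made concrete with a back-of-the-envelope calculation; the resolution of 10 points per axis is an illustrative assumption.

```python
# Samples needed to cover an input space on a fixed grid grow
# exponentially with the number of input dimensions.
def grid_points(dims, points_per_axis=10):
    return points_per_axis ** dims

# 1-D: 10 samples; 3-D: 1,000 samples; 10-D: 10 billion samples.
demand = {d: grid_points(d) for d in (1, 3, 10)}
```

This is why purely data-driven models become data-hungry as the number of input variables grows.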

Hybrid models provide a better solution than purely data-driven models. They combine physics-based models, which are data-efficient, causal, explainable, validated and generalizable, with data-driven models; the two paradigms complement each other in powerful ways.
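A minimal sketch of the hybrid idea, assuming a free-fall physics model plus a least-squares correction that learns the residual the physics misses; all names, numbers and the synthetic "measured" data are illustrative assumptions.

```python
import numpy as np

def physics_model(t, g=9.81):
    """Known physics: free-fall distance, ignoring drag."""
    return 0.5 * g * t ** 2

# Synthetic measurements: the true system has an unmodeled linear term.
t = np.linspace(0.0, 2.0, 50)
measured = physics_model(t) + 0.3 * t

# Data-driven part: fit the residual with least squares on features [t, 1].
residual = measured - physics_model(t)
X = np.stack([t, np.ones_like(t)], axis=1)
coef, *_ = np.linalg.lstsq(X, residual, rcond=None)

def hybrid_model(t_new):
    """Physics prediction plus the learned data-driven correction."""
    return physics_model(t_new) + coef[0] * t_new + coef[1]
```

The physics part keeps the model causal and generalizable; the data-driven part only has to learn the small discrepancy, so it needs far less data than a model learned from scratch.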

(Figure: the principal hybrid-model architectures; image not reproduced here.)


The next topic is the AIoT target state. For example, the Bosch AIoT cycle provides a process from data flow to the user, combining hybrid AI algorithms, a value stream, and products/services. There are certainly things that are special about enterprise AI: legal and compliance, the working model, and integration. First, data is confidential and subject to local regulations and contractual agreements; the principle of data minimality applies, and data cannot be openly shared and reused. Second, processes and data are often fixed or difficult to change, and data is moving; moreover, data might not be accessible, and it is even harder to build software for other companies to run their processes. Finally, integration: this is not a separate "green field" task; integration into existing processes and applications is key to reaping the benefits.

The third wave of AI will be contextual adaptation: systems will construct contextual explanatory models for classes of real-world phenomena. By then, AI should be equally skilled at perceiving, learning, abstracting and reasoning. Broad AI will reach a satisfying degree of explainability, security and ethics, learn more from small data, and rest on new infrastructure. In the AI 3.0 era, compute requirements for large AI training jobs are doubling every 3.5 months; this trend will be unsustainable without significant hardware and software innovation. What's more, performance is expected to scale by 2.5x per year through 2025, and performance-per-watt gains will be secured using approximate-computing principles applied to 1) digital AI cores with reduced precision, 2) analog AI cores, and 3) analog AI cores plus optimized materials.
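The pace implied by a 3.5-month doubling time is worth spelling out; this is simple arithmetic on the figure quoted above, not an additional claim.

```python
# Doubling every 3.5 months means 12 / 3.5 doublings per year,
# i.e. an annual growth factor of 2 ** (12 / 3.5), roughly 10x.
doublings_per_year = 12 / 3.5
growth_per_year = 2 ** doublings_per_year
```

Roughly a tenfold increase every year is why the talk calls the trend unsustainable without hardware and software innovation.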

IBM invests heavily in AI research. It has launched a research collaboration center to drive next-generation AI hardware, and new AI hardware pursues reduced-precision scaling: from 2012 to 2021, training at ever smaller precision was rapidly adopted and commercialized.

Back to the question of AI for the enterprise: we have gone through a process from narrow to broad AI, which includes advancing core AI and building trust in AI through fairness, explainability, robustness and transparency; then we operationalize AI at scale, moving from trusting to scaling AI by managing, operating and automating its lifecycle.

The key research directions for AI for business: for trusted AI, four questions need to be answered well: Is it fair? Is it easy to understand? Is it secure? Is it accountable? The last direction is AI for AI: how to use AI to operationalize AI at scale, achieving data automation, data-science automation, and automation of deployment and operations.

In the final part, the 8 biggest AI trends of 2020, according to The Next Web, are as follows:

1) AI will make healthcare more accurate and less costly
2) Explainability and trust will receive greater attention
3) AI will become less data-hungry
4) Improved accuracy and efficiency of neural networks
5) Automated AI development
6) AI in manufacturing
7) The geographical implications of AI
8) AI in drug discovery

In summary, today's content about AI can be divided into three parts: advancing core AI; trusting AI through fairness, explainability, robustness and transparency; and scaling AI by managing, operating and automating its lifecycle. The path runs from narrow AI (predominantly focused on a single task or domain, with superhuman accuracy and speed for certain tasks), to broad AI (handling multiple tasks and learning from less data), to general AI (seamless cross-domain operation and broad autonomy).

Editor: Yu Tengkai