Neural Network Introduction: Foreign Literature Translation and Original Text for a University Graduation Thesis
Graduation Project (Thesis)
Foreign Literature Translation
Chinese title of the literature: Introduction to Neural Networks
English title of the literature: Neural Network Introduction
Source of the literature:
Publication date of the literature:
Department:
Major:
Class:
Name:
Student ID:
Supervisor:
Date of translation: 2017.02.14

Foreign Literature Translation
Note: Excerpted from the introduction of Neural Network Introduction.
History
The history of artificial neural networks is filled with colorful, creative individuals from many different fields, many of whom struggled for decades to develop concepts that we now take for granted. This history has been documented by various authors. One particularly interesting book is Neurocomputing: Foundations of Research by James
Anderson and Edward Rosenfeld. They have collected and edited a set of some 43 papers of special historical interest. Each paper is preceded by an introduction that puts the paper in historical perspective.
Histories of some of the main neural network contributors are included at the
beginning of various chapters throughout this text and will not be repeated here. However, it seems appropriate to give a brief overview, a sample of the major developments.
At least two ingredients are necessary for the advancement of a technology: concept and implementation. First, one must have a concept, a way of thinking about a topic, some view of it that gives clarity not there before. This may involve a simple idea, or it may be more specific and include a mathematical description. To illustrate this point, consider the history of the heart. It was thought to be, at various times, the center of the soul or a source of heat. In the 17th century medical practitioners finally began to view the heart as a pump, and they designed experiments to study its pumping action. These experiments revolutionized our view of the circulatory system. Without the pump concept, an understanding of the heart was out of grasp.
Concepts and their accompanying mathematics are not sufficient for a technology to mature unless there is some way to implement the system. For instance, the mathematics necessary for the reconstruction of images from computer-aided tomography (CAT) scans was known many years before the availability of high-speed computers and efficient algorithms finally made it practical to implement a useful CAT system.
The history of neural networks has progressed through both conceptual innovations and implementation developments. These advancements, however, seem to have occurred in fits and starts rather than by steady evolution.
Some of the background work for the field of neural networks occurred in the late 19th and early 20th centuries. This consisted primarily of interdisciplinary work in
physics, psychology and neurophysiology by such scientists as Hermann von Helmholtz, Ernst Mach and Ivan Pavlov. This early work emphasized general theories of learning, vision, conditioning, etc., and did not include specific mathematical models of neuron operation.
The modern view of neural networks began in the 1940s with the work of Warren McCulloch and Walter Pitts [McPi43], who showed that networks of artificial neurons could, in principle, compute any arithmetic or logical function. Their work is often acknowledged as the origin of the
neural network field.
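To make that claim concrete, a McCulloch-Pitts unit simply compares a weighted sum of binary inputs with a threshold. The Python sketch below is illustrative only (the weights and thresholds are chosen by hand, and inhibition is modeled as a negative weight rather than the absolute inhibition of the original paper); it shows how such units realize elementary logic gates, from which any logical function can be composed.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Hand-picked weights and thresholds realizing basic gates.
AND = lambda a, b: mp_neuron((a, b), (1, 1), threshold=2)
OR  = lambda a, b: mp_neuron((a, b), (1, 1), threshold=1)
NOT = lambda a: mp_neuron((a,), (-1,), threshold=0)  # inhibitory weight

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))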
McCulloch and Pitts were followed by Donald Hebb [Hebb49], who proposed that classical conditioning (as discovered by Pavlov) is present because of the properties of individual neurons. He proposed a mechanism for learning in biological neurons.
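Hebb's postulate is often paraphrased as "cells that fire together wire together": a connection strengthens in proportion to correlated activity on its two sides. The toy Python sketch below is an illustration only; the discrete update form delta_w = eta * input * output and the learning rate are modern textbook conventions, not taken from Hebb's book.

def hebb_update(weights, inputs, output, eta=0.1):
    """Strengthen each weight in proportion to correlated activity."""
    return [w + eta * x * output for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
# Repeatedly pairing input pattern [1, 0] with an active output
# strengthens only the first connection, a toy analogue of conditioning.
for _ in range(5):
    weights = hebb_update(weights, [1, 0], output=1)
print(weights)  # approximately [0.5, 0.0]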
The first practical application of artificial neural networks came in the late 1950s, with the invention of the perceptron network and associated learning rule by Frank Rosenblatt [Rose58]. Rosenblatt and his colleagues built a perceptron network and demonstrated its ability to perform pattern recognition. This early success generated a great deal of interest in neural network research. Unfortunately, it was later shown that the basic perceptron network could solve only a limited class of problems. (See Chapter 4 for more on Rosenblatt and the perceptron learning rule.)
At about the same time, Bernard Widrow and Ted Hoff [WiHo60] introduced a new learning algorithm and used it to train adaptive linear neural networks, which were
similar in structure and capability to Rosenblatt's perceptron. The Widrow-Hoff learning rule is still in use today. (See Chapter 10 for more on Widrow-Hoff learning.)
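For illustration, both rules reduce to one-line weight updates. In the Python sketch below (the data, learning rate, and zero-bias setup are invented for the example), the perceptron rule corrects the weights only when the hard-limit output is wrong, while the Widrow-Hoff (LMS) rule takes a small gradient step against the continuous error on every presentation.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
t = (X @ np.array([1.5, -1.0]) > 0).astype(float)  # separable targets

def perceptron_step(w, x, target):
    y = float(w @ x > 0)               # hard-limit output
    return w + (target - y) * x        # update only on a mistake

def lms_step(w, x, target, eta=0.05):
    y = w @ x                          # linear (ADALINE-style) output
    return w + eta * (target - y) * x  # gradient step on squared error

w_p, w_l = np.zeros(2), np.zeros(2)
for _ in range(20):                    # a few passes over the data
    for x, target in zip(X, t):
        w_p = perceptron_step(w_p, x, target)
        w_l = lms_step(w_l, x, target)

print("perceptron accuracy:", np.mean((X @ w_p > 0) == t))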
Unfortunately, both Rosenblatt's and Widrow's networks suffered from the same inherent limitations, which were widely publicized in a book by Marvin Minsky and Seymour Papert [MiPa69]. Rosenblatt and Widrow were
aware of these limitations and proposed new networks that would overcome them.
However, they were not able to successfully modify their learning algorithms to train the more complex networks.
Many people, influenced by Minsky and Papert, believed that further research on neural networks was a dead end. This, combined with the fact that there were no powerful digital computers on which to experiment,
caused many researchers to leave the field. For a decade neural network research was largely suspended. Some important work, however, did continue during the 1970s. In 1972 Teuvo Kohonen [Koho72] and James Anderson [Ande72] independently developed new neural networks that could act as memories. Stephen Grossberg
[Gros76] was also very active during this period in the investigation of self-organizing networks.
Interest in neural networks had faltered during the late 1960s because of the lack of new ideas and powerful computers with which to experiment. During the 1980s both of these impediments were overcome, and research
in neural networks increased dramatically. New personal computers and
workstations, which rapidly grew in capability, became widely available. In addition, important new concepts were introduced.
Two new concepts were most responsible for the rebirth of neural networks. The first was the use of statistical mechanics to explain the operation of a certain class of recurrent network, which could be used as an associative memory. This was described in a seminal paper by physicist John Hopfield [Hopf82].
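In outline, the network Hopfield analyzed stores patterns as minima of an energy function, so that iterating the dynamics from a noisy probe settles into the nearest stored pattern. The Python sketch below illustrates this associative-memory behavior; the outer-product storage rule and the sweep-style update are standard textbook simplifications, not details drawn from the paper itself.

import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # one stored +/-1 pattern

# Store with an outer product; zero the diagonal (no self-connections).
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Probe: the stored pattern with two bits flipped.
state = pattern.copy()
state[0] *= -1
state[3] *= -1

# Each unit repeatedly aligns with its local field until the state
# stops changing (a fixed point, i.e. an energy minimum).
for _ in range(10):
    prev = state.copy()
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1
    if np.array_equal(state, prev):
        break

print("recovered stored pattern:", np.array_equal(state, pattern))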
The second key development of the 1980s was the backpropagation algorithm for training multilayer perceptron networks, which was discovered independently by several different researchers. The most influential publication of the backpropagation algorithm was by David Rumelhart and James McClelland [RuMc86]. This algorithm was the
answer to the criticisms Minsky and Papert had made in the 1960s. (See Chapters 11 and 12 for a development of the backpropagation algorithm.)
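In outline, backpropagation runs the network forward, propagates the output error backward to obtain each layer's sensitivities, and then adjusts all weights by gradient descent. The compact Python sketch below trains a two-layer sigmoid network on XOR, exactly the kind of problem a single perceptron cannot solve; the network size, learning rate, and iteration count are illustrative choices, not values from the text.

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
eta = 0.5

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: sensitivities flow from the output layer back.
    d2 = (y - t) * y * (1 - y)          # output-layer sensitivity
    d1 = (d2 @ W2.T) * h * (1 - h)      # hidden-layer sensitivity
    # Gradient-descent weight updates.
    W2 -= eta * h.T @ d2
    b2 -= eta * d2.sum(axis=0)
    W1 -= eta * X.T @ d1
    b1 -= eta * d1.sum(axis=0)

print(np.round(y.ravel(), 2))  # typically approaches [0, 1, 1, 0]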
These new developments reinvigorated the field of neural networks. In the last ten years, thousands of papers have been written, and neural networks have found many applications. The field is buzzing with new theoretical and practical work. As noted below, it is not clear where all of this will lead us.
The brief historical account given above is not intended to identify all of the major contributors, but is simply to give the reader some feel for how knowledge in the neural
network field has progressed. As one might note, the progress has not always been "slow but sure"; there have been periods of dramatic progress and periods when relatively little has been accomplished.
Many of the advances in neural networks have had to do with new concepts, such as innovative architectures and training rules. Just as important has been the availability of powerful new computers on which to test these new concepts.

Well, so much for the history of neural networks to this date. The real question is,
What will happen in the next ten to twenty years? Will neural networks take a
permanent place as a mathematical/engineering tool, or will they fade away as have so
many promising technologies? At present, the answer seems to be that neural networks
will not only have their day but will have a permanent place, not as a solution to every
problem, but as a tool to be used in appropriate situations. In addition, remember that we
still know very little about how the brain works. The most important advances in neural
networks almost certainly lie in the future.
Although it is difficult to predict the future success of neural networks, the large
number and wide variety of applications of this new technology are very encouraging.
The next section describes some of these applications.
Applications
A recent newspaper article described the use of neural networks in literature
research by Aston University. It stated that the network can be taught to recognize
individual writing styles, and the researchers used it to compare works attributed to
Shakespeare and his contemporaries. A popular science television program recently
documented the use of neural networks by an Italian research institute to test the purity of
olive oil. These examples are indicative of the broad range of applications that can be
found for neural networks. The applications are expanding because neural networks are
good at solving problems, not just in engineering, science and mathematics, but in
medicine, business, finance and literature as well. Their application to a wide variety of
problems in many fields makes them very attractive. Also, faster computers and faster
algorithms have made it possible to use neural networks to solve complex industrial
problems that formerly required too much computation.
The following note and Table of Neural Network Applications are reproduced here from the Neural Network Toolbox for MATLAB with the permission of The MathWorks, Inc.
The 1988 DARPA Neural Network Study [DARP88] lists various neural network applications, beginning with the adaptive channel equalizer in about 1984. This device, which is an outstanding commercial success, is a single-neuron network used in long distance telephone systems to stabilize voice signals. The DARPA report goes on to list other commercial applications, including a small word recognizer, a process monitor, a sonar classifier and a risk analysis system.
Neural networks have been applied in many fields since the DARPA report was written. A list of some applications mentioned in the literature follows.
Aerospace
High performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors
Automotive
Automobile automatic guidance systems, warranty activity analyzers
Banking
Check and other document readers, credit application evaluators
Defense
Weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification
Electronics
Code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling
Entertainment
Animation, special effects, market forecasting