
Neural Network Introduction: Undergraduate Thesis Foreign Literature Translation and Original Text

  Graduation Project (Thesis)

  Foreign Literature Translation

  Chinese title of the source: 神经网络介绍 (Neural Network Introduction)

  English title of the source: Neural Network Introduction

  Source of the literature:

  Publication date:

  School (Department):

  Major:

  Class:

  Name:

  Student ID:

  Supervisor:

  Translation date: 2017.02.14

  Foreign Literature Translation

  Note: Excerpted from the introduction of Neural Network Introduction.

  History

  The history of artificial neural networks is filled with colorful, creative individuals from many different fields, many of whom struggled for decades to develop concepts that we now take for granted. This history has been documented by various authors. One particularly interesting book is Neurocomputing: Foundations of Research by John Anderson and Edward Rosenfeld. They have collected and edited a set of some 43 papers of special historical interest. Each paper is preceded by an introduction that puts the paper in historical perspective.

  Histories of some of the main neural network contributors are included at the beginning of various chapters throughout this text and will not be repeated here. However, it seems appropriate to give a brief overview, a sample of the major developments.

  At least two ingredients are necessary for the advancement of a technology: concept and implementation. First, one must have a concept, a way of thinking about a topic, some view of it that gives clarity not there before. This may involve a simple idea, or it may be more specific and include a mathematical description. To illustrate this point, consider the history of the heart. It was thought to be, at various times, the center of the soul or a source of heat. In the 17th century medical practitioners finally began to view the heart as a pump, and they designed experiments to study its pumping action. These experiments revolutionized our view of the circulatory system. Without the pump concept, an understanding of the heart was out of grasp.

  Concepts and their accompanying mathematics are not sufficient for a technology to mature unless there is some way to implement the system. For instance, the mathematics necessary for the reconstruction of images from computer-aided tomography (CAT) scans was known many years before the availability of high-speed computers and efficient algorithms finally made it practical to implement a useful CAT system.

  The history of neural networks has progressed through both conceptual innovations and implementation developments. These advancements, however, seem to have occurred in fits and starts rather than by steady evolution.

  Some of the background work for the field of neural networks occurred in the late 19th and early 20th centuries. This consisted primarily of interdisciplinary work in physics, psychology and neurophysiology by such scientists as Hermann von Helmholtz, Ernst Mach and Ivan Pavlov. This early work emphasized general theories of learning, vision, conditioning, etc., and did not include specific mathematical models of neuron operation.

  The modern view of neural networks began in the 1940s with the work of Warren McCulloch and Walter Pitts [McPi43], who showed that networks of artificial neurons could, in principle, compute any arithmetic or logical function. Their work is often acknowledged as the origin of the neural network field.
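
  To make this claim concrete, the following is a minimal sketch (an illustration added here, not part of the original text) of a McCulloch-Pitts style threshold neuron in Python; the weights and thresholds are assumptions chosen to realize the logical AND and OR functions.

def mcculloch_pitts(inputs, weights, threshold):
    # Fire (output 1) only when the weighted sum of the inputs reaches the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Logical AND: both inputs must be active to reach a threshold of 2.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # 0

# Logical OR: a single active input is enough to reach a threshold of 1.
print(mcculloch_pitts([0, 1], [1, 1], threshold=1))  # 1
print(mcculloch_pitts([0, 0], [1, 1], threshold=1))  # 0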

  McCulloch and Pitts were followed by Donald Hebb [Hebb49], who proposed that classical conditioning (as discovered by Pavlov) is present because of the properties of individual neurons. He proposed a mechanism for learning in biological neurons.
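
  Hebb's postulate is often summarized today as a simple weight update in which a connection strengthens when the neurons on both sides are active together. The sketch below is that modern formulation with an assumed learning rate, offered only as an illustration.

import numpy as np

def hebb_update(w, x, y, lr=0.1):
    # Strengthen each connection in proportion to the product of the
    # presynaptic activity x and the postsynaptic response y.
    return w + lr * y * x

w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])  # presynaptic activity pattern
y = 1.0                        # postsynaptic response
print(hebb_update(w, x, y))    # [0.1 0.  0.1]: only co-active connections grow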

  The first practical application of artificial neural networks came in the late 1950s, with the invention of the perceptron network and associated learning rule by Frank Rosenblatt [Rose58]. Rosenblatt and his colleagues built a perceptron network and demonstrated its ability to perform pattern recognition. This early success generated a great deal of interest in neural network research. Unfortunately, it was later shown that the basic perceptron network could solve only a limited class of problems. (See Chapter 4 for more on Rosenblatt and the perceptron learning rule.)
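
  In modern notation the perceptron rule adjusts the weights only on misclassified examples. The sketch below is an illustration under assumed toy data (logical AND), not Rosenblatt's original 1958 setup.

import numpy as np

def train_perceptron(X, t, epochs=10):
    # Perceptron rule: on a misclassified example, move the weights by (t - y) * x.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = 1 if np.dot(w, x) + b >= 0 else 0
            w += (target - y) * x
            b += (target - y)
    return w, b

# A linearly separable toy problem: logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, t)
print([1 if np.dot(w, x) + b >= 0 else 0 for x in X])  # [0, 0, 0, 1]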

  At about the same time, Bernard Widrow and Ted Hoff [WiHo60] introduced a new learning algorithm and used it to train adaptive linear neural networks, which were similar in structure and capability to Rosenblatt's perceptron. The Widrow-Hoff learning rule is still in use today. (See Chapter 10 for more on Widrow-Hoff learning.)
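
  The Widrow-Hoff (LMS) rule drives a linear neuron's error toward zero by moving the weights in proportion to that error. The sketch below is an illustration with an assumed learning rate and synthetic, noise-free data.

import numpy as np

def lms_step(w, x, target, lr=0.1):
    # Least-mean-squares update: w <- w + lr * (target - w.x) * x.
    error = target - np.dot(w, x)
    return w + lr * error * x

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])   # weights the neuron should discover
w = np.zeros(2)
for _ in range(500):
    x = rng.normal(size=2)
    w = lms_step(w, x, np.dot(w_true, x))
print(w)  # converges toward [2.0, -1.0]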

  Unfortunately, both Rosenblatt's and Widrow's networks suffered from the same inherent limitations, which were widely publicized in a book by Marvin Minsky and Seymour Papert [MiPa69]. Rosenblatt and Widrow were aware of these limitations and proposed new networks that would overcome them. However, they were not able to successfully modify their learning algorithms to train the more complex networks.

  Many people, influenced by Minsky and Papert, believed that further research on neural networks was a dead end. This, combined with the fact that there were no powerful digital computers on which to experiment, caused many researchers to leave the field. For a decade neural network research was largely suspended.

  Some important work, however, did continue during the 1970s. In 1972 Teuvo Kohonen [Koho72] and James Anderson [Ande72] independently and separately developed new neural networks that could act as memories. Stephen Grossberg [Gros76] was also very active during this period in the investigation of self-organizing networks.

  Interest in neural networks had faltered during the late 1960s because of the lack of new ideas and powerful computers with which to experiment. During the 1980s both of these impediments were overcome, and research in neural networks increased dramatically. New personal computers and workstations, which rapidly grew in capability, became widely available. In addition, important new concepts were introduced.

  Two new concepts were most responsible for the rebirth of neural networks. The first was the use of statistical mechanics to explain the operation of a certain class of recurrent network, which could be used as an associative memory. This was described in a seminal paper by physicist John Hopfield [Hopf82].
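
  The associative-memory idea can be sketched as follows (an added illustration with assumed bipolar patterns, Hebbian outer-product storage and synchronous updates; not Hopfield's original formulation): the network stores patterns in its weights and, when started from a corrupted pattern, settles back to the nearest stored one.

import numpy as np

def store(patterns):
    # Store patterns as a sum of outer products, with the diagonal zeroed.
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=5):
    # Repeatedly update every unit to the sign of its weighted input.
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1.0, -1.0)
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]], dtype=float)
W = store(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1], dtype=float)  # first pattern, one bit flipped
print(recall(W, noisy))  # recovers [ 1. -1.  1. -1.  1. -1.]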

  The second key development of the 1980s was the backpropagation algorithm for training multilayer perceptron networks, which was discovered independently by several different researchers. The most influential publication of the backpropagation algorithm was by David Rumelhart and James McClelland [RuMc86]. This algorithm was the answer to the criticisms Minsky and Papert had made in the 1960s. (See Chapters 11 and 12 for a development of the backpropagation algorithm.)
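
  The core of the algorithm is to propagate the output error backward through the layers and take a gradient step on every weight. The sketch below is an added illustration; the 2-4-1 architecture, XOR data, learning rate and iteration count are all assumptions for a toy demonstration.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: beyond a single layer
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # hidden layer of 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # single output unit

for _ in range(10000):
    H = sigmoid(X @ W1 + b1)        # forward pass, hidden layer
    Y = sigmoid(H @ W2 + b2)        # forward pass, output layer
    dY = (Y - T) * Y * (1 - Y)      # error signal at the output
    dH = (dY @ W2.T) * H * (1 - H)  # error propagated back to the hidden layer
    W2 -= H.T @ dY                  # gradient steps on both weight layers
    b2 -= dY.sum(axis=0)
    W1 -= X.T @ dH
    b1 -= dH.sum(axis=0)

print(Y.round(2))  # typically settles near the XOR targets [0, 1, 1, 0]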

  These new developments reinvigorated the field of neural networks. In the last ten years, thousands of papers have been written, and neural networks have found many applications. The field is buzzing with new theoretical and practical work. As noted below, it is not clear where all of this will lead us.

  The brief historical account given above is not intended to identify all of the major contributors, but is simply to give the reader some feel for how knowledge in the neural network field has progressed. As one might note, the progress has not always been slow but sure. There have been periods of dramatic progress and periods when relatively little has been accomplished.

  Many of the advances in neural networks have had to do with new concepts, such as innovative architectures and training. Just as important has been the availability of powerful new computers on which to test these new concepts.

  Well, so much for the history of neural networks to this date. The real question is, what will happen in the next ten to twenty years? Will neural networks take a permanent place as a mathematical/engineering tool, or will they fade away as have so many promising technologies? At present, the answer seems to be that neural networks will not only have their day but will have a permanent place, not as a solution to every problem, but as a tool to be used in appropriate situations. In addition, remember that we still know very little about how the brain works. The most important advances in neural networks almost certainly lie in the future.

  Although it is difficult to predict the future success of neural networks, the large number and wide variety of applications of this new technology are very encouraging. The next section describes some of these applications.

  Applications

  A recent newspaper article described the use of neural networks in literature research by Aston University. It stated that the network can be taught to recognize individual writing styles, and the researchers used it to compare works attributed to Shakespeare and his contemporaries. A popular science television program recently documented the use of neural networks by an Italian research institute to test the purity of olive oil. These examples are indicative of the broad range of applications that can be found for neural networks.

  The applications are expanding because neural networks are good at solving problems, not just in engineering, science and mathematics, but in medicine, business, finance and literature as well. Their application to a wide variety of problems in many fields makes them very attractive. Also, faster computers and faster algorithms have made it possible to use neural networks to solve complex industrial problems that formerly required too much computation.

  The following note and Table of Neural Network Applications are reproduced here from the Neural Network Toolbox for MATLAB with the permission of The MathWorks, Inc.

  The 1988 DARPA Neural Network Study [DARP88] lists various neural network applications, beginning with the adaptive channel equalizer in about 1984. This device, which is an outstanding commercial success, is a single-neuron network used in long-distance telephone systems to stabilize voice signals. The DARPA report goes on to list other commercial applications, including a small word recognizer, a process monitor, a sonar classifier and a risk analysis system.

  Neural networks have been applied in many fields since the DARPA report was written. A list of some applications mentioned in the literature follows.

  Aerospace

  High performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors

  Automotive

  Automobile automatic guidance systems, warranty activity analyzers

  Banking

  Check and other document readers, credit application evaluators

  Defense

  Weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification

  Electronics

  Code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling

  Entertainment

  Animation, special effects, market forecasting
