第八十期杂志——《匹敌人脑的计算机》

A COMPUTER TO RIVAL THE BRAIN
    匹敌人脑的计算机

    凯利•克兰西

    February 15, 2017
    2017年2月15日

Artificial intelligence has achieved much of its recent success by mimicking biology. Now it must go further.
    ILLUSTRATION BY GORAN FACTORY

    人工智能目前取得的成绩大多归功于对生物的模仿,如今它必须要更上一层楼了。
    戈兰工厂 绘


More than two hundred years ago, a French weaver named Joseph Jacquard invented a mechanism that greatly simplified textile production. His design replaced the lowly draw boy—the young apprentice who meticulously chose which threads to feed into the loom to create a particular pattern—with a series of paper punch cards, which had holes dictating the lay of each stitch. The device was so successful that it was repurposed in the first interfaces between humans and computers; for much of the twentieth century, programmers laid out their code like weavers, using a lattice of punched holes. The cards themselves were fussy and fragile. Ethereal information was at the mercy of its paper substrate, coded in a language only experts could understand. But successive computer interfaces became more natural, more flexible. Immutable program instructions were softened to “If x, then y. When a, try b.” Now, long after Jacquard’s invention, we simply ask Amazon’s Echo to start a pot of coffee, or Apple’s Siri to find the closest car wash. In order to make our interactions with machines more natural, we’ve learned to model them after ourselves.

两百多年前,一位名叫约瑟夫·雅卡尔的法国织工发明了一种极大地简化了纺织生产的装置。他的设计用一连串纸质穿孔卡片取代了地位低微的提线学徒——这些年轻学徒曾为了织出特定的图样,精心挑选要穿入织布机的纺线——卡片上的孔决定每一针的走向。这台装置大获成功,随后被改用于最早的人机交互界面;在二十世纪的大部分时间里,程序员像织工一样,用孔洞网格来排布他们的代码。这些卡片本身既娇贵又易碎,无形的信息完全受制于承载它的纸质载体,而且是用只有专家才懂的语言编写的。但后来的计算机界面变得越来越自然、越来越灵活,一成不变的程序指令也软化成了“如果发生x,那么执行y。当发生a时,尝试b。”在雅卡尔的发明问世很久之后的今天,我们只需开口,就能让亚马逊的Echo智能音箱煮一壶咖啡,或者让苹果的Siri语音助手找到最近的洗车点。为了使人机交互更加自然,我们学会了以自身为范本来塑造机器。


Early in the history of artificial intelligence, researchers came up against what is referred to as Moravec’s paradox: tasks that seem laborious to us (arithmetic, for example) are easy for a computer, whereas those that seem easy to us (like picking out a friend’s voice in a noisy bar) have been the hardest for A.I. to master. It is not profoundly challenging to design a computer that can beat a human at a rule-based game like chess; a logical machine does logic well. But engineers have yet to build a robot that can hopscotch. The Austrian roboticist Hans Moravec theorized that this might have something to do with evolution. Since higher reasoning has only recently evolved—perhaps within the last hundred thousand years—it hasn’t had time to become optimized in humans the way that locomotion or vision has. The things we do best are largely unconscious, coded in circuits so ancient that their calculations don’t percolate up to our experience. But because logic was the first form of biological reasoning that we could perceive, our thinking machines were, by necessity, logic-based.

在人工智能(以下简称A.I.)发展历史的早期,研究者们遇到了所谓的莫拉维克悖论:那些对我们来说很费劲的任务(比如算术)对计算机来说轻而易举,然而那些对我们来说很简单的事(比如在嘈杂的酒吧里分辨出朋友的声音),却是A.I.最难掌握的。设计一台能在国际象棋这类基于规则的游戏中打败人类的计算机并不算特别难;逻辑机器自然擅长逻辑。但是,工程师们至今还造不出一个会玩跳房子的机器人。奥地利机器人专家汉斯·莫拉维克推测,这也许和进化有关。由于高级推理能力是最近才进化出来的——也许就在过去十万年之内——它还来不及像运动能力或视觉那样在人类身上得到充分优化。我们做得最好的事情大多是无意识的,它们编码在极其古老的神经回路中,其计算根本不会上升到我们的意识经验层面。但因为逻辑是我们能够察觉到的第一种生物推理形式,我们的思考机器也就必然建立在逻辑之上。


Computers are often likened to brains, but they work in a manner foreign to biology. The computing architecture still in use today was first described by the mathematician John von Neumann and his colleagues in 1945. A modern laptop is conceptually identical to the punch-card behemoths of the past, although engineers have traded paper for a purely electric stream of on-off signals. In a von Neumann machine, all data-crunching happens in the central processing unit (C.P.U.). Program instructions, then data, flow from the computer’s memory to its C.P.U. in an orderly series of zeroes and ones, much like a stack of punch cards shuffling through. Although multicore computers allow some processing to occur in parallel, their efficacy is limited: software engineers must painstakingly choreograph these streams of information to avoid catastrophic system errors. In the brain, by contrast, data run simultaneously through billions of parallel processors—that is, our neurons. Like computers, they communicate in a binary language of electrical spikes. The difference is that each neuron is pre-programmed, whether through genetic patterning or learned associations, to share its computations directly with the proper targets. Processing unfolds organically, without the need for a C.P.U.

计算机经常被拿来与人脑相比较,但它们的运作方式与生物体截然不同。今天人们仍在使用的计算体系结构,是由数学家约翰·冯·诺伊曼和他的同事们于1945年首次描述的。现代笔记本电脑在概念上与过去那些使用穿孔卡片的庞然大物并无不同,只不过工程师们用纯粹的开关电信号流取代了纸张。在冯·诺伊曼机中,所有数据处理都发生在中央处理器(以下简称C.P.U.)里。程序指令和随后的数据,以一连串有序的0和1的形式,从计算机内存流向C.P.U.,就像一叠依次穿过机器的穿孔卡片。尽管多核计算机允许一部分处理并行进行,但其效能是有限的:软件工程师必须费力地精心编排这些信息流,以避免灾难性的系统错误。相反,在人脑中,数据同时在数十亿个并行处理器——也就是我们的神经元——中运行。和计算机一样,神经元也用电脉冲构成的二进制语言进行交流。区别在于,每个神经元都经过预先编程,无论是通过遗传模式还是后天习得的联结,都会把自己的计算结果直接传递给合适的目标。处理过程自然而然地展开,不需要C.P.U.。
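
To make the “orderly series of zeroes and ones” concrete, here is a minimal sketch in Python of a von Neumann-style machine: a single processing loop fetches instructions one at a time from a flat memory and does all the work itself. The tiny instruction set (LOAD, ADD, STORE, HALT) is invented purely for illustration.

# Minimal sketch of a von Neumann-style machine: one processing loop pulls
# instructions, one word at a time, from memory and executes them itself.
# The instruction set below is invented for illustration only.

program = [
    ("LOAD", 7),     # put the constant 7 into the accumulator
    ("ADD", 5),      # add 5 to the accumulator
    ("STORE", 0),    # write the accumulator back to data cell 0
    ("HALT", None),
]
data = [0] * 4       # data cells (in a real machine, program and data share one memory)

accumulator = 0
program_counter = 0

while True:
    opcode, operand = program[program_counter]   # fetch: strictly one instruction at a time
    program_counter += 1
    if opcode == "LOAD":                         # decode and execute in the single core
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "STORE":
        data[operand] = accumulator
    elif opcode == "HALT":
        break

print(data[0])   # -> 12; every step had to pass through the one processing loop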


Consider vision. We sense the world with an array of millions of photoreceptors, each of which plays a small and specific role in representing an image with neural activity. These cells shuttle the representation through a hierarchy of brain areas, progressively forming the conscious percept of sight. A von Neumann computer would have to stream that same amount of data, plus the instructions to process it, through a single logical core. And though a computer’s circuits move data much faster than the brain’s synapses, they consume a large amount of energy in doing so. In 1990, the legendary Caltech engineer Carver Mead correctly predicted that our present-day computers would use ten million times more energy for a single instruction than the brain uses for a synaptic activation.

以视觉为例。我们通过数百万个光感受器组成的阵列来感知世界,每一个光感受器都扮演着一个小而特定的角色,用神经活动来表征图像。这些细胞把这一表征逐级传递给一系列层级化的大脑区域,逐渐形成有意识的视觉感知。而冯·诺伊曼机必须让同样规模的数据,连同处理这些数据的指令,全部流经一个单一的逻辑核心。尽管计算机电路传输数据的速度比大脑突触快得多,但这样做要消耗大量能量。1990年,加州理工学院的传奇工程师卡弗·米德准确地预言,我们如今的计算机执行单条指令所消耗的能量,将是大脑激活一次突触所需能量的一千万倍。


A.I. owes much of its recent success to biological metaphors. Deep learning, for example, which underlies technologies from Siri to Google Translate, uses several interconnected processing layers, modelled after the neuronal strata that compose the cortex. Still, given that even the most advanced neural networks are run on von Neumann machines, they are computationally intensive and energy-greedy. Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.) Likewise, several years ago, Google’s brain simulator taught itself to identify cats in YouTube videos using sixteen thousand core processors and all the wattage that came with them. Now companies want to endow our personal devices with intelligence, to let our smartphones recognize our family members, anticipate our moods, and suggest adjustments to our medications. To do so, A.I. will need to move beyond algorithms run on supercomputers and become embodied in silico.

A.I.近来取得的成绩大多要归功于对生物的借鉴。例如,作为从Siri语音助手到谷歌翻译等技术之基础的深度学习,就使用了许多相互连接的处理层,它是仿照组成大脑皮层的神经元层设计的。即便如此,由于就连最先进的神经网络也仍然运行在冯·诺伊曼机上,它们计算量巨大且非常耗能。去年三月,由谷歌旗下DeepMind公司研发的AlphaGo(阿尔法狗)打败了围棋世界冠军,但它此前已经用一个包含三千万步棋的数据库进行过训练,运行起来大约要消耗一百万瓦特。(相比之下,它对手的大脑要节能得多,耗电量大约只有它的五万分之一,仅为二十瓦特。)同样地,几年前,谷歌的大脑模拟器自己学会了识别YouTube视频里的猫,但这用到了一万六千个处理器核心,以及与之相伴的全部电力。如今,各家公司想让我们的个人设备具备智能,使我们的智能手机能够识别家庭成员、预测我们的心情,并为调整用药提供建议。要做到这些,A.I.需要超越在超级计算机上运行的算法,真正在硅片中获得实体。
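
As a hedged illustration of the “several interconnected processing layers” described above, the following Python/NumPy sketch stacks a few fully connected layers with random, untrained weights; the layer sizes are arbitrary, and this is not the actual architecture behind Siri or Google Translate. The last lines simply restate the energy figures quoted in the paragraph (roughly one million watts for AlphaGo versus about twenty watts for its opponent's brain).

import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_out):
    # One fully connected layer with a simple nonlinearity. Weights are random,
    # i.e. untrained; this only illustrates the layered structure.
    n_in = inputs.shape[-1]
    weights = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_out))
    return np.tanh(inputs @ weights)

x = rng.normal(size=(1, 64))    # a stand-in input vector
h1 = layer(x, 32)               # first processing layer
h2 = layer(h1, 16)              # second layer, fed only by the first layer's output
scores = layer(h2, 10)          # output layer, e.g. ten class scores
print(scores.shape)             # (1, 10)

# Energy figures quoted in the article:
alphago_watts = 1_000_000
brain_watts = 20
print(alphago_watts / brain_watts)   # -> 50000.0, the "fifty thousand times" gap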


Building on decades of work by Mead and others, engineers have been racing to roll out the first so-called neuromorphic chips for consumer use. Kwabena Boahen’s research group at Stanford unveiled its low-power Neurogrid chip in 2014, and Qualcomm has announced that its brain-inspired Zeroth processor will reach the market in 2018. Another model, I.B.M.’s TrueNorth, only recently moved from digital prototype to usable product. It consists of a million silicon neurons, tiny cores that communicate directly with one another using synapse-like connections. Here, the medium is the message; each neuron is both program and processing unit. The sensory data that the chip receives, rather than marching along single file, fan out through its synaptic networks. TrueNorth ultimately arrives at a decision—say, classifying the emotional timbre of its user’s voice—by group vote, as a choir of individual singers might strike on a harmony. I.B.M. claims the chip is useful in real-time pattern recognition, as for speech processing or image classification. But the biggest advance is its energy efficiency: it uses twenty milliwatts per square centimetre, more than a thousand times less than a traditional chip.

在米德等人数十年工作的基础上,工程师们正争相推出第一批面向消费者的所谓神经形态芯片。夸贝纳·波尔汉在斯坦福大学的研究团队于2014年发布了低功耗的Neurogrid芯片,高通公司也已宣布其受大脑启发的Zeroth处理器将于2018年上市。另一款由国际商业机器公司(以下简称I.B.M.)开发的TrueNorth芯片,直到最近才从数字样机变成可用的产品。它由一百万个硅神经元组成,这些微小的核心通过类似突触的连接直接相互通信。在这里,媒介即讯息;每个神经元既是程序也是处理单元。芯片接收到的感官数据不是排成单列依次行进,而是散布到它的突触网络之中。TrueNorth最终通过“集体投票”作出决策——比如判定用户声音中的情感色彩属于哪一类——就像合唱团里的各个歌手共同唱出一个和声。I.B.M.称这款芯片在实时模式识别(例如语音处理或图像分类)方面很有用。但它最大的进步在于能效:每平方厘米只消耗二十毫瓦,比传统芯片低一千倍以上。
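
The “group vote” can be caricatured in a few lines of Python: many simple units each look at a noisy copy of the input and either spike or stay silent, and the majority decides. This is only a toy with invented thresholds and noise levels, not I.B.M.'s actual TrueNorth programming model.

import random

def classify_by_group_vote(signal, n_units=1000, threshold=0.5):
    # Toy "choir" classifier: each unit sees its own noisy copy of the input and
    # spikes if that copy crosses a fixed threshold; the majority of spikes decides.
    spikes = 0
    for _ in range(n_units):
        noisy_view = signal + random.gauss(0, 0.2)   # each unit's imperfect view
        if noisy_view > threshold:
            spikes += 1
    return "positive" if spikes > n_units / 2 else "negative"

print(classify_by_group_vote(0.6))   # most units spike        -> "positive"
print(classify_by_group_vote(0.4))   # most units stay silent  -> "negative"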


TrueNorth was also designed to emulate some of the brain’s messiness. For the past several billion years, life has had to learn to make do with its own imperfect corporeity—fuzzy eyesight, limited hearing, and so on. Despite sensing the world through a scrim of unpredictable molecular interactions, though, organisms tend to get around with remarkable accuracy. What seems like a bug may be, mathematically speaking, a feature. Randomness turns out to add a great deal of computational power to probabilistic algorithms like the ones underlying modern A.I.; input noise can shake up their output, preventing them from getting stuck on bad solutions. TrueNorth creates its own sort of fuzziness by including a random-number generator with each neuron. I.B.M. is developing another chip that achieves the same goal more elegantly, using a material that changes phase from amorphous to crystalline with a certain degree of randomness. And this is the crux of the conceptual shift that is taking place in computing: increasingly, engineers will exploit the computational properties of matter rather than guarding against its inherent fallibility, as they had to do with the punch cards. Matter will not execute a computation; it will be the computation.

TrueNorth在设计上还模仿了大脑的某些“混乱”。在过去的数十亿年里,生命不得不学会将就自己并不完美的躯体——模糊的视力、有限的听力等等。尽管生物体是隔着一层难以预测的分子相互作用来感知世界的,它们的行动却准确得惊人。从数学角度看,那些看起来像是缺陷的东西,也许恰恰是一种特性。事实证明,随机性能为概率算法——例如支撑现代A.I.的那些算法——增添大量计算能力;输入端的噪声可以扰动它们的输出,防止它们卡在糟糕的解上。TrueNorth通过给每个神经元配置一个随机数生成器,制造出了自己的这种“模糊性”。I.B.M.正在研发另一种能更优雅地实现同一目标的芯片,它所用的材料会带着一定程度的随机性在非晶态与结晶态之间发生相变。而这正是计算领域正在发生的概念转变的核心:工程师们将越来越多地利用物质本身的计算特性,而不是像穿孔卡片时代那样去防范它固有的不可靠。物质将不再只是执行计算,它本身就是计算。
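
To see why a little noise keeps a search from “getting stuck on bad solutions”, here is a small Python sketch; the landscape and probabilities are invented for illustration. A purely greedy neighbour search stalls on a minor peak, while the same search that occasionally accepts a worse move almost always wanders across the dip to the best cell.

import random

# A one-dimensional "landscape" of solution scores: a minor peak sits at index 2
# and the best solution at index 8 (values invented for illustration).
landscape = [1, 3, 5, 2, 1, 2, 4, 7, 9, 6]

def search(noise_prob, start=0, steps=1000, seed=0):
    # Greedy neighbour search; with probability `noise_prob` a step is accepted
    # even if it scores worse, which is the "input noise" of the analogy.
    rng = random.Random(seed)
    pos, best = start, start
    for _ in range(steps):
        neighbour = max(0, min(len(landscape) - 1, pos + rng.choice([-1, 1])))
        if landscape[neighbour] > landscape[pos] or rng.random() < noise_prob:
            pos = neighbour
        if landscape[pos] > landscape[best]:
            best = pos
    return best, landscape[best]

print(search(noise_prob=0.0))   # sticks to the minor peak: (2, 5)
print(search(noise_prob=0.3))   # noise lets it cross the dip; best found is almost always (8, 9)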


Given the utter lack of consensus on how the brain actually works, these designs are more or less cartoons of what neuroscientists think might be happening. But, even if they don’t reflect absolute biological reality, the recent success of A.I. suggests that they are useful cartoons. Indeed, they may eventually confirm or challenge our understanding of the brain; as the physicist Richard Feynman put it, “What I cannot create, I do not understand.” Or perhaps their power lies in their simplicity. Eve Marder, a neuroscientist at Brandeis University, has argued that the more details we include in our models, the more wrong we may make them—such is the complexity of neurobiology and the depth of our ignorance. Strict fidelity may not be necessary in designing practical A.I. TrueNorth, for instance, can’t learn on its own. The chip has to be optimized for a particular task using A.I. run on a conventional computer. So, though TrueNorth maintains one part of the biological metaphor, it does so at the cost of another. And perhaps there’s nothing wrong with that. Who is to say that every feature of the brain is worth mimicking? Our own human algorithms are not necessarily ideal. As Darwin demonstrated, evolution is not an unremitting race toward perfection. It is a haphazard wander around good enough.

由于对大脑究竟如何运作完全没有共识,这些设计或多或少只是神经科学家心目中大脑可能运作方式的卡通简化版。但即便它们反映的并非绝对的生物学事实,A.I.近年来的成功也表明这些卡通版本很有用。事实上,它们最终可能会验证或挑战我们对大脑的认识;正如物理学家理查德·费曼所说:“我不能创造的,我就不理解。”又或者,它们的力量正在于其简单性。布兰迪斯大学的神经科学家伊夫·马德尔认为,我们在模型中加入的细节越多,模型可能错得越离谱——神经生物学就是如此复杂,而我们又是如此无知。设计实用的A.I.也许并不需要严格忠实于生物学。例如,TrueNorth不能自主学习,这块芯片必须先借助在传统计算机上运行的A.I.,针对某项特定任务进行优化。所以,尽管TrueNorth保留了生物类比的一部分,却是以牺牲另一部分为代价的。这也许并没有什么不对。谁能说大脑的每一个特征都值得模仿呢?我们人类自身的算法也不一定是理想的。正如达尔文所证明的那样,进化并不是一场不懈地奔向完美的竞赛,而是在“足够好”附近的随意漫步。
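
The workflow described here (optimize on a conventional computer first, then run the fixed result on the chip) can be sketched as follows. Everything below is a hypothetical stand-in written in Python: an ordinary gradient-descent fit plays the role of the conventional computer, and a function that only applies frozen weights plays the role of a chip that cannot learn on its own.

import numpy as np

rng = np.random.default_rng(42)

# "Conventional computer" phase: fit a tiny linear classifier offline.
# Toy data: points above the line y = x are labelled 1, points below are labelled 0.
X = rng.normal(size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(float)

w = np.zeros(2)
b = 0.0
for _ in range(500):                      # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# "Chip" phase: the weights are frozen; the device only runs inference.
FROZEN_W, FROZEN_B = w.copy(), b          # no further learning happens past this point

def chip_inference(point):
    # Stand-in for a fixed-function device: applies frozen weights, never updates them.
    return int(point @ FROZEN_W + FROZEN_B > 0)

print(chip_inference(np.array([0.0, 1.0])))   # above the line -> 1
print(chip_inference(np.array([1.0, 0.0])))   # below the line -> 0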


Kelly Clancy is a neuroscientist based in Basel, Switzerland.
    凯利•克兰西是一位神经科学家,现居瑞士巴塞尔。

    文章来源:





    翻译 By Viola
    校对 By Lynette
    终校 By 熋
    树屋字幕组-文翻组
    翻译仅供学习交流,严禁用于商业用途



