
Published: 2025-03-04 05:37

Jensen Huang: "Well…"

Source: 三言Pro

三言Pro reports: In late January this year, DeepSeek's release of its R1 model shook the entire tech industry. Nvidia's stock fell 16.79% in response, erasing $590 billion in market value and setting a record in U.S. financial history.

An Nvidia spokesperson said at the time: "DeepSeek is an excellent AI advancement and a perfect example of test-time scaling."

Although Nvidia's stock has since recovered, its CEO Jensen Huang had not publicly commented on the matter.

On Thursday, Huang addressed DeepSeek for the first time in an interview, saying that investors had misread DeepSeek's advances in AI, which led to a mistaken market reaction to Nvidia's stock.

After DeepSeek drew attention for delivering high performance at low cost, investors began to question whether tech companies really need to pour massive capital into building AI infrastructure.

Huang said the market's sharp reaction stemmed from investors' misreading. Although R1's development appears to reduce dependence on compute, the AI industry still needs enormous compute to support post-training methods, which let AI models reason and make predictions after pretraining.

"From an investor's perspective, they thought the world was divided into pretraining and inference, and inference meant asking an AI a question and instantly getting an answer. I don't know who is responsible for this misreading, but clearly that view is wrong."

Huang pointed out that pretraining remains important, but post-training is "the most important part of intelligence" and "the key step in learning to solve problems."

Huang also said the worldwide enthusiasm since R1 was open-sourced has been hard to believe, calling it "an extremely exciting thing."

Key excerpts from the interview:

Jensen Huang:

What's really exciting, and you probably saw, what happened with DeepSeek.

The world's first reasoning model that's open sourced, and it is so incredibly exciting, the energy around the world as a result of R1 becoming open sourced. Incredible.


Interviewer:

Why do people think this could be a bad thing? I think it's a wonderful thing.


Jensen Huang:

Well, first of all, I think from an investor perspective, there was a mental model that the world was pretraining, and then inference. And inference was: you ask an AI a question and it instantly gives you an answer, a one-shot answer.

I don't know whose fault it is, but obviously that paradigm is wrong. The paradigm is pretraining, because we want to have a foundation; you need to have a basic level of foundational understanding of information in order to do the second part, which is post-training. So pretraining continues to be rigorous.

The second part of it, and this is actually the most important part of intelligence, is what we call post-training, but this is where you learn to solve problems. You have foundational information. You understand how vocabulary works and syntax works and grammar works, and you understand how basic mathematics works, and so you take this foundational knowledge and now have to apply it to solve problems.


So there's a whole bunch of different learning paradigms that are associated with post-training, and in this paradigm the technology has evolved tremendously in the last 5 years, and its computing needs are intensive. And so people thought, oh my gosh, pretraining is a lot less; they forgot that post-training is really quite intense.


And then now the third scaling law is: the more reasoning that you do, the more thinking that you do before you answer a question, the better the answer. And so reasoning is a fairly compute-intensive part of it. And so I think the market responded to R1 as "oh my gosh, AI is finished," you know, it dropped out of the sky, we don't need to do any computing anymore. It's exactly the opposite.

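Huang's point about test-time reasoning being compute-intensive can be made concrete with a back-of-the-envelope estimate. The sketch below is purely illustrative: it uses the common rough rule of thumb that generating one token with a dense N-parameter model costs about 2N FLOPs, and the model size and token counts are hypothetical, not figures from the interview.

```python
# Illustrative only: why a "reasoning" model multiplies inference compute.
# Rough rule of thumb: one generated token costs ~2 * N FLOPs for a dense
# N-parameter model. Model size and token counts below are hypothetical.

def inference_flops(params: float, tokens_generated: int) -> float:
    """Approximate FLOPs to generate `tokens_generated` tokens."""
    return 2.0 * params * tokens_generated

PARAMS = 70e9  # hypothetical 70B-parameter model

# One-shot answer: the model replies directly with a short response.
one_shot = inference_flops(PARAMS, tokens_generated=200)

# Reasoning model: it first "thinks" through a long chain of thought,
# then emits the same short answer.
reasoning = inference_flops(PARAMS, tokens_generated=200 + 8000)

print(f"one-shot : {one_shot:.2e} FLOPs")
print(f"reasoning: {reasoning:.2e} FLOPs")
print(f"ratio    : {reasoning / one_shot:.0f}x")
```

Under these assumptions the reasoning pass costs tens of times more compute for the same question, which is the sense in which test-time scaling increases, rather than eliminates, demand for compute.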


Editor: 石秀珍 SF183


