EE Times On Air is a 15-to-30-minute electronics industry news program that ASPENCORE produces for electronics professionals, built from interviews with EE Times reporters, editors, analysts, and industry experts around the world. EE Times On Air was recently named the number-one electronics technology podcast in a top-ten ranking by Feedspot, the well-known social media review site, and its expert commentary on electronics and semiconductor industry news and its high-quality audio have won it a wide professional following.
BRIAN SANTO: I'm Brian Santo, EE Times Editor in Chief, and you're listening to EE Times on Air. This is your Briefing for the week ending August 2nd.
We want the Internet of things to be smart, but being smart requires processing power – which will be lacking in millions of IoT devices. It's what we call in the business "a conundrum." But – there may be an answer! You'll hear what that is.
As we reported last week, the biggest companies in the world are beginning to compete with their own chip suppliers. The latest example is Alibaba, which just released a high-performance processor of its own design. Alibaba's move is significant for technological, financial, and political reasons. We'll look into that.
Also, you'd think that the people building autonomous vehicles are using sound design principles.
JUNKO YOSHIDA: Those with an IT background who have grown up in the "move fast and break things" culture don't necessarily do that. They tend to go for alternate approaches.
BRIAN SANTO: "Alternate approaches." You are going to want to hear the rest of this. We'll get to that in a minute.
First – the Internet of things is going to lead to a world that is smarter. We'll be installing sensors farther and farther away from data centers – along highways to make driving safer, into farm fields to monitor how our food grows, into remote areas to track weather patterns, and much, much more. Adding intelligence has always meant adding more processing, which also means drawing more power – but the vast majority of the devices we install in these remote areas – at the farthest edges of the network – will, by necessity, lack sophisticated processing capabilities and will be very low-powered. How to reconcile that?
Sally Ward-Foxton is one of our correspondents in London. She keeps on top of trends in artificial intelligence for EE Times. In a recent story, Sally wrote about a group of researchers looking into ways to distribute AI at the network edge. They call their approach to machine learning "TinyML." International correspondent Junko Yoshida caught up with Sally.
JUNKO YOSHIDA: Let's go back to the basics here. I want you to explain: what's TinyML, and what is it for?
SALLY WARD-FOXTON: So TinyML stands for Tiny Machine Learning. Not just for edge devices, but for devices at the very edge. Machine learning is already in edge devices. If you use the Facebook app on your phone, you're already running machine learning inference on your phone. So what we're talking about here is machine learning at the very edge: things like ultra-low-power sensor nodes, gadgets that use energy harvesting, or situations where there's barely any power available at all.
As far as defining TinyML goes, at a recent meeting of the TinyML group, one of the speakers, Evgeni Gousev from Qualcomm, defined TinyML as machine learning approaches that consume less than a milliwatt. In Qualcomm's experience, he said, the milliwatt really is the magic number for smartphone applications that are always on. So under a milliwatt is what we're aiming for. And there will be a whole ecosystem springing up around this, but it's really still emerging right now.
JUNKO YOSHIDA: Right. So we are talking here about how best to enable ultra-low-power machine learning, not just on smartphones, but all the way down to the sensor node. So I just want you to break it down. Is making TinyML possible a matter of a streamlined framework for training? Is it a matter of frameworks, or some sort of new technique we're talking about here? Or simply new low-power hardware that we need?
SALLY WARD-FOXTON: So there are techniques that we use today in machine learning for reducing power. We can do things like quantization, where we reduce the precision of the numbers that we use in the model to make the model more efficient.
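As a rough sketch of the quantization idea Sally mentions (the scaling scheme and the numbers below are illustrative assumptions, not anything from the TinyML group), symmetric 8-bit quantization of a weight tensor can look like this:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric linear quantization: float32 weights -> int8 plus a scale.

    A real framework would also handle per-channel scales and zero-points;
    this sketch keeps only the core idea.
    """
    scale = np.max(np.abs(weights)) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float32 values from the int8 tensor."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
# w_hat is close to w, but q needs a quarter of the memory of float32
```

The model gets smaller and the arithmetic cheaper at the cost of a little precision, which is exactly the trade-off being described.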
In the TinyML meeting, one of the Google engineers, Nat Jeffries, spoke about cascading models. So instead of running one large model, he broke it into three smaller models. Say, for speech recognition, the first model might just be deciding whether there's any sound happening. And if there is, it activates a second model, which decides whether that sound is human speech. And then that triggers the rest of the model, which is more power-hungry, and so on.
So only a small, low-energy part of the model runs until the rest is needed. And that can save lots of power.
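The cascade described above can be sketched as a chain of gates. Everything below is invented for illustration -- the stage functions are trivial stand-ins for real models:

```python
def sound_detected(frame):
    """Stage 1: cheap energy check -- is there any sound at all?"""
    energy = sum(x * x for x in frame) / len(frame)
    return energy > 0.01

def is_speech(frame):
    """Stage 2: stand-in for a small classifier -- does it look like speech?"""
    return max(frame) - min(frame) > 0.5

def run_big_model(frame):
    """Stage 3: stand-in for the power-hungry model, reached only when needed."""
    return "speech recognized"

def cascaded_inference(frame):
    # Each stage gates the next, so the expensive stage rarely runs.
    if not sound_detected(frame):
        return None              # silence: almost no work done
    if not is_speech(frame):
        return None              # sound, but not speech
    return run_big_model(frame)  # speech: pay for the full model

quiet = [0.0] * 160
talk = [0.6 if i % 2 else -0.6 for i in range(160)]
```

On mostly silent audio, almost every frame exits at stage 1, so the average power draw is dominated by the cheapest check.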
JUNKO YOSHIDA: So rather than doing everything in one shot, you are breaking the AI process into several different parts. Is that it?
SALLY WARD-FOXTON: Exactly, yeah. Kind of like when we used to talk about ultra-low-power microcontrollers and only waking up certain parts of the device as they're needed to save power.
JUNKO YOSHIDA: Yup. What about software and hardware? What sort of inventions or new developments or improvements are needed to make this ultra-low-power machine learning possible?
SALLY WARD-FOXTON: So yeah. In terms of hardware and software solutions, these are definitely still emerging. Google's working on building a version of TensorFlow for microcontrollers. There's already a version called TensorFlow Lite, which is primarily for mobile phones. They're adapting it for microcontrollers.
On the hardware side, there are several specialist companies working on ultra-low-power accelerator chips. At the TinyML group meeting, there was a presentation from GreenWaves Technologies, based in France. They've developed an eight-core accelerator that uses RISC-V. They reduced the clock speed and the core voltage to get it to run on barely any power.
JUNKO YOSHIDA: Wow! That's interesting. So in your story, you wrote that an industry discussion of how to proceed with ultra-low-power machine learning was overdue. And I couldn't agree with you more on that one. But Sally, give me your take: Where do we stand, and who in the hardware and software space is leading this effort in ultra-low-power machine learning?
SALLY WARD-FOXTON: I think there's certainly a feeling that new applications are being held back because the hardware isn't there yet and the software frameworks are not there yet. Google's really taking the lead on this one. They've clearly identified this as something important that they want to address with TensorFlow Lite. And on the hardware side, I think microcontrollers definitely have the edge at the moment. They are just totally ubiquitous.
All these ultra-low-power sensor nodes we're talking about probably have a microcontroller in them already. It's a mature technology, relatively cheap, and everybody knows how to use them. And Google is backing microcontrollers as well. So microcontrollers just have a massive advantage, really.
That's not to say that'll always be the case. Specialized hardware might make some inroads, but overall I think the microcontroller will be very difficult to unseat from its prime position.
The GreenWaves speaker, Martin Kru, said that things are moving so fast that for specialized chip companies, the danger is they end up being really good at accelerating what everyone was doing last year, which is obviously not good. So retaining a bit of flexibility for future machine learning algorithms might be the key there.
BRIAN SANTO: Last week EE Times launched a series of articles – what we call a Special Project. The series explored the various ways the biggest companies in the world are remaking the semiconductor industry. They include Amazon, Baidu, Google, Facebook, and Microsoft.
Correspondent Nitin Dahad's contribution to our Hyperscaler Special Project was about how the big internet companies are beginning to compete with their own IC vendors.
The day we published the package, as if on cue, one of the hyperscalers – Apple – bought Intel's modem business, an acquisition that will have far-reaching repercussions through the semiconductor industry. Apple used to be a big modem customer of Qualcomm's; that's now likely to change.
That same day, another hyperscaler, Alibaba, announced it had designed its own new processor. Here are Nitin and Junko Yoshida again to discuss the very many ways the new processor is significant.
JUNKO YOSHIDA: Nitin, you were part of the new Special Report that we at EE Times launched just last week, focused on the hyperscalers' impact on the semiconductor industry. Given that internet platform giants are getting into a host of vertical business segments, which by the way includes their own chip designs, how significant do you think Alibaba's move is? You know, Alibaba's move to design its own chips. Tell me your take.
NITIN DAHAD: Okay, yes, Junko. So just to recap, as we highlighted in the Special Report and in EE Times On Air last week, many of the large internet platform companies-- and these include Facebook, Amazon, Apple, Alibaba and Google-- are increasingly getting impatient with existing roadmaps and timelines from the semiconductor industry, and are going the do-it-yourself route, for all kinds of reasons.
Alibaba's move to design its own chip is part of this trend. And I think you'll probably talk about this a bit later. It's also strategic. It's also significant for China, since it addresses the country's ambition to be more self-sufficient in semiconductors as part of the Made in China 2025 initiative. So in effect, this is symbolic both for China and for RISC-V.
JUNKO YOSHIDA: Got it. Actually, as I briefly mentioned before, over this past weekend I had an opportunity to quickly catch up with Xiaoning Qi. He's a Vice President of Alibaba Group. He was previously the CEO of C-Sky, which developed its own homegrown 32-bit microprocessor for the embedded market.
So Xiaoning's team has the chops to do various chips, but what they're doing now under the umbrella of Alibaba is quite interesting to me. When I talked to him, he said, you know, Alibaba's chip group doesn't plan to sell the newly designed RISC-V chip. Rather, it intends to offer what he called "chip templates" to other companies.
So my question to you is, what is the performance of this RISC-V chip, and what are the target markets for this?
NITIN DAHAD: What he says is absolutely right. What they're going to do is sell their own chip platform and release parts of the code as open source on GitHub to stimulate related development. So really, this is an enabler of RISC-V development in China. As you say, they're not trying to sell their own chips.
And as regards performance, Alibaba claims a major breakthrough with what they call the Xuantie 910 chip, which they released last Thursday. It's said to be 40% more powerful than any other RISC-V processor to date.
JUNKO YOSHIDA: Wow.
NITIN DAHAD: Just one stat, and you can read the rest in the article, but it achieves 7.1 CoreMark/MHz at a frequency of 2.5 GHz on a 12-nanometer process node. What they're doing is targeting really high-performance applications, both in infrastructure and at the edge. So artificial intelligence, the internet of things, 5G and autonomous vehicles. And they said this specifically in the announcement. They're saying that the whole, for example, artificial-intelligence IoT market is fragmented and there's no universal chip solution. What they're trying to do is enable development through RISC-V and open source, but also get that high performance.
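For context on that stat (this is simple arithmetic on the announced numbers, nothing beyond them): CoreMark/MHz is a per-clock efficiency score, so the absolute benchmark throughput at full clock follows by multiplying by the clock rate in MHz:

```python
coremark_per_mhz = 7.1   # Xuantie 910's announced efficiency score
clock_mhz = 2500         # 2.5 GHz expressed in MHz

# Absolute throughput is the per-clock score times the clock rate.
total_coremark = coremark_per_mhz * clock_mhz
print(total_coremark)    # 17750.0 CoreMark at the full 2.5 GHz clock
```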
JUNKO YOSHIDA: Wow. That is very ambitious. But you know, I'm not surprised when I think about how strategic Alibaba's decision was back a couple of years ago, when Alibaba decided to pick up C-Sky as part of the Group. And I think from the get-go they did have the intention to get into the semiconductor business. But I think throwing RISC-V in kind of changed the game a little bit here. Especially in the context of China you briefly mentioned before.
So do you think this illustrates China making deeper inroads in RISC-V, Nitin?
NITIN DAHAD: Yes, Junko. It's definitely that. But it's beyond this as well. And to look at it in more context, China's semiconductor and consumer electronics industries really needed some kind of boost following the ongoing trade war with the US. First we had the sanctions against ZTE, and then the current ongoing saga with banning Huawei... well, more correctly, putting it on the entity list.
So more specifically related to RISC-V, this is huge news. A major player in China has developed a homegrown, high-performance, 16-core RISC-V chip that's 40% more powerful than any others to date. This hints at both a leadership position in RISC-V and-- and this is the important bit-- less reliance on access to chips and other architectures developed by, say, US and European companies.
As one analyst put it last week: With this chip, Chinese companies don't have to rely on a supplier like ARM or Intel; there's now no threat of them losing access to a key part of their design.
BRIAN SANTO: For months, Junko has been investigating autonomous vehicles and vehicle safety. This week she did an article that looked at the fundamentals – the methodologies that various automakers have developed to design safe vehicles, and to validate those designs.
It turns out that some of the new electric car startups are running thousands of miles of what they call "road testing" without ever being on the road – which is a valid approach in combination with real road testing. The problem is that nobody knows if they designed their software models correctly. There are no standards. There aren't even any common metrics.
Worse, it's all a big secret. Car companies don't share their safety design data with anyone. That means no one can check their work, and there's no way to tell if it's valid or garbage. Furthermore, car companies don't share their test data either, which means they can't benefit from each other's safety research. When did safety become a proprietary issue?
It's all shockingly disorganized, and it's no wonder that car companies keep pushing back the date when autonomous vehicles will be ready for the road. Cruise had promised to introduce its robotaxi service this year. Last week it finally acknowledged it isn't going to make its deadline. The real wonder is why ANY car company EVER promised autonomous driving in 2019. What are they thinking?
I asked Junko about that.
BRIAN SANTO: So there are the traditional car companies-- the Fords, the BMWs-- and then there are a bunch of new startups-- the Bytons and the Teslas. And what we're discovering is that the two groups operate completely differently, or very differently. The startups all come out of the electronics industry, or most of them do. And so they're fast and they're nimble and they're smarter than the traditional car companies because they're fast and nimble in their electronics. And then there's the traditional car companies that are big dinosaurs, dragging their heels on electric vehicles. And they're slow and they're DOOMED because they just can't keep up with the startups, right?
JUNKO YOSHIDA: Well, that's the simplistic view of the autonomous vehicle industry, Brian. But sure, there are tech companies which do nothing but develop AV software stacks, right? And there are car OEMs who design both traditional vehicles and autonomous vehicles. And you know, I think what we need to be aware of is that there is a lot of intermingling going on among them, through partnerships and acquisitions. Ford and Volkswagen, for example, earlier this month became equal-share investors in a startup called Argo AI, the autonomous vehicle startup based in Pittsburgh. So there! There are a lot of partnerships going on right now.
BRIAN SANTO: It's a lot of technology, it's a lot of new technology. There's electric vehicles, there's AI, there's just self-driving, there's the... It's a big, big technological set of problems and challenges that have to be settled. And it doesn't look like any one company can really take them all on. Not Tesla, not Ford.
JUNKO YOSHIDA: No, it's true. And I think we should be cognizant of this... You know, there are certain cultural differences, or a lack of institutional memory, on the part of the tech startups. They often lack the discipline of rigorous design and engineering, I think.
For example, traditional aircraft, train and automotive designers first build rigorous mathematical models and apply formal verification to validate that a system design matches the original specs.
On the other hand, those with IT backgrounds who have grown up in the famous "move fast and break things" culture don't necessarily do that. They tend to go for alternate approaches.
So listen to what Jack Weast, Intel's Vice President of Autonomous Vehicle Standards, told us in our recent interview.
JACK WEAST: The alternate approach is, "Hey, I just start writing code immediately. I don't do any formal design, don't do any design verification. I've got a pile of code, and I'm just going to test it and iterate, test, iterate, test, iterate. And then try to gather statistical evidence to convince me that the thing is safe. I've driven ten million miles. I've driven a hundred million miles without an accident. Okay, that means it's safe, right?" Well, I don't know. Because you haven't actually verified that the design is safe. What you've done is gather statistical evidence that this pile of code you've got actually seems to work. It tries to give you more confidence, but it's not a sound approach.
JUNKO YOSHIDA: So such an alternative approach is a stark departure from a traditional design process, under which you formally define the vehicle architecture and design algorithms on paper first. The important thing here is that you must formally verify them. As Intel's Weast told us... take an airplane, for example, right? When you design an airplane, you know it's going to fly-- from a physics standpoint-- because you've proven that it will fly. You know, they don't just put the airplane out there and ask: does it fly? Right? You can prove that on paper.
BRIAN SANTO: When you're talking about trains, planes, cars, you're talking about things that have a life-critical element to them, right? Versus, like, designing a Fitbit or a PC. You can reboot a PC. You can't reboot an airplane, right?
JUNKO YOSHIDA: Yeah, exactly.
BRIAN SANTO: That's kind of like the fundamental thing going on here. So we've had cars for a hundred years, but when you add autonomy, it's a different thing all of a sudden. And adding autonomy to a vehicle kind of makes it LESS safe, at least at first, right?
Can I get you to explain why that is?
JUNKO YOSHIDA: Yeah. I guess I have to break this down into two parts here. Because on one hand, ADAS, as you mentioned-- the Advanced Driver Assistance System-- is great, because a feature like automatic emergency braking can cover you, to avoid a forward crash with another vehicle, for example. I mean, that's what AEB does. On the other hand, when autonomy becomes MORE advanced-- like Level 3 cars, in which the driver can take his eyes off the road-- that's when things get complicated. You know, although the Level 3 car is designed to do MOST of the driving, the driver still needs to be prepared to intervene when called upon by the vehicle to do so.
BRIAN SANTO: Because the vehicle is going to get involved with things that it hasn't seen before. It's going to need human judgment.
JUNKO YOSHIDA: Right. If it gets confused, it asks the driver, Hey, take over now, right? But this is a real human-machine interface issue. It's huge! Because you might have been texting until that moment, or you might have been emailing somebody. And then you're suddenly told, Hey, take over! That's really unrealistic from my point of view, right? As you add more autonomy, human drivers get used to it, you know? They get bored and they can't stay alert all the time. It's human nature. And that makes driving highly automated vehicles less safe, I think.
BRIAN SANTO: Right, right. So you've got to design for that from the beginning. So we've been discussing how this all starts with design and test and verification, but that process isn't really all the way through the automotive industry when it comes to autonomy. What do you think the basic problem is?
JUNKO YOSHIDA: I believe the biggest issue of the autonomous vehicle industry, the AV industry, today comes down to one thing: lack of transparency. I'm sure Waymo is learning a ton of stuff while racking up miles and miles by testing their robocars on public roads, right? So are other AV companies, like Uber. They must all be individually looking for extreme cases that will make the automated vehicles unsafe or ineffective. If that is the case, shouldn't they be pooling that data to design test validation?
As one astute EE Times reader actually pointed out in our Comments section today: "What we don't hear from the AV crowd," he said, "is what would typically be called a requirements document." You need that requirements document to identify as many use cases and failure points as possible, then research and design features that mitigate the risk of each failure mode, right? There is no such document at this point in time.
But first things first. As Phil Koopman, the CTO of Edge Case Research, told me, at minimum, at MINIMUM, AV companies should be publishing safety metrics to demonstrate that they are operating safely before test cars hit the road.
BRIAN SANTO: Are there any basic requirements, basic metrics, that the auto industry has agreed upon?
JUNKO YOSHIDA: Not yet. None. Isn't that shocking? It's a shocker. It's a real shocker.
BRIAN SANTO: That's not encouraging.
JUNKO YOSHIDA: I know. People say that they're working on it, but not at this point in time.
BRIAN SANTO: So it kind of makes sense to me then, after hearing that, that the introduction of robotaxis and autonomy in vehicles is getting pushed back. These guys need time to deal with all this stuff.
Now on the other hand-- and I've had this argument over and over with other people-- if autonomous vehicles are going to be safer than humans-- even if it's only like 10% safer at the beginning-- shouldn't we just get those autonomous vehicles out on the road, force everybody into autonomous as soon as possible? And okay, maybe the traffic death toll goes up maybe like 10% at first. But eventually traffic deaths will get cut maybe in half or even more. Just roll with it! Just get it going already!
JUNKO YOSHIDA: Well, that's the crux of the issue, isn't it? Especially in the United States, I think asking drivers to give up driving is like asking people to give up their guns! You can't force everybody to switch to autonomous vehicles. Actually, I really hate the argument of "take the human out of the equation." You can never take the human out of the equation, right?
BRIAN SANTO: No.
JUNKO YOSHIDA: Even driverless cars need to deal with regular cars driven by human drivers, on roads where human pedestrians cross the streets, right? So you can never take the humans out of the equation.
BRIAN SANTO: Life is unpredictable. Technology is really good at predictable stuff, but the default situation of reality is, it's unpredictable.
JUNKO YOSHIDA: Exactly. So as long as there are human drivers mixed in on the roads with automated vehicles, there are going to be accidents. Period. I don't want to sound too old-fashioned, but in my opinion, if our cities are really serious about reducing fatalities, what we need is public transportation, not autonomous vehicles.
BRIAN SANTO: Onward into the past, a rundown of important events in tech history that took place on dates from the past week.
On July 29th, in 1958, President Dwight D. Eisenhower signed the National Aeronautics and Space Act. It officially created NASA.
On July 30th in 1898, the Winton Motor Carriage Company of Cleveland, Ohio, placed an advertisement in Scientific American advising readers to "Dispense with a Horse." It appears to have been the first car ad ever. The vehicle was priced at $1,000, but running it cost only about a quarter of a penny per mile – presumably cheaper than the horse.
On August 1st, in 2016, NHK started regular satellite broadcasts of 8K television. No one was selling 8K TV sets at the time; viewers had to congregate in front of public viewing stations.
Also on August 1st, this time in 1981, MTV signed on the air, the first 24-hour stereo video channel. The first song ever played on MTV was this one by The Buggles.
MUSIC CLIP: I heard you on the wireless back in '52 / Lying awake intent at tuning in on you / Oh-a-oh! / I met your children / Oh-a-ah! / What did you tell them? / Video killed the radio star / Video killed the radio star
BRIAN SANTO: That's your Weekly Briefing for the week ending August 2nd. This podcast is Produced by AspenCore Studio. It was Engineered by Taylor Marvin and Greg McRae at Coupe Studios. The Segment Producer was Kaitie Huss. The transcript of this podcast can be found on EETimes.com, complete with links to the articles we refer to. Be sure to join us next week for your August 9th Weekly Briefing on EE Times On Air. I'm Brian Santo.