Exascale – Why So Hard? | Exascale Day
>> Announcer: From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise. >> Welcome everyone to theCUBE's celebration of Exascale Day. Ben Bennett is here, he's an HPC strategist and evangelist at Hewlett Packard Enterprise. Ben, welcome, good to see you. >> Good to see you too, Dave. >> Hey, well, let's evangelize exascale a little bit. What's exciting you in regards to the coming of exascale computing? >> Well, there's a couple of things, really. For me, historically, I've worked in supercomputing for many years, and I have seen the coming of several milestones. Actually, I'm old enough to remember gigaflops coming through, and teraflops, and petaflops. Exascale has been harder than many of us anticipated many years ago. The sheer amount of technology that has been required to deliver machines of this performance has been utterly staggering. But the exascale era brings with it real solutions. It gives us opportunities to do things that we've not been able to do before. If you look at some of the most powerful computers around today, they've really helped with the pandemic, COVID, but we're still orders of magnitude away from being able to design drugs in situ, test them in memory, and release them to the public. We still have lots and lots of lab work to do, and exascale machines are going to help with that. We are going to be able to do more, which ultimately will aid humanity. They used to be called the grand challenges, and I still think of them as that, challenges for scientists that exascale-class machines will be able to help with. But also, I'm a realist, in that in 10, 20, 30 years' time, I should be able to look back at this, hopefully, touch wood, look at much faster machines, and say, "Do you remember the days when we thought exascale was fast?"
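For readers keeping score, each of the milestones Ben lists is a thousandfold step in sustained floating-point rate. A back-of-the-envelope sketch in Python; the 10^21-operation workload is an assumed figure chosen purely to illustrate the scale, not a number from the interview:

```python
# Rough orders of magnitude for the milestones mentioned above.
FLOPS = {
    "gigascale": 1e9,    # 10^9 floating-point operations per second
    "terascale": 1e12,
    "petascale": 1e15,
    "exascale":  1e18,
}

# Hypothetical workload: 10^21 operations (say, a large ensemble of
# molecular-dynamics runs); the figure is illustrative, not from the interview.
work = 1e21

for name, rate in FLOPS.items():
    seconds = work / rate
    print(f"{name:10s}: {seconds / 86400:12.1f} days")

# A job that would occupy a petascale system for roughly 11.6 days finishes in
# about 17 minutes at a sustained exaflop, which is the "orders of magnitude"
# gap Ben refers to between today's machines and in-silico drug design.
```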
>> Yeah. Well, you mentioned the pandemic, and the President of the United States was tweeting this morning that he was upset that the FDA in the U.S. is not allowing the vaccine to proceed as fast as he'd like; in fact, the FDA is loosening some of its restrictions. I wonder, is high performance computing in part helping with the simulations, and maybe the predictions, because a lot of this is about probabilities and concerns? Is that work going on today, or are you saying that exascale is actually what we need to accelerate it? What's the role of HPC that you see today in regards to solving for that vaccine, and any other pandemic-related drugs? >> So, first a disclaimer: I am not a geneticist, I am not a biochemist. My son is; he tries to explain it to me and it tends to go in one ear and out the other. I merely build the machines he uses, so we're sort of even on that front. If you had read the press, there were a lot of people offering up systems and computational resources for scientists, and a lot of the work that has been done understanding the mechanisms of COVID-19 has been uncovered by the use of very, very powerful computers. Would exascale have helped? Well, clearly, the faster the computers, the more simulations we can do. I think if you look back historically, no vaccine has come to fruition as fast, ever, under modern rules. Okay, admittedly the first vaccine was Edward Jenner sitting quietly, smearing a few people and hoping it worked; I think we're slightly beyond that. The FDA has rules and regulations for a reason, and you don't have to go back far in our history to understand the nature of drugs that work for 99% of the population. I think widely available exascale and much faster computers are going to assist with that. Imagine having a genetic map of very large numbers of people on the Earth and being able to test your drug against that breadth of people, and you know that 99% of the time it works fine. Under FDA rules you could never sell it, you could never do that. But if you're confident in your testing, if you can demonstrate that you can keep the drug away from the one percent for whom it doesn't work, bingo, you now have a drug for the majority of people. So many drugs that have so many benefits are not released, and drugs are expensive, because they fail at the last few moments. The more testing you can do, the more testing in memory, the better it's going to be for everybody. Personally, are we at a point where we still need human trials? Yes. Do we still need due diligence? Yes. We're not there yet. Exascale is coming, but it's not there yet. >> Yeah, well, to your point, the faster the computer, the more simulations, and the higher the chance that we're actually going to get it right and maybe compress that time to market. But talk about some of the problems that you're working on, and the challenges, for example with the UK government, and maybe others that you can share with us. Help us understand what you're hoping to accomplish. >> So, within the United Kingdom there was a report published for UK Research and Innovation, I think it's UKRI, it might be EPSRC; in any case, it's the body of people responsible for funding science, and there was a science case done for exascale. I'm not a scientist, but a lot of the work in that documentation said that a number of things that can be done today aren't good enough, that we need to look further out, we need to look at machines that will do much more. There's been a program funded called ASiMoV, and this is a sort of commercial problem that the UK government is working on with Rolls-Royce. They're trying to research how you build a full engine model, and by full engine model I mean one that takes into account both the flow of gases through it and how those flows of gases and temperatures change the physical dynamics of the engine, and of course as you change the physical dynamics of the engine, you change the flow, so you need a closely coupled model. As air travel comes more and more under the microscope, we need to make sure that the air travel we do is as efficient as possible, and currently there aren't supercomputers that have the performance. One of the things I'm going to be doing as part of this sequence of conversations is having an in-depth, and it will be very detailed, conversation with Professor Mark Parsons from the Edinburgh Parallel Computing Centre. He's the director there and the Dean of Research at Edinburgh University, and I'm going to be talking to him about the ASiMoV program, and about Mark's experience as the person responsible for looking at exascale within the UK, to try and determine what are the sort of science problems that we can solve as we move into the exascale era, and what that means for humanity. What are the benefits for humans?
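The "closely coupled model" Ben describes means the aerodynamic and structural/thermal solutions have to be iterated to a joint fixed point, because each changes the boundary conditions of the other. A toy sketch of that coupling loop in Python; the one-line "solvers" are placeholders standing in for real CFD and finite-element codes, not anything from the ASiMoV program itself:

```python
# Toy illustration of tight aero-thermal/structural coupling: each solve
# changes the inputs of the other, so they are iterated to a joint fixed
# point rather than being run once in sequence.

def solve_gas_flow(geometry):
    # Placeholder for a CFD solve: returns pressure/temperature loads,
    # reduced to a single scalar here for illustration.
    return 0.8 * geometry + 1.0

def solve_structure(loads):
    # Placeholder for a thermal/structural solve: returns the deformed
    # geometry produced by those loads.
    return 0.5 * loads

def coupled_engine_model(geometry=1.0, tol=1e-9, max_iters=100):
    for i in range(max_iters):
        loads = solve_gas_flow(geometry)        # flow depends on geometry
        new_geometry = solve_structure(loads)   # geometry depends on flow
        if abs(new_geometry - geometry) < tol:  # converged: the fields agree
            return new_geometry, i
        geometry = new_geometry
    raise RuntimeError("coupling did not converge")

print(coupled_engine_model())  # converges quickly because the toy maps are contractive
```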
>> Yeah, and that's what I wanted to ask you about, the Rolls-Royce example that you gave. If I understood it, it wasn't so much safety as it was, you said, efficiency, and so that's fuel consumption. >> It's partly fuel consumption; it is of course safety as well. There is a very specific test called an extreme event, or the fan blade off. What happens is they build an engine and they put it in a cowling, and then they run the engine at full speed, and then they literally explode it; they fire off a little explosive and they fire a fan blade off, to make sure that it doesn't go through the cowling. The reason they do that is there has been, in the past, a failure of a fan blade, and it came through the cowling and came into the aircraft, depressurized the aircraft. I think somebody was killed as a result of that. The aircraft went down; I don't think it was a total loss, but one death is one too many. As a result, you now have to build a jet engine, instrument it, balance the blades, put an explosive in it, and then blow the fan blade off. Now, you only really want to do that once. It's like car crash testing: you want to build a model of the car, you want to demonstrate with the dummy that it is safe, and you don't want to have to build lots of cars and keep going back to the drawing board. So you do it in computer memory. We're okay with cars; we have the computational power to resolve to the level needed to determine whether or not the accident would hurt a human being. There's still a long way to go to make them more efficient, with new materials and how you can get away with lighter structures, but we haven't got there with aircraft yet. I mean, we can build a simulation, and we can do that, and we can be pretty sure we're right, but we still need to build an engine, which costs in excess of 10 million dollars, and blow the fan blade off it. >> So, okay, you're talking about some pretty complex simulations, obviously. What are some of the barriers and the breakthroughs that are required to do some of these things that you're talking about, that exascale is going to enable? I mean, presumably there are obviously technical barriers, but maybe you can shed some light on that. >> Well, some of them are very prosaic. For example, power. Exascale machines consume a lot of power, so you have to be able to design systems that consume less power, and that goes into making sure they're cooled efficiently. If you use water, can you reuse the water? If you take a laptop and sit it on your lap and you type away for four hours, you'll notice it gets quite warm. An exascale computer is going to generate a lot more heat, several megawatts actually, and it sounds prosaic, but it's actually very important to people. You've got to make sure that the systems can be cooled and that we can power them. So there's that. Another issue is the software, the software models. How do you take a software model and distribute the data over many tens of thousands of nodes? How do you do that efficiently? If you look at gigaflop machines, they had hundreds of nodes, and each node had effectively a processor, a core, a thread of application. We're looking at many, many tens of thousands of nodes, cores, parallel threads running. How do you make that efficient? So is the software ready? I think the majority of people will tell you that it's the software that's the problem, not the hardware. Of course, my friends in hardware would tell you, ah, software is easy, it's the hardware that's the problem. I think for the universities and the users, the challenge is going to be the software. I think it's going to have to evolve. You just want to look at your machine and be able to dump work onto it easily, and we're not there yet, not by a long stretch of the imagination. Consequently, one of the things that we're doing is that we have a lot of centers of excellence. We will provide, well, I hate to say the word provide, we sell supercomputers, and once the machine has gone in, we work very closely with the establishments to create centers of excellence to get the best out of the machines and to improve the software. If a machine's expensive, you want to get the most out of it that you can. You don't just want to run a synthetic benchmark and say, look, I'm the fastest supercomputer on the planet. Your users who want access to it are the people that really decide how useful it is, and the work they get out of it.
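One standard answer to distributing a model's data over tens of thousands of nodes is domain decomposition: each rank owns a slab of the problem and exchanges only its boundary (halo) values with its neighbours every step. A minimal sketch using mpi4py and NumPy; the 1-D decomposition, array size, and stencil are illustrative choices, not any particular production code:

```python
# Run with, e.g.: mpirun -n 4 python halo_exchange.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank owns a local slab of a 1-D field, plus one ghost cell per side.
local_n = 1_000_000
field = np.full(local_n + 2, float(rank))

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):
    # Exchange halo values with neighbours; the interior work stays local,
    # so communication volume is tiny compared with computation.
    comm.Sendrecv(sendbuf=field[1:2], dest=left, recvbuf=field[-1:], source=right)
    comm.Sendrecv(sendbuf=field[-2:-1], dest=right, recvbuf=field[:1], source=left)
    # Simple stencil update on the interior points.
    field[1:-1] = 0.5 * field[1:-1] + 0.25 * (field[:-2] + field[2:])

print(f"rank {rank}: mean = {field[1:-1].mean():.3f}")
```

The point of the pattern is that adding nodes adds interior work faster than it adds boundary traffic, which is what keeps the efficiency up as the node count grows.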
>> Yeah, the economics is definitely a factor. You could have the fastest supercomputer on the planet, but if you can't afford to use it, what good is it? You mentioned power, and the flip side of that coin is of course cooling. You can reduce the power consumption, but how challenging is it to cool these systems? >> It's an engineering problem. We have data centers in Iceland, where it doesn't get too warm. We have a big air-cooled data center in the United Kingdom where it never gets above 30 degrees centigrade. So if you put in water at 40 degrees centigrade and it comes out at 50 degrees centigrade, you can cool it by just pumping it around in the air, just putting it outside the building, because the building never gets above 30, so it'll easily drop back to 40 to enable you to put it back into the machine. There are other ways to do it: you can take the heat and use it commercially. There's a lovely story where they take the hot water out of a supercomputer in the Nordics and then they pump it into a brewery to keep the mash tuns warm. That's the sort of engineering I can get behind. >> Yeah, indeed, that's a great application.
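Ben's free-cooling numbers can be sanity-checked with the basic heat balance: heat load equals flow rate times specific heat times temperature rise. A short sketch in Python; the five-megawatt machine power is an assumption standing in for his "several megawatts":

```python
# How much water does it take to carry away the heat of an exascale machine?
# Steady state: Q = m_dot * c_p * delta_T

heat_load_w = 5e6          # assumed machine power, 5 MW ("several megawatts")
c_p = 4186.0               # specific heat of water, J/(kg*K)
delta_t = 50.0 - 40.0      # water enters at 40 C, leaves at 50 C

m_dot = heat_load_w / (c_p * delta_t)   # kg/s of water required
print(f"required flow: {m_dot:.0f} kg/s, roughly {m_dot * 3.6:.0f} m^3 per hour")

# About 120 kg/s. Because the outside air in the UK data centre never exceeds
# roughly 30 C, the 50 C return water can be cooled back to 40 C against
# ambient air alone, with no chillers, before re-entering the machine, or it
# can be diverted to reuse such as the Nordic brewery example.
```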
>> Talk a little bit more about your conversation with Professor Parsons; maybe we could double-click into that. What are some of the things that you're going to probe there, what are you hoping to learn? >> So, I think some of the things that are going to be interesting to uncover are just the breadth of science that could take advantage of exascale. There are many things going on that people hear about. People are interested in the Nobel Prize; they might have no idea what it means, but the Nobel Prize for physics was awarded for research into black holes, fascinating and truly insightful physics. Could it benefit from exascale? I have no idea, I really don't. One of the most profound pieces of knowledge in the last few hundred years has been the theory of relativity. An Austrian patent clerk wrote E equals m c squared on the back of an envelope, and voila. I don't believe any form of exascale computing would have helped him get there any faster. That's maybe flippant, but I think the point is that there are areas in terms of weather prediction, climate prediction, drug discovery, materials knowledge, engineering, problems that are going to be unlocked with the use of exascale-class systems. We are going to be able to provide more tools, more insight, and that's the purpose of computing. It's not the data that comes out, it's the insight we get from it. >> Yeah, I often say data is plentiful, insights are not. Ben, you're a bit of an industry historian, so I've got to ask you: you mentioned gigaflops before, which I think goes back to the early 1970s. >> The 80s, actually. >> Is it the 80s? Okay. Well, the history of computing goes back even before that. I thought Seymour Cray was kind of the father of supercomputing, but perhaps you have another point of view as to the origination of high performance computing. >> Oh yes, this is one for all my colleagues globally. Arguably, he says, getting ready to be attacked from all sides, arguably the parallel work and the research done during the war by Alan Turing is the father of high performance computing. I think one of the problems we have is that so much of that work was classified, so much of that work was kept away from commercial people, that commercial computing evolved without that knowledge. In a previous life I have done some work for the British Science Museum, and I have had the great pleasure of walking through the British Science Museum archives to look at how computing has evolved, from things like the Pascaline from Blaise Pascal, Napier's bones, Babbage's machines, all the way through the analog machines, and what Konrad Zuse was doing on a desktop. I think what's important, and it doesn't matter where you are, is that it is the problem that drives the technology. It's having the problems that require the human race to look at solutions, whether these were kick-started by the terrible problem that the US has with its nuclear stockpile stewardship: now you've invented them, how do you keep them safe? Originally done through the ASCI program, that's driven a lot of computational advances. Ultimately it's our quest for knowledge that drives these machines, and I think as long as we are interested, as long as we want to find things out, there will always be advances in computing to meet that need. >> Yeah, and you know, it's been a great conversation, you're a brilliant guest, I love this talk. And of course, as the saying goes, success has many fathers, so there's probably a few Polish mathematicians that would stake a claim in the original Enigma project as well. >> I think they drove the algorithm. I think the point is that Tommy Flowers is the person who took the algorithms and the work that was being done and actually had to build the poor machine. He's the guy that actually had to sit there and go, how do I turn this into a machine that does that? People always remember Turing; very few people remember Tommy Flowers, who actually had to turn the great work into a working machine. >> Yeah, supercomputing is a team sport. Well, Ben, it's great to have you on, thanks so much for your perspectives. Best of luck with your conversation with Professor Parsons, we'll be looking forward to that, and thanks so much for coming on theCUBE. >> A complete pleasure, thank you. >> And thank you everybody for watching. This is Dave Vellante, we're celebrating Exascale Day, you're watching theCUBE. (music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
mark parsons | PERSON | 0.99+ |
ben bennett | PERSON | 0.99+ |
today | DATE | 0.99+ |
hundreds of nodes | QUANTITY | 0.99+ |
dave vellante | PERSON | 0.98+ |
pandemic | EVENT | 0.98+ |
united kingdom | LOCATION | 0.98+ |
seymour cray | PERSON | 0.98+ |
one ear | QUANTITY | 0.98+ |
first vaccine | QUANTITY | 0.98+ |
mark | PERSON | 0.98+ |
four hours | QUANTITY | 0.97+ |
tens of thousands of nodes | QUANTITY | 0.97+ |
blaise pascal | PERSON | 0.97+ |
one percent | QUANTITY | 0.97+ |
50 degrees centigrade | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
40 | QUANTITY | 0.97+ |
nobel prize | TITLE | 0.97+ |
rolls royce | ORGANIZATION | 0.96+ |
each node | QUANTITY | 0.96+ |
early 1970s | DATE | 0.96+ |
hpc | ORGANIZATION | 0.96+ |
10 million dollars | QUANTITY | 0.95+ |
uk government | ORGANIZATION | 0.95+ |
fda | ORGANIZATION | 0.95+ |
united states | ORGANIZATION | 0.94+ |
both | QUANTITY | 0.94+ |
this morning | DATE | 0.94+ |
40 degrees centigrade | QUANTITY | 0.94+ |
one death | QUANTITY | 0.93+ |
hewlett packard | ORGANIZATION | 0.93+ |
earth | LOCATION | 0.93+ |
exascale | TITLE | 0.93+ |
above 30 | QUANTITY | 0.93+ |
99 of the population | QUANTITY | 0.92+ |
Why So Hard? | TITLE | 0.92+ |
uk research institute | ORGANIZATION | 0.92+ |
lots of cars | QUANTITY | 0.92+ |
exascale day | EVENT | 0.9+ |
conrad zeus | PERSON | 0.9+ |
first | QUANTITY | 0.9+ |
edinburgh university | ORGANIZATION | 0.89+ |
many years ago | DATE | 0.89+ |
asimov | TITLE | 0.88+ |
Exascale Day | EVENT | 0.88+ |
uk | LOCATION | 0.87+ |
professor | PERSON | 0.87+ |
parsons | PERSON | 0.86+ |
99 of | QUANTITY | 0.86+ |
above 30 degrees centigrade | QUANTITY | 0.85+ |
edward jenner | PERSON | 0.85+ |
alan turing | PERSON | 0.83+ |
things | QUANTITY | 0.83+ |
80s | DATE | 0.82+ |
epsrc | ORGANIZATION | 0.82+ |
last few hundred years | DATE | 0.82+ |
Exascale | TITLE | 0.8+ |
a lot of people | QUANTITY | 0.79+ |
covid19 | OTHER | 0.78+ |
hewlett-packard | ORGANIZATION | 0.77+ |
british | OTHER | 0.76+ |
tommy | PERSON | 0.75+ |
edinburgh parallel computing center | ORGANIZATION | 0.74+ |
one of | QUANTITY | 0.73+ |
nordics | LOCATION | 0.71+ |
so many drugs | QUANTITY | 0.7+ |
many | QUANTITY | 0.69+ |
many years | QUANTITY | 0.68+ |
lots and lots of lab work | QUANTITY | 0.68+ |
large numbers of people | QUANTITY | 0.68+ |
hpc | EVENT | 0.68+ |
people | QUANTITY | 0.68+ |
Naveen Rao, Intel | AWS re:Invent 2019
>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back to the Sands Convention Center in Las Vegas everybody, you're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, I'm here with my cohost Justin Warren, this is day one of our coverage of AWS re:Invent 2019, Naveen Rao here, he's the corporate vice president and general manager of artificial intelligence, AI products group at Intel, good to see you again, thanks for coming to theCUBE. >> Thanks for having me. >> Dave: You're very welcome, so what's going on with Intel and AI, give us the big picture. >> Yeah, I mean actually the very big picture is I think the world of computing is really shifting. The purpose of what a computer is made for is actually shifting, and I think from its very conception, from Alan Turing, the machine was really meant to be something that recapitulated intelligence, and we took sort of a divergent path where we built applications for productivity, but now we're actually coming back to that original intent, and I think that hits everything that Intel does, because we're a computing company, we supply computing to the world, so everything we do is actually impacted by AI, and will be in service of building better AI platforms, for intelligence at the edge, intelligence in the cloud, and everything in between. >> It's really come full circle, I mean, when I first started this industry, AI was the big hot topic, and really, Intel's ascendancy was around personal productivity, but now we're seeing machines replacing cognitive functions for humans, that has implications for society. But there's a whole new set of workloads that are emerging, and that's driving, presumably, different requirements, so what do you see as the sort of infrastructure requirements for those new workloads, what's Intel's point of view on that? >> Well, so maybe let's focus that on the cloud first. Any kind of machine learning algorithm typically has two phases to it, one is called training or learning, where we're really iterating over large data sets to fit model parameters. And once that's been done to a satisfaction of whatever performance metrics that are relevant to your application, it's rolled out and deployed, that phase is called inference. So these two are actually quite different in their requirements in that inference is all about the best performance per watt, how much processing can I shove into a particular time and power budget? On the training side, it's much more about what kind of flexibility do I have for exploring different types of models, and training them very very fast, because when this field kind of started taking off in 2014, 2013, typically training a model back then would take a month or so, those models now take minutes to train, and the models have grown substantially in size, so we've still kind of gone back to a couple of weeks of training time, so anything we can do to reduce that is very important. >> And why the compression, is that because of just so much data? >> It's data, the sheer amount of data, the complexity of data, and the complexity of the models. So, very broad or a rough categorization of the complexity can be the number of parameters in a model. So, back in 2013, there were, call it 10 million, 20 million parameters, which was very large for a machine learning model. 
Now they're in the billions, one or two billion is sort of the state of the art. To give you bearings on that, the human brain is about a three to 500 trillion model, so we're still pretty far away from that. So we got a long way to go. >> Yeah, so one of the things about these models is that once you've trained them, that then they do things, but understanding how they work, these are incredibly complex mathematical models, so are we at a point where we just don't understand how these machines actually work, or do we have a pretty good idea of, "No no no, when this model's trained to do this thing, "this is how it behaves"? >> Well, it really depends on what you mean by how much understanding we have, so I'll say at one extreme, we trust humans to do certain things, and we don't really understand what's happening in their brain. We trust that there's a process in place that has tested them enough. A neurosurgeon's cutting into your head, you say you know what, there's a system where that neurosurgeon probably had to go through a ton of training, be tested over and over again, and now we trust that he or she is doing the right thing. I think the same thing is happening in AI, some aspects we can bound and say, I have analytical methods on how I can measure performance. In other ways, other places, it's actually not so easy to measure the performance analytically, we have to actually do it empirically, which means we have data sets that we say, "Does it stand up to all the different tests?" One area we're seeing that in is autonomous driving. Autonomous driving, it's a bit of a black box, and the amount of situations one can incur on the road are almost limitless, so what we say is, for a 16 year old, we say "Go out and drive," and eventually you sort of learn it. Same thing is happening now for autonomous systems, we have these training data sets where we say, "Do you do the right thing in these scenarios?" And we say "Okay, we trust that you'll probably "do the right thing in the real world." >> But we know that Intel has partnered with AWS, I ran autonomous driving with their DeepRacer project, and I believe it's on Thursday is the grand final, it's been running for, I think it was announced on theCUBE last year, and there's been a whole bunch of competitions running all year, basically training models that run on this Intel chip inside a little model car that drives around a race track, so speaking of empirical testing of whether or not it works, lap times gives you a pretty good idea, so what have you learned from that experience, of having all of these people go out and learn how to use these ALM models on a real live race car and race around a track? >> I think there's several things, I mean one thing is, when you turn loose a number of developers on a competitive thing, you get really interesting results, where people find creative ways to use the tools to try to win, so I always love that process, I think competition is how you push technology forward. On the tool side, it's actually more interesting to me, is that we had to come up with something that was adequately simple, so that a large number of people could get going on it quickly. You can't have somebody who spends a year just getting the basic infrastructure to work, so we had to put that in place. And really, I think that's still an iterative process, we're still learning what we can expose as knobs, what kind of areas of innovation we allow the user to explore, and where we sort of walk it down to make it easy to use. 
So I think that's the biggest learning we get from this, is how I can deploy AI in the real world, and what's really needed from a tool chain standpoint. >> Can you talk more specifically about what you guys each bring to the table with your collaboration with AWS? >> Yeah, AWS has been a great partner. Obviously AWS has a huge ecosystem of developers, all kinds of different developers, I mean web developers are one sort of developer, database developers are another, AI developers are yet another, and we're kind of partnering together to empower that AI base. What we bring from a technological standpoint are of course the hardware, our CPUs, our AI ready now with a lot of software that we've been putting out in the open source. And then other tools like OpenVINO, which make it very easy to start using AI models on our hardware, and so we tie that in to the infrastructure that AWS is building for something like DeepRacer, and then help build a community around it, an ecosystem around it of developers. >> I want to go back to the point you were making about the black box, AI, people are concerned about that, they're concerned about explainability. Do you feel like that's a function of just the newness that we'll eventually get over, and I mean I can think of so many examples in my life where I can't really explain how I know something, but I know it, and I trust it. Do you feel like it's sort of a tempest in a teapot? >> Yeah, I think it depends on what you're talking about, if you're talking about the traceability of a financial transaction, we kind of need that maybe for legal reasons, so even for humans we do that. You got to write down everything you did, why did you do this, why'd you do that, so we actually want traceability for humans, even. In other places, I think it is really about the newness. Do I really trust this thing, I don't know what it's doing. Trust comes with use, after a while it becomes pretty straightforward, I mean I think that's probably true for a cell phone, I remember the first smartphones coming out in the early 2000s, I didn't trust how they worked, I would never do a credit card transaction on 'em, these kind of things, now it's taken for granted. I've done it a million times, and I never had any problems, right? >> It's the opposite in social media, most people. >> Maybe that's the opposite, let's not go down that path. >> I quite like Dr. Kate Darling's analogy from MIT lab, which is we already we have AI, and we're quite used to them, they're called dogs. We don't fully understand how a dog makes a decision, and yet we use 'em every day. In a collaboration with humans, so a dog, sort of replace a particular job, but then again they don't, I don't particularly want to go and sniff things all day long. So having AI systems that can actually replace some of those jobs, actually, that's kind of great. >> Exactly, and think about it like this, if we can build systems that are tireless, and we can basically give 'em more power and they keep going, that's a big win for us. And actually, the dog analogy is great, because I think, at least my eventual goal as an AI researcher is to make the interface for intelligent agents to be like a dog, to train it like a dog, reinforce it for the behaviors you want and keep pushing it in new directions that way, as opposed to having to write code that's kind of esoteric. >> Can you talk about GANs, what is GANs, what's it stand for, what does it mean? >> Generative Adversarial Networks. 
What this means is that, you can kind of think of it as, two competing sides of solving a problem. So if I'm trying to make a fake picture of you, that makes it look like you have no hair, like me, you can see a Photoshop job, and you can kind of tell, that's not so great. So, one side is trying to make the picture, and the other side is trying to guess whether it's fake or not. We have two neural networks that are kind of working against each other, one's generating stuff, and the other one's saying, is it fake or not, and then eventually you keep improving each other, this one tells that one "No, I can tell," this one goes and tries something else, this one says "No, I can still tell." Once the discerning network can't tell anymore, you've kind of built something that's really good, that's sort of the general principle here. So we basically have two things kind of fighting each other to get better and better at a particular task. >> Like deepfakes. >> I use that because it is relevant in this case, and that's kind of where it came from, is from GANs.
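For readers who want to see the two-network contest Naveen describes in code, here is a minimal GAN training loop on a toy one-dimensional distribution. It assumes PyTorch; the tiny architectures, the N(4, 1) target data, and the hyperparameters are illustrative choices, not anything from Intel or AWS:

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from N(4, 1). The generator learns to mimic it.
def real_batch(n):
    return torch.randn(n, 1) + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))    # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1),
                  nn.Sigmoid())                                      # discriminator

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # Discriminator step: learn to tell real samples from generated ones.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()        # detach: don't update G here
    d_loss = loss(D(real), torch.ones(64, 1)) + loss(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = loss(D(fake), torch.ones(64, 1))    # label the fakes as "real"
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 8)).mean().item())  # should drift toward 4
```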
>> All right, okay, and so wow, obviously relevant with 2020 coming up. I'm going to ask you, how far do you think we can take AI, two part question, how far can we take AI in the near to mid term, let's talk in our lifetimes, and how far should we take it? Maybe you can address some of those thoughts. >> So how far can we take it, well, I think we often have the sci-fi narrative out there of building killer machines and this and that, I don't know that that's actually going to happen anytime soon, for several reasons, one is, we build machines for a purpose, they don't come from an embattled evolutionary past like we do, so their motivations are a little bit different, say. So that's one piece, they're really purpose-driven. Also, building something that's as general as a human or a dog is very hard, and we're not anywhere close to that. When I talked about the trillions of parameters that a human brain has, we might be able to get close to that from an engineering standpoint, but we're not really close to making those trillions of parameters work together in such a coherent way that a human brain does, and efficiently; the human brain does that in 20 watts, to do it today would be multiple megawatts, so it's not really something that's easily found, just laying around. Now how far should we take it, I look at AI as a way to push humanity to the next level. Let me explain what that means a little bit. Simple equation I always sort of write down, is people are like "Radiologists aren't going to have a job." No no no, what it means is one radiologist plus AI equals 100 radiologists. I can take that person's capabilities and scale it almost freely to millions of other people. It basically increases the accessibility of expertise, we can scale expertise, that's a good thing. It solves problems like we have in healthcare today. All right, that's where we should be going with this. >> Well a good example would be, when, and probably part of the answer's today, when will machines make better diagnoses than doctors? I mean in some cases it probably exists today, but not broadly, but that's a good example, right? >> It is, it's a tool, though, so I look at it as more, giving a human doctor more data to make a better decision on. So, what AI really does for us is it doesn't limit the amount of data on which we can make decisions, as a human, all I can do is read so much, or hear so much, or touch so much, that's my limit of input. If I have an AI system out there listening to billions of observations, and actually presenting data in a form that I can make better decisions on, that's a win. It allows us to actually move science forward, to move accessibility of technologies forward. >> So keeping the context of that timeframe I said, someday in our lifetimes, however you want to define that, when do you think that, or do you think that driving your own car will become obsolete? >> I don't know that it'll ever be obsolete, and I'm a little bit biased on this, so I actually race cars. >> Me too, and I drive a stick, so. >> I kind of race them semi-professionally, so I don't want that to go away, but it's the same thing, we don't need to ride horses anymore, but we still do for fun, so I don't think it'll completely go away. Now, what I think will happen is that commutes will be changed, we will now use autonomous systems for that, and I think five, seven years from now, we will be using autonomy much more on prescribed routes. It won't be that it completely replaces a human driver, even in that timeframe, because it's a very hard problem to solve, in a completely general sense. So, it's going to be a kind of gentle evolution over the next 20 to 30 years. >> Do you think that AI will change the manufacturing pendulum, and perhaps some of that would swing back to, in this country, anyway, on-shore manufacturing? >> Yeah, perhaps, I was in Taiwan a couple of months ago, and we're actually seeing that already, you're seeing things that maybe were much more labor-intensive before, because of economic constraints are becoming more mechanized using AI. AI as inspection, did this machine install this thing right, so you have an inspector tool and you have an AI machine building it, it's a little bit like a GAN, you can think of, right? So this is happening already, and I think that's one of the good parts of AI, is that it takes away those harsh conditions that humans had to be in before to build devices. >> Do you think AI will eventually make large retail stores go away? >> Well, I think as long as there are humans who want immediate satisfaction, I don't know that it'll completely go away. >> Some humans enjoy shopping. >> Naveen: Some people like browsing, yeah. >> Depends how fast you need to get it. And then, my last AI question, do you think banks, traditional banks will lose control of the payment systems as a result of things like machine intelligence? >> Yeah, I do think there are going to be some significant shifts there, we're already seeing many payment companies out there automate several aspects of this, and reducing the friction of moving money. Moving money between people, moving money between different types of assets, like stocks and Bitcoins and things like that, and I think AI, it's a critical component that people don't see, because it actually allows you to make sure that first you're doing a transaction that makes sense, when I move from this currency to that one, I have some sense of what's a real number. It's much harder to defraud, and that's a critical element to making these technologies work. So you need AI to actually make that happen. >> All right, we'll give you the last word, just maybe you want to talk a little bit about what we can expect, AI futures, or anything else you'd like to share.
>> I think it's, we're at a really critical inflection point where we have something that works, basically, and we're going to scale it, scale it, scale it to bring on new capabilities. It's going to be really expensive for the next few years, but we're going to then throw more engineering at it and start bringing it down, so I start seeing this look a lot more like a brain, something where we can start having intelligence everywhere, at various levels, very low power, ubiquitous compute, and then very high power compute in the cloud, but bringing these intelligent capabilities everywhere. >> Naveen, great guest, thanks so much for coming on theCUBE. >> Thank you, thanks for having me. >> You're really welcome, all right, keep it right there everybody, we'll be back with our next guest, Dave Vellante for Justin Warren, you're watching theCUBE live from AWS re:Invent 2019. We'll be right back. (techno music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
20 watts | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2014 | DATE | 0.99+ |
10 million | QUANTITY | 0.99+ |
Naveen Rao | PERSON | 0.99+ |
Justin Warren | PERSON | 0.99+ |
20 million | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Taiwan | LOCATION | 0.99+ |
2013 | DATE | 0.99+ |
100 radiologists | QUANTITY | 0.99+ |
Alan Turing | PERSON | 0.99+ |
Naveen | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
MIT | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
billions | QUANTITY | 0.99+ |
a month | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
two part | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
one piece | QUANTITY | 0.99+ |
Thursday | DATE | 0.99+ |
Kate Darling | PERSON | 0.98+ |
early 2000s | DATE | 0.98+ |
two billion | QUANTITY | 0.98+ |
first smartphones | QUANTITY | 0.98+ |
one side | QUANTITY | 0.98+ |
Sands Convention Center | LOCATION | 0.97+ |
today | DATE | 0.97+ |
OpenVINO | TITLE | 0.97+ |
one radiologist | QUANTITY | 0.96+ |
Dr. | PERSON | 0.96+ |
16 year old | QUANTITY | 0.95+ |
two phases | QUANTITY | 0.95+ |
trillions of parameters | QUANTITY | 0.94+ |
first | QUANTITY | 0.94+ |
a million times | QUANTITY | 0.93+ |
seven years | QUANTITY | 0.93+ |
billions of observations | QUANTITY | 0.92+ |
one thing | QUANTITY | 0.92+ |
one extreme | QUANTITY | 0.91+ |
two competing sides | QUANTITY | 0.9+ |
500 trillion model | QUANTITY | 0.9+ |
a year | QUANTITY | 0.89+ |
five | QUANTITY | 0.88+ |
each | QUANTITY | 0.88+ |
One area | QUANTITY | 0.88+ |
a couple of months ago | DATE | 0.85+ |
one sort | QUANTITY | 0.84+ |
two neural | QUANTITY | 0.82+ |
GANs | ORGANIZATION | 0.79+ |
couple of weeks | QUANTITY | 0.78+ |
DeepRacer | TITLE | 0.77+ |
millions of | QUANTITY | 0.76+ |
Photoshop | TITLE | 0.72+ |
deepfakes | ORGANIZATION | 0.72+ |
next few years | DATE | 0.71+ |
year | QUANTITY | 0.67+ |
re:Invent 2019 | EVENT | 0.66+ |
three | QUANTITY | 0.64+ |
Invent 2019 | EVENT | 0.64+ |
about | QUANTITY | 0.63+ |
James Kobielus, Wikibon | The Skinny on Machine Intelligence
>> Announcer: From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. >> In the early days of big data and Hadoop, the focus was really on operational efficiency where ROI was largely centered on reduction of investment. Fast forward 10 years and you're seeing a plethora of activity around machine learning, and deep learning, and artificial intelligence, and deeper business integration as a function of machine intelligence. Welcome to this Cube conversation, The Skinny on Machine Intelligence. I'm Dave Vellante and I'm excited to have Jim Kobielus here up from the District area. Jim, great to see you. Thanks for coming into the office today. >> Thanks a lot, Dave, yes great to be here in beautiful Marlboro, Massachusetts. >> Yes, so you know Jim, when you think about all the buzz words in this big data business, I have to ask you, is this just sort of same wine, new bottle when we talk about all this AI and machine intelligence stuff? >> It's actually new wine. But of course there's various bottles and they have different vintages, and much of that wine is still quite tasty, and let me just break it out for you, the skinny on machine intelligence. AI as a buzzword and as a set of practices really goes back of course to the early post-World War II era, as we know Alan Turing and the Imitation Game and so forth. There are other developers, theorists, academics in the '40s and the '50s and '60s that pioneered in this field. So we don't want to give Alan Turing too much credit, but he was clearly a mathematician who laid down the theoretical framework for much of what we now call Artificial Intelligence. But when you look at Artificial Intelligence as a ever-evolving set of practices, where it began was in an area that focused on deterministic rules, rule-driven expert systems, and that was really the state of the art of AI for a long, long time. And so you had expert systems in a variety of areas that became useful or used in business, and science, and government and so forth. Cut ahead to the turn of the millennium, we are now in the 21st century, and what's different, the new wine, is big data, larger and larger data sets that can reveal great insights, patterns, correlations that might be highly useful if you have the right statistical modeling tools and approaches to be able to surface up these patterns in an automated or semi-automated fashion. So one of the core areas is what we now call machine learning, which really is using statistical models to infer correlations, anomalies, trends, and so forth in the data itself, and machine learning, the core approach for machine learning is something called Artificial Neural Networks, which is essentially modeling a statistical model along the lines of how, at a very high level, the nervous system is made up, with neurons connected by synapses, and so forth. It's an analog in statistical modeling called a perceptron. The whole theoretical framework of perceptrons actually got started in the 1950s with the first flush of AI, but didn't become a practical reality until after the turn of this millennium, really after the turn of this particular decade, 2010, when we started to see not only very large big data sets emerge and new approaches for managing it all, like Hadoop, come to the fore. 
But we've seen artificial neural nets get more sophisticated in terms of their capabilities, and a new approach for doing machine learning, artificial neural networks, with deeper layers of perceptrons, neurons, called deep learning has come to the fore. With deep learning, you have new algorithms like convolutional neural networks, recurrent neural networks, generative adversarial neural networks. These are different ways of surfacing up higher level abstractions in the data, for example for face recognition and object recognition, voice recognition and so forth. These all depend on this new state of the art for machine learning called deep learning. So what we have now in the year 2017 is we have quite a mania for all things AI, much of it is focused on deep learning, much of it is focused on tools that your average data scientist or your average developer increasingly can use and get very productive with and build these models and train and test them, and deploy them into working applications like going forward, things like autonomous vehicles would be impossible without this. >> Right, and we'll get some of that. But so you're saying that machine learning is essentially math that infers patterns from data. And math, it's new math, math that's been around for awhile or. >> Yeah, and inferring patterns from data has been done for a long time with software, and we have some established approaches that in many ways predate the current vogue for neural networks. We have support vector machines, and decision trees, and Bayesian logic. These are different ways of approaches statistical for inferring patterns, correlations in the data. They haven't gone away, they're a big part of the overall AI space, but it's a growing area that I've only skimmed the surface of. >> And they've been around for many many years, like SVM for example. Okay, now describe further, add some color to deep learning. You sort of painted a picture of this sort of deep layers of these machine learning algorithms and this network with some depth to it, but help us better understand the difference between machine learning and deep learning, and then ultimately AI. >> Yeah, well with machine learning generally, you know, inferring patterns from data that I said, artificial neural networks of which the deep learning networks are one subset. Artificial neural networks can be two or more layers of perceptrons or neurons, they have relationship to each other in terms of their activation according to various mathematical functions. So when you look at an artificial neural network, it basically does very complex math equations through a combination of what they call scalar functions, like multiplication and so forth, and then you have these non-linear functions, like cosine and so forth, tangent, all that kind of math playing together in these deep structures that are triggered by data, data input that's processed according to activation functions that set weights and reset the weights among all the various neural processing elements, that ultimately output something, the insight or the intelligence that you're looking for, like a yes or no, is this a face or not a face, that these incoming bits are presenting. Or it might present output in terms of categories. What category of face is this, a man, a woman, a child, or whatever. 
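The layered arithmetic Jim has just described, weighted sums passed through nonlinear activation functions, stage after stage, can be written down in a few lines. A minimal NumPy sketch of an untrained forward pass; the layer sizes and the ReLU/sigmoid activations are illustrative stand-ins for the scalar and nonlinear functions he mentions:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b, activation):
    # The "scalar" part (a matrix multiply plus bias) followed by a nonlinear one.
    return activation(x @ w + b)

relu = lambda z: np.maximum(z, 0.0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Untrained random weights for a tiny 3-layer network:
# 64 input features -> 32 hidden -> 16 hidden -> 1 "is it a face?" score.
w1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 1)), np.zeros(1)

x = rng.normal(size=(1, 64))          # one input sample (e.g. image features)
h1 = layer(x, w1, b1, relu)           # low-level features
h2 = layer(h1, w2, b2, relu)          # higher-level abstractions
y = layer(h2, w3, b3, sigmoid)        # final yes/no style output

print(f"face score (meaningless until the weights are trained): {y[0, 0]:.3f}")
```

Training is then the process of adjusting those weight matrices until the output matches labeled examples, which is the supervised learning step Jim comes back to below.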
What I'm getting at is that so deep learning is more layers of these neural processing elements that are specialized to various functions to be able to abstract higher level phenomena from the data, it's not just, "Is this a face," but if it's a scene recognition deep learning network, it might recognize that this is a face that corresponds to a person named Dave who also happens to be the father in the particular family scene, and by the way this is a family scene that this deep learning network is able to ascertain. What I'm getting at is those are the higher level abstractions that deep learning algorithms of various sorts are built to identify in an automated way. >> Okay, and these in your view all fit under the umbrella of artificial intelligence, or is that sort of an uber field that we should be thinking of. >> Yeah, artificial intelligence as the broad envelope essentially refers to any number of approaches that help machines to think like humans, essentially. When you say, "Think like humans," what does that mean actually? To do predictions like humans, to look for anomalies or outliers like a human might, you know separate figure from ground for example in a scene, to identify the correlations or trends in a given scene. Like I said, to do categorization or classification based on what they're seeing in a given frame or what they're hearing in a given speech sample. So all these cognitive processes just skim the surface, or what AI is all about, automating to a great degree. When I say cognitive, but I'm also referring to affective like emotion detection, that's another set of processes that goes on in our heads or our hearts, that AI based on deep learning and so forth is able to do depending on different types of artificial neural networks are specialized particular functions, and they can only perform these functions if A, they've been built and optimized for those functions, and B, they have been trained with actual data from the phenomenon of interest. Training the algorithms with the actual data to determine how effective the algorithms are is the key linchpin of the process, 'cause without training the algorithms you don't know if the algorithm is effective for its intended purpose, so in Wikibon what we're doing is in the whole development process, DevOps cycle, for all things AI, training the models through a process called supervised learning is absolutely an essential component of ascertaining the quality of the network that you've built. >> So that's the calibration and the iteration to increase the accuracy, and like I say, the quality of the outcome. Okay, what are some of the practical applications that you're seeing for AI, and ML, and DL. >> Well, chat bots, you know voice recognition in general, Siri and Alexa, and so forth. Without machine learning, without deep learning to do speech recognition, those can't work, right? Pretty much in every field, now for example, IT service management tools of all sorts. When you have a large network that's logging data at the server level, at the application level and so forth, those data logs are too large and too complex and changing too fast for humans to be able to identify the patterns related to issues and faults and incidents. So AI, machine learning, deep learning is being used to fathom those anomalies and so forth in an automated fashion to be able to alert a human to take action, like an IT administrator, or to be able to trigger a response work flow, either human or automated. 
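As a concrete illustration of the log-anomaly use case Jim describes, here is a minimal sketch using scikit-learn's IsolationForest on made-up server telemetry; the metrics, values, and contamination setting are assumptions for the example, not any vendor's product behavior:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Pretend server telemetry: rows of (requests/sec, error rate, p99 latency ms).
normal = np.column_stack([
    rng.normal(500, 50, 10_000),
    rng.normal(0.01, 0.005, 10_000),
    rng.normal(120, 15, 10_000),
])
model = IsolationForest(contamination=0.001, random_state=0).fit(normal)

# New observations streaming in; the second one is an incident-like spike.
incoming = np.array([
    [510, 0.012, 118],
    [495, 0.30, 900],
])
flags = model.predict(incoming)   # +1 = looks normal, -1 = anomaly
for row, flag in zip(incoming, flags):
    if flag == -1:
        print("alert an operator / trigger the response workflow for:", row)
```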
So AI within IT service management, hot hot topic, and we're seeing a lot of vendors incorporate that capability into their tools. Like I said, in the broad world we live in in terms of face recognition and Facebook, the fact is when I load a new picture of myself or my family or even with some friends or brothers in it, Facebook knows lickity-split whether it's my brother Tom or it's my wife or whoever, because of face recognition which obviously depends, well it's not obvious to everybody, depends on deep learning algorithms running inside Facebook's big data network, big data infrastructure. They're able to immediately know this. We see this all around us now, speech recognition, face recognition, and we just take it for granted that it's done, but it's done through the magic of AI. >> I want to get to the development angle scenario that you specialize in. Part of the reason why you came to Wikibon is to really focus on that whole application development angle. But before we get there, I want to follow the data for a bit 'cause you mentioned that was really the catalyst for the resurgence in AI, and last week at the Wikibon research meeting we talked about this three-tiered model. Edge, as the edge piece, and then something in the middle which is this aggregation point for all this edge data, and then cloud which is where I guess all the deep modeling occurs, so sort of a three-tier model for the data flow. >> Jim: Yes. >> So I wonder if you could comment on that in the context of AI, it means more data, more I guess opportunities for machine learning and digital twins, and all this other cool stuff that's going on. But I'm really interested in how that is going to affect the application development and the programming model. John Furrier has a phrase that he says that, "Data is the new development kit." Well, if you got all this data that's distributed all over the place, that changes the application development model, at least you think it does. So I wonder if you could comment on that edge explosion, the data explosion as a result, and what it means for application development.
It's doing inferences that either autonomously or in conjunction with inferences that are being made through deep learning and machine learning algorithms that are executing in those intermediary hubs like you described, or back in the cloud, or in a combination of all of that. But ultimately, the results of all those analytics, all those deep learning models, feed the what we call actuation of the car itself. Should it stop, should it put on the brakes 'cause it's about to hit a wall, should it turn right, should it turn left, should it slow down because it happened to have entered a new speed zone or whatever. All of the decisions, the actions that the edge device, like a car would be an edge device in this scenario, are being driven by evermore complex algorithms that are trained by data. Now, let's stay with the autonomous vehicle because that's an extreme case of a very powerful edge device. To train an autonomous vehicle you need of course lots and lots of data that's acquired from possibly a prototype that you, a Google or a Tesla, or whoever you might be, have deployed into the field or your customers are using, B, proving grounds like there's one out by my stomping ground out in Ann Arbor, a proving ground for the auto industry for self-driving vehicles and gaining enough real training data based on the operation of these vehicles in various simulated scenarios, and so forth. This data is used to build and iterate and refine the algorithms, the deep learning models that are doing the various operations of not only the vehicles in isolation but the vehicles operating as a fleet within an entire end to end transportation system. So what I'm getting at, is if you look at that three-tier model, then the edge device is the car, it's running under its own algorithms, the middle tier the hub might be a hub that's controlling a particular zone within a traffic system, like in my neck of the woods it might be a hub that's controlling congestion management among self-driving vehicles in eastern Fairfax County, Virginia. And then the cloud itself might be managing an entire fleet of vehicles, let's say you might have an entire fleet of vehicles under the control of say an Uber, or whatever is managing its own cars from a cloud-based center. So when you look at the tiering model that analytics, deep learning analytics is being performed, increasingly it will be for various, not just self-driving vehicles, through this tiered model, because the edge device needs to make decisions based on local data. The hub needs to make decisions based on a wider view of data across a wider range of edge entities. And then the cloud itself has responsibility or visibility for making deep learning driven determinations for some larger swath. And the cloud might be managing both the deep learning driven edge devices, as well as monitoring other related systems that self-driving network needs to coordinate with, like the government or whatever, or police. >> So envisioning that three-tier model then, how does the programming paradigm change and evolve as a result of that. 
>> Yeah, the programming paradigm is the modeling itself, the building and the training and the iterating of the models generally will stay centralized, meaning to do all these functions, I mean to do modeling and training and iteration of these models, you need teams of data scientists and other developers who are both adept at statistical modeling, who are adept at acquiring the training data, at labeling it, labeling is an important function there, and who are adept at basically developing and deploying one model after another in an iterative fashion through DevOps, through a standard release pipeline with version controls, and so forth built in, the governance built in. And really, it needs to be a centralized function, and it's also very compute and data intensive, so you need storage resources, you need large clouds full of high performance computing, and so forth, to be able to handle these functions over and over. Now the edge devices themselves will feed in the data, just the data, that is fed into the centralized platform where the training and the modeling is done. So what we're going to see is more and more centralized modeling and training with decentralized execution of the actual inferences that are driven by those models; that's the way it works in this distributed environment. >> It's the Holy Grail. All right, Jim, we're out of time but thanks very much for helping us unpack and giving us the skinny on machine learning. >> Jim: It's a fat stack. >> Great to have you in the office and to be continued. Thanks again. >> Jim: Sure. >> All right, thanks for watching everybody. This is Dave Vellante with Jim Kobielus, and you're watching theCUBE at the Marlboro offices. See ya next time. (upbeat music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Jim Kobelius | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
John Farrier | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
21st century | DATE | 0.99+ |
James Kobielus | PERSON | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
Alan Turing | PERSON | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Siri | TITLE | 0.99+ |
two | QUANTITY | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
Alexa | TITLE | 0.99+ |
Marlboro | LOCATION | 0.99+ |
Tom | PERSON | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
10 years | QUANTITY | 0.98+ |
Ann Arbor | LOCATION | 0.98+ |
1950s | DATE | 0.98+ |
both | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
Marlboro, Massachusetts | LOCATION | 0.97+ |
one | QUANTITY | 0.96+ |
2017 | DATE | 0.95+ |
three-tier | QUANTITY | 0.95+ |
2010 | DATE | 0.95+ |
World War II | EVENT | 0.95+ |
first flush | QUANTITY | 0.94+ |
three-tier model | QUANTITY | 0.93+ |
Alan Turing | TITLE | 0.88+ |
'50s | DATE | 0.88+ |
eastern Fairfax County, Virginia | LOCATION | 0.87+ |
The Skinny on Machine Intelligence | TITLE | 0.87+ |
Wikibon | TITLE | 0.87+ |
one model | QUANTITY | 0.86+ |
'40s | DATE | 0.85+ |
Cube | ORGANIZATION | 0.84+ |
DevOps | TITLE | 0.83+ |
three-tiered | QUANTITY | 0.82+ |
one subset | QUANTITY | 0.81+ |
The Skinny | ORGANIZATION | 0.81+ |
'60s | DATE | 0.8+ |
Imitation Game | TITLE | 0.79+ |
more layers | QUANTITY | 0.74+ |
theCUBE | ORGANIZATION | 0.73+ |
SiliconANGLE Media | ORGANIZATION | 0.72+ |
post- | DATE | 0.56+ |
decade | DATE | 0.46+ |