Kurt Kuckein, DDN Storage, and Darrin Johnson, NVIDIA | CUBEConversation, Sept 2018
9_20_18 with Peter, Kuckein & Johnson DDN
>> Hi, I'm Peter Burris, and welcome to another theCUBE conversation from our fantastic studios in beautiful Palo Alto, California. Today we're going to be talking about what infrastructure can do to accelerate AI. And specifically we're going to use a relationship, a burgeoning relationship, between DDN and NVIDIA to describe what we can do to accelerate AI workloads by using higher performance, smarter, and more focused infrastructure for computing. Now to have this conversation, we've got two great guests here. We've got Kurt Kuckein, who's the senior director of marketing at DDN. And also Darrin Johnson, who's the global director of technical marketing for Enterprise at NVIDIA. Kurt, Darrin, welcome to theCUBE. >> Thanks for having us. >> Thank you very much. >> So let's get going on this, because this is a very, very important topic. And I think it all starts with this notion that there is a relationship that you guys put forth. Kurt, why don't you describe it. >> So what we're announcing today is DDN's A3I architecture, powered by NVIDIA. So it is a full, rack-level solution, a reference architecture that's been fully integrated and fully tested to deliver an AI infrastructure very simply, very completely. >> So if we think about how, or why, this is important: AI workloads clearly put a special stress on the underlying technology. Darrin, talk to us a little bit about the nature of these workloads, and why in particular things like GPUs and other technologies are so important to make them go fast. >> Absolutely. And as you probably know, AI is all about the data. Whether you're doing medical imaging, or whether you're doing natural language processing, whatever it is, it's all driven by the data. The more data that you have, the better results that you get. But to drive that data into the GPUs, you need great IO. And that's why we're here today, to talk about DDN and the partnership, and how to bring that IO to the GPUs on our DGX platforms. >> So if we think about what you describe: a lot of small files, often randomly distributed, with nonetheless very high-profile jobs that just can't stop midstream and start over. >> Absolutely. And if you think about the history of high-performance computing, which is very similar to AI, really IO is just that: lots of files, you have to get it there, low latency, high throughput, and that's why DDN's nearly 20 years of experience working in that exact same domain is perfect. Because you get the parallel file system, which gives you that throughput, gives you that low latency, and just helps drive the GPU. >> So you mentioned HPC and twenty years of experience. Now, it used to be that in HPC you'd have some scientists with a bunch of graduate students setting up some of these big, honking machines. But now we're moving into the commercial domain. You don't have graduate students running around; you don't have that very low cost, high quality labor. Instead you have a lot of administrators, good people, but with a lot to learn. So how does this relationship actually start bringing AI within reach of the commercial world? Kurt, why don't- >> That's exactly where this reference architecture comes in, right. So a customer doesn't need to start from scratch. They have a design now that allows them to quickly implement AI. It's something that's really easily deployable. We've fully integrated this solution. DDN has made changes to our parallel file system appliance to integrate directly within the DGX-1 environment.
That makes it even easier to deploy from there, and extract the maximum performance out of this without having to run around and tune a bunch of knobs, change a bunch of settings. It's really going to work out of the box. >> And you know, NVIDIA has done more than just the DGX-1; it's more than hardware. You've done a lot of optimization of different AI toolkits, et cetera. Talk a little about that, Darrin. >> Yeah so, I mean, talking about the example I used, researchers in the past with HPC, what we have today are data scientists. Data scientists understand PyTorch, they understand TensorFlow, they understand the frameworks. They don't want to understand the underlying file system, networking, RDMA, InfiniBand, any of that. They just want to be able to come in, run their TensorFlow, get the data, get the results. And just churn that, keep churning that, whether it's a single GPU or 90 DGXs or as many DGXs as you want. So this solution helps bring that to customers much more easily, so those data scientists don't have to be system administrators. >> So, a reference architecture that makes things easier. But it's more than just for some of these commercial things. It's also the overall ecosystem: you have application providers, application developers. How is this going to impact the aggregate ecosystem that's growing up around the need to do AI-related outcomes? >> Well, I think the one point that Darrin was getting to there, and one of the big impacts, is also as these ecosystems reach a point where they're going to need to scale. That's somewhere where DDN has tons of experience. So many customers are starting off with smaller data sets; they still need the performance, and the parallel file system in that case is going to deliver that performance. But then also, as they grow, going from one GPU to 90 DGXs is going to demand an incredible amount of both performance scalability from their IO, as well as probably capacity scalability. And that's another thing that we've made easy with A3I: being able to scale that environment seamlessly, within a single namespace, so that people don't have to deal with a lot of, again, tuning and turning of knobs to make this stuff work really well and drive those outcomes that they need as they're successful. In the end, it is the application that's most important to both of us. It's not the infrastructure: it's making the discoveries faster, it's processing the information out in the field faster, it's doing analysis of the MRI faster, and helping the doctor, helping anybody who's using this to really make faster decisions, better decisions. >> Exactly. And just to add to that, in the automotive industry you have data sets that are from 50 to 500 petabytes, and you need access to all that data, all the time, because you're constantly training and retraining to create better models, to create better autonomous vehicles. And you need the performance to do that. DDN helps bring that to bear, and with this reference architecture, simplifies it. So you get the value add of NVIDIA GPUs, plus its ecosystem of software, plus DDN: it's a match made in heaven. >> Darrin Johnson, NVIDIA, Kurt Kuckein, DDN. Thanks very much for being on theCube. >> Thank you very much. >> Glad I could be here. >> And I'm Peter Burris, and once again I'd like to thank you for watching this Cube Conversation. Until next time.
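A concrete, if hypothetical, illustration of the point Darrin makes above: the data scientist's only lever is the framework-level input pipeline, and it is that pipeline, together with the storage behind it, that keeps the GPU saturated. The sketch below is not from DDN or NVIDIA; the file path, record format, and tuning values are assumptions chosen for illustration.

```python
import tensorflow as tf

def make_dataset(file_pattern, batch_size=256):
    """Minimal input pipeline: many small files, read in parallel, prefetched."""
    files = tf.data.Dataset.list_files(file_pattern, shuffle=True)
    # Interleave reads across many small files -- the random, small-file
    # access pattern the interview describes, issued concurrently.
    ds = files.interleave(
        tf.data.TFRecordDataset,
        cycle_length=16,                       # concurrent file readers
        num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.shuffle(10_000).batch(batch_size)
    # Prefetch so the next batch is being read from storage while the GPU
    # trains on the current one -- "keeping the GPU saturated" in practice.
    return ds.prefetch(tf.data.AUTOTUNE)

# Hypothetical mount point on a shared parallel file system.
dataset = make_dataset("/mnt/parallel_fs/train/*.tfrecord")
```

The pipeline can only hide as much storage latency as the IO subsystem can absorb; if the parallel reads stall, the prefetch buffer drains and the GPU goes idle, which is exactly the failure mode the reference architecture is meant to prevent.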
9_20_18 DDN Nvidia Launch about Benchmarking with PETER & KURT KUCKEIN
>> You know, it's great to see real benchmarking data, because this is a very important domain, and there is not a lot of benchmarking information out there around some of the other products that are available. But let's try to turn that benchmarking information into business outcomes, and to do that we've got Kurt Kuckein back from DDN. Kurt, welcome back. Let's talk a bit about how these high-value outcomes that business seeks with AI are going to be achieved as a consequence of this new performance, faster capabilities, et cetera. >> So there's a couple of considerations. The first consideration, I think, is just the selection of AI infrastructure itself. Right, we have customers telling us constantly that they don't know where to start. Now they have readily available reference architectures that tell them, hey, here's something you can implement and get installed quickly; you're up and running, running your AI from day one. >> So the decision process for what to get is reduced. >> Exactly. >> Okay. >> Number two is you're unlocking all ends of the investment with something like this, right? You're maximizing the performance on the GPU side. You're maximizing the performance on the ingest side for the storage. You're maximizing the throughput of the entire system, so you're really gaining the most out of your investment there. And not just gaining the most out of the investment, but truly accelerating the application, and that's the end goal, right, that we're looking for with customers. Plenty of people can deliver fast storage, but if it doesn't impact the application and deliver faster results, cut run times down, then what are you really gaining from having fast storage? And so that's where we're focused: we're focused on application acceleration. >> So simpler architecture, faster implementation based on that, integrated capabilities, ultimately all resulting in better application performance. >> Better application performance, and in the end something that's more reliable as well. >> Kurt, thanks again for being on The Cube. >> Thanks for having me.
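As an aside, the distinction Kurt draws between fast storage and application acceleration is measurable. The rough sketch below is not a DDN or NVIDIA tool; the directory path is hypothetical, and a serious benchmark would use a dedicated tool with the page cache bypassed. It simply times random small-file reads, the access pattern these interviews keep returning to, and reports throughput alongside a crude operations-per-second figure:

```python
import os
import random
import time

def benchmark_small_file_reads(directory, sample=1000):
    """Time whole-file reads over a random sample of files in a directory."""
    paths = [os.path.join(directory, name) for name in os.listdir(directory)]
    random.shuffle(paths)                      # randomize access order
    total_bytes = 0
    start = time.perf_counter()
    for path in paths[:sample]:
        with open(path, "rb") as f:
            total_bytes += len(f.read())       # one whole-file read per op
    elapsed = time.perf_counter() - start
    ops = min(sample, len(paths))
    print(f"{total_bytes / elapsed / 1e6:.1f} MB/s, "
          f"{ops / elapsed:.0f} file reads/s")

# Hypothetical directory of training samples on the storage under test.
benchmark_small_file_reads("/mnt/parallel_fs/train_samples")
```

The number that matters in the end, though, is the one Kurt points to: the wall-clock run time of the training job itself, with the GPU utilization curve as the tell.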
9_20_18 DDN Nvidia Launch AI & Storage with PETER & KURT KUCKEIN
>> Hi, I'm Peter Burris, welcome to another Cube Conversation from our wonderful studios in beautiful Palo Alto, California. Great conversation today: we're going to be talking about the relationship between AI, business, and especially some of the new infrastructure technologies in the storage part of the stack. And to join me in this endeavor is Kurt Kuckein, who's a senior director of product marketing at DDN. Kurt Kuckein, welcome to The Cube. >> Thanks, Peter, happy to be here. >> So tell us a little bit about DDN to start. >> So DDN is a storage company that's been around for 20 years. We've got a legacy in high-performance computing, and that's where we see a lot of similarities with this new AI workload. DDN is well-known in that HPC community; if you look at the top 100 supercomputers in the world, we're attached to 75 percent of them, and so we have a fundamental understanding of that type of scalable need. That's where we're focused: we're focused on performance requirements, we're focused on scalability requirements, which can mean multiple things, right, it can mean the scaling of performance, it can mean the scaling of capacity, and we're very flexible. >> Well let me stop you and say, so you've got a lot of customers in the high-performance world, and a lot of those customers are at the vanguard of moving to some of these new AI workloads. What are customers saying? With this significant engagement that you have with the best and the brightest out there, what are they saying about this transition to AI? >> Well I think it's fascinating that we kind of have a bifurcated customer base here, where we have those traditionalists who probably have been looking at AI for over 40 years, right, and they've been exploring this idea and they've gone through the peaks and troughs in the promise of AI, and then contraction because CPUs weren't powerful enough. Now we've got this emergence of GPUs in the supercomputing world, and if you look at how the supercomputing world has expanded in the last few years, it is through investment in GPUs. And then we've got an entirely different segment, which is a much more commercial segment, and they're maybe newly invested in this AI arena, right; they don't have the legacy of 30, 40 years of research behind them, and they are trying to figure out exactly, you know, what do I do here? A lot of companies are coming to us: hey, I have an AI initiative. Well, what's behind it? Well, we don't know yet, but we've got to have something. And they don't understand where this infrastructure is going to come from. >> So the general availability of AI technologies, and obviously flash has been a big part of that, very high-speed networks within data centers, virtualization certainly helps as well, now opens up the possibility of bringing these algorithms, some of which have been around for a long time but have required very specialized, bespoke configurations of hardware, to the enterprise. That still begs the question: there are some differences between high-performance computing workloads and AI workloads.
Let's start with what the similarities are, and then let's explore some of the differences. >> So the biggest similarity, I think, is just that it's an intractable, hard IO problem, right, at least from the storage perspective. It requires a lot of high throughput; depending on where those IO characteristics are from, it can be very small-file, very IOPS-intensive type workflows, but it needs the ability of the entire infrastructure to deliver all of that seamlessly from end to end. >> So really high-performance throughput, so that you can get to the data you need and keep this computing element saturated. >> Keeping the GPU saturated is really the key; that's where the huge investment is. >> So how do AI and HPC workloads differ? >> So how they're fundamentally different is often AI workloads operate on a smaller scale in terms of the amount of capacity, at least today's AI workloads. As soon as a project encounters success, our forecast is that those things will take off and you'll want to apply those algorithms to bigger and bigger data sets. But today, you know, we encounter things like 10-terabyte data sets, 50-terabyte data sets, and a lot of customers are focused only on that. But what happens when you're successful? How do you scale your current infrastructure to petabytes and multi-petabytes when you'll need it in the future? >> So when I think of HPC, I think of often very, very big batch jobs, very, very large, complex data sets. When I think about AI, like image processing or voice processing, whatever else it might be, I think of a lot of small files, randomly accessed. >> Right. >> That require nonetheless some very complex processing, that you don't want to have to restart all the time. >> Right. >> And a degree of simplicity that's required to make sure that you have the people that can do it. Have I got that right? >> You've got it right. Now one misconception, I think, is on the HPC side, right: that whole random small-file thing has come in in the last five, 10 years, and it's something DDN's been working on quite a bit, right. Our legacy was in high-performance throughput workloads, but the workloads have evolved so much on the HPC side as well, and, as you posited at the beginning, so much of it has become AI and deep-learning research. >> Right, so they look a lot more alike. >> They do look a lot more alike. >> So if we think about the evolving relationship now between some of these new data-first workloads, AI-oriented, change-the-way-the-business-operates types of stuff, what do you anticipate is going to be the future of the relationship between AI and storage? >> Well, what we foresee really is that the explosion in AI needs and AI capabilities is going to mimic what we already see and really drive what we see on the storage side, right? We've been showing that graph for years and years of just everything going up and to the right, but as AI starts working on itself and improving itself, as the collection mechanisms keep getting better and more sophisticated and have increased resolutions, whether you're talking about cameras or, in life sciences, acquisition capabilities, it all just keeps getting better and better, and the resolutions get better and better. It's more and more data, right? And you want to be able to expose a wide variety of data to these algorithms; that's how they're going to learn faster. And so what we see is that the data-centric part of the infrastructure is going to need to scale, even if you're starting today with a smaller workload.
>> Kurt Kuckein, DDN, thanks very much for being on The Cube. >> Thanks for having me. >> And once again, this is Peter Burris with another Cube Conversation, thank you very much for watching. Until next time. (electronic whooshing)
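The "jobs you don't want to have to restart all the time" point from the conversation above is, in practice, a checkpointing problem. Below is a minimal, hypothetical PyTorch sketch, not DDN's or NVIDIA's code, and with an assumed checkpoint path, of persisting and resuming training state so a failed multi-day job resumes from the last saved step rather than from zero:

```python
import torch

CKPT_PATH = "/mnt/parallel_fs/checkpoints/run1.pt"  # assumed shared location

def save_checkpoint(model, optimizer, step, path=CKPT_PATH):
    # Periodic snapshots turn a mid-run failure into a bounded loss of work.
    torch.save({"step": step,
                "model": model.state_dict(),
                "optimizer": optimizer.state_dict()}, path)

def load_checkpoint(model, optimizer, path=CKPT_PATH):
    state = torch.load(path)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"]        # resume the training loop from here
```

Checkpoint writes are also one reason storage bandwidth matters beyond the read path: a large model snapshot written every few minutes competes with training reads on the same subsystem.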
DDN CrowdChat | October 11, 2018
(uptempo orchestral music) >> Hi, I'm Peter Burris, and welcome to another Wikibon theCUBE special feature, a special digital community event on the relationship between AI, infrastructure and business value. It's sponsored by DDN with participation from NVIDIA, and over the course of the next hour we're going to reveal something about this special and evolving relationship between sometimes tried-and-true storage technologies and the emerging potential of AI as we try to achieve these new business outcomes. To do that, we're going to start off with a series of conversations with some thought leaders from DDN and from NVIDIA, and at the end we're going to go into a crowd chat, and this is going to be your opportunity to engage these experts directly. Ask your questions, share your stories, find out what your peers are thinking and how they're achieving their AI objectives. That's at the very end, but to start, let's begin the conversation with Kurt Kuckein, who is a senior director of marketing at DDN. >> Thanks Peter, happy to be here. >> So tell us a little bit about DDN at the start. >> So DDN is a storage company that's been around for 20 years. We've got a legacy in high performance computing, and that's where we see a lot of similarities with this new AI workload. DDN is well known in that HPC community. If you look at the top 100 supercomputers in the world, we're attached to 75% of them. And so we have the fundamental understanding of that type of scalable need; that's where we're focused. We're focused on performance requirements. We're focused on scalability requirements, which can mean multiple things. It can mean the scaling of performance. It can mean the scaling of capacity, and we're very flexible. >> Well let me stop you and say, so you've got a lot of customers in the high performance world. And a lot of those customers are at the vanguard of moving to some of these new AI workloads. What are customers saying? With this significant engagement that you have with the best and the brightest out there, what are they saying about this transition to AI? >> Well I think it's fascinating that we have a bifurcated customer base here, where we have those traditionalists who probably have been looking at AI for over 40 years, and they've been exploring this idea, and they've gone through the peaks and troughs in the promise of AI, and then contraction because CPUs weren't powerful enough. Now we've got this emergence of GPUs in the supercomputing world. And if you look at how the supercomputing world has expanded in the last few years, it is through investment in GPUs. And then we've got an entirely different segment, which is a much more commercial segment, and they may be newly invested in this AI arena. They don't have the legacy of 30, 40 years of research behind them, and they are trying to figure out exactly what do I do here. A lot of companies are coming to us: hey, I have an AI initiative. Well, what's behind it? We don't know yet, but we've got to have something. And they don't understand where this infrastructure is going to come from. >> So the general availability of AI technologies, and obviously flash has been a big part of that, very high speed networks within data centers, virtualization certainly helps as well, now opens up the possibility of bringing these algorithms, some of which have been around for a long time but have required very specialized, bespoke configurations of hardware, to the enterprise. That still begs the question: there are some differences between high performance computing workloads and AI workloads. Let's start with what the similarities are, and let's explore some of the differences. >> So the biggest similarity, I think, is that it's an intractable, hard IO problem. At least from the storage perspective, it requires a lot of high throughput. Depending on where those IO characteristics are from, it can be very small-file, IOPS-intensive type workflows, but it needs the ability of the entire infrastructure to deliver all of that seamlessly from end to end. >> So really high performance throughput, so that you can get to the data you need and keep this computing element saturated. >> Keeping the GPU saturated is really the key. That's where the huge investment is. >> So how do AI and HPC workloads differ? >> So how they are fundamentally different is often AI workloads operate on a smaller scale in terms of the amount of capacity, at least today's AI workloads, right? As soon as a project encounters success, our forecast is that those things will take off and you'll want to apply those algorithms to bigger and bigger data sets. But today, we encounter things like 10 terabyte data sets, 50 terabyte data sets, and a lot of customers are focused only on that. But what happens when you're successful? How do you scale your current infrastructure to petabytes and multi-petabytes when you'll need it in the future? >> So when I think of HPC, I think of often very, very big batch jobs, very, very large complex datasets. When I think about AI, like image processing or voice processing, whatever else it might be, I think of a lot of small files, randomly accessed, that require nonetheless some very complex processing that you don't want to have to restart all the time, and a degree of simplicity that's required to make sure that you have the people who can do it. Have I got that right? >> You've got that right. Now one misconception, I think, is on the HPC side: that whole random small-file thing has come in in the last five, 10 years, and it's something DDN has been working on quite a bit. Our legacy was in high performance throughput workloads, but the workloads have evolved so much on the HPC side as well, and as you posited at the beginning, so much of it has become AI and deep learning research. >> Right, so they look a lot more alike. >> They do look a lot more alike. >> So if we think about the evolving relationship now between some of these new data-first workloads, AI-oriented, change-the-way-the-business-operates type of stuff, what do you anticipate is going to be the future of the relationship between AI and storage? >> Well, what we foresee really is that the explosion in AI needs and AI capability is going to mimic what we already see, and really drive what we see on the storage side. We've been showing that graph for years and years of just everything going up and to the right, but as AI starts working on itself and improving itself, as the collection mechanisms keep getting better and more sophisticated and have increased resolutions, whether you're talking about cameras or, in life sciences, acquisition capabilities, it all just keeps getting better and better, and the resolutions get better and better. It's more and more data, right, and you want to be able to expose a wide variety of data to these algorithms. That's how they're going to learn faster. And so what we see is that the data-centric part of the infrastructure is going to need to scale, even if you're starting today with a small workload.

>> Kurt, thank you very much, great conversation. How does this turn into value for users? Well, let's take a look at some use cases that come out of these technologies. >> DDN A3I with NVIDIA DGX-1 is a fully integrated and optimized technology solution that provides end-to-end acceleration for a wide variety of AI and DL use cases at any scale. The platform provides tremendous flexibility and supports a wide variety of workflows and data types. Already today, customers in industry, academia and government all around the globe are leveraging DDN A3I with NVIDIA DGX-1 for their AI and DL efforts. In this first example use case, DDN A3I enables a life sciences research laboratory to accelerate a microscopy capture and analysis pipeline. On the top half of the slide is the legacy pipeline, which displays low resolution results from a microscope with a three minute delay. On the bottom half of the slide is the accelerated pipeline, where DDN A3I with NVIDIA DGX-1 delivers results in real time, 200 times faster and with much higher resolution than the legacy pipeline. This use case demonstrates how a single unit deployment of the solution can enable researchers to achieve better science and the fastest time to results without the need to build out complex IT infrastructure. The white paper for this example use case is available on the DDN website. In the second example use case, DDN A3I with NVIDIA DGX-1 enables an autonomous vehicle development program. The process begins in the field, where an experimental vehicle generates a wide range of telemetry that's captured on a mobile deployment of the solution. The vehicle data is used to train capabilities locally in the field, which are transmitted to the experimental vehicle. Vehicle data from the fleet is captured to a central location, where a large DDN A3I with NVIDIA DGX-1 solution is used to train more advanced capabilities, which are transferred back to experimental vehicles in the field. The central facility also uses the large data sets in the repository to train experimental vehicles in simulated environments to further advance the AV program. This use case demonstrates the scalability, flexibility and edge-to-data-center capability of the solution. DDN A3I with NVIDIA DGX-1 brings together industry leading compute, storage and network technologies in a fully integrated and optimized package that makes it easy for customers in all industries around the world to pursue breakthrough business innovation using AI and DL.

>> Ultimately, this industry is driven by what users must do, the outcomes they try to seek. But it's always made easier and faster when you've got great partnerships working on some of these hard technologies together. Let's hear how DDN and NVIDIA are working together to try to deliver new classes of technology capable of making these AI workloads scream. Specifically, we've got Kurt Kuckein coming back. He's a senior director of marketing for DDN. And Darrin Johnson, who is global director of technical marketing for NVIDIA in the enterprise and deep learning. Today, we're going to be talking about what infrastructure can do to accelerate AI. And specifically, we're going to use a relationship, a burgeoning relationship, between DDN and NVIDIA to describe what we can do to accelerate AI workloads by using higher performance, smarter and more focused infrastructure for computing. Now to have this conversation, we've got two great guests here. We've got Kurt Kuckein, who is the senior director of marketing at DDN. And also Darrin Johnson, who's the global director of technical marketing for enterprise at NVIDIA. Kurt, Darrin, welcome to theCUBE. >> Thank you very much. >> So let's get going on this, 'cause this is a very, very important topic, and I think it all starts with this notion that there is a relationship that you guys put forward. Kurt, why don't you describe it. >> Sure, well, so what we're announcing today is DDN's A3I architecture, powered by NVIDIA. So it is a full rack-level solution, a reference architecture that's been fully integrated and fully tested to deliver an AI infrastructure very simply, very completely. >> So if we think about why this is important: AI workloads clearly put special stress on the underlying technology. Darrin, talk to us a little bit about the nature of these workloads, and why, in particular, things like GPUs and other technologies are so important to make them go fast. >> Absolutely. And as you probably know, AI is all about the data. Whether you're doing medical imaging, whether you're doing natural language processing, whatever it is, it's all driven by the data. The more data that you have, the better results that you get, but to drive that data into the GPUs, you need great IO. And that's why we're here today, to talk about DDN and the partnership, and how to bring that IO to the GPUs on our DGX platforms. >> So if we think about what you describe: a lot of small files, often randomly distributed, with nonetheless very high profile jobs that just can't stop midstream and start over. >> Absolutely. And if you think about the history of high performance computing, which is very similar to AI, really IO is just that: lots of files. You have to get it there, low latency, high throughput, and that's why DDN's nearly 20 years of experience working in that exact same domain is perfect. Because you get the parallel file system, which gives you that throughput, gives you that low latency. It just helps drive the GPU. >> So you mentioned HPC and 20 years of experience. Now, it used to be that in HPC you'd have a scientist with a bunch of graduate students setting up some of these big, honking machines. But now we're moving into the commercial domain. You don't have graduate students running around; you don't have that very low cost, high quality labor. You have a lot of administrators, good people nonetheless, but with a lot to learn. So how does this relationship actually start making, or bringing, AI within reach of the commercial world? Kurt, why don't you-- >> Yeah, that's exactly where this reference architecture comes in. So a customer doesn't need to start from scratch. They have a design now that allows them to quickly implement AI. It's something that's really easily deployable. We fully integrated the solution. DDN has made changes to our parallel file system appliance to integrate directly with the DGX-1 environment. That makes it even easier to deploy from there, and to extract the maximum performance out of this without having to run around and tune a bunch of knobs, change a bunch of settings. It's really going to work out of the box. >> And NVIDIA has done more than the DGX-1. It's more than hardware. You've done a lot of optimization of different AI toolkits, et cetera. So talk a little bit about that, Darrin. >> Going back to the example I used, researchers in the past with HPC: what we have today are data scientists. Data scientists understand PyTorch, they understand TensorFlow, they understand the frameworks. They don't want to understand the underlying file system, networking, RDMA, InfiniBand, any of that. They just want to be able to come in, run their TensorFlow, get the data, get the results, and just keep churning that, whether it's a single GPU or 90 DGXs or as many DGXs as you want. So this solution helps bring that to customers much more easily, so those data scientists don't have to be system administrators. >> So a reference architecture that makes things easier. But it's more than just for some of these commercial things. It's also the overall ecosystem: new application providers, application developers. How is this going to impact the aggregate ecosystem that's growing up around the need to do AI-related outcomes? >> Well, I think one point that Darrin was getting to there, and one of the big effects, is also as these ecosystems reach a point where they're going to need to scale. That's somewhere where DDN has tons of experience. So many customers are starting off with smaller datasets. They still need the performance, and a parallel file system in that case is going to deliver that performance. But then also, as they grow, going from one GPU to 90 DGXs is going to demand an incredible amount of both performance scalability from their IO, as well as probably capacity scalability. And that's another thing that we've made easy with A3I: being able to scale that environment seamlessly within a single namespace, so that people don't have to deal with a lot of, again, tuning and turning of knobs to make this stuff work really well and drive those outcomes that they need as they're successful. In the end, it is the application that's most important to both of us, right? It's not the infrastructure. It's making the discoveries faster. It's processing information out in the field faster. It's doing analysis of the MRI faster. Helping the doctors, helping anybody who is using this to really make faster decisions, better decisions. >> Exactly. >> And just to add to that: in the automotive industry, you have datasets that are 50 to 500 petabytes, and you need access to all that data, all the time, because you're constantly training and retraining to create better models, to create better autonomous vehicles, and you need the performance to do that. DDN helps bring that to bear, and this reference architecture simplifies it, so you get the value add of NVIDIA GPUs, plus its ecosystem of software, plus DDN. It's a match made in heaven. >> Kurt, Darrin, thank you very much. Great conversation. To learn more about what they're talking about, let's take a look at a video created by DDN to explain the product and the offering.

>> DDN A3I with NVIDIA DGX-1 is a fully integrated and optimized technology solution that enables and accelerates end-to-end data pipelines for AI and DL workloads of any scale. It is designed to provide extreme amounts of performance and capacity, backed by a jointly engineered and validated architecture. Compute is the first component of the solution. The DGX-1 delivers over one petaflop of DL training performance leveraging eight NVIDIA Tesla V100 GPUs in a 3RU appliance. The GPUs are configured in a hybrid cube mesh topology using the NVIDIA NVLink interconnect. DGX-1 delivers linearly predictable application performance and is powered by the NVIDIA DGX software stack. DDN A3I solutions can scale from single to multiple DGX-1s. Storage is the second component of the solution. The DDN AI200 is an all-NVMe parallel file storage appliance that's optimized for performance. The AI200 is specifically engineered to keep GPU computing resources fully utilized. The AI200 ensures maximum application productivity while easily managing day-to-day data operations. It's offered in three capacity options in a compact 2U chassis. The AI200 appliance can deliver up to 20 gigabytes a second of throughput and 350,000 IOPS. The DDN A3I architecture can scale up and out seamlessly over multiple appliances. The third component of the solution is a high performance, low latency, RDMA-capable network. Both EDR InfiniBand and 100 gigabit Ethernet options are available. This provides flexibility, ensuring seamless scaling and easy integration of the solution within any IT infrastructure. DDN A3I solutions with NVIDIA DGX-1 bring together industry leading compute, storage and network technologies in a fully integrated and optimized package that's easy to deploy and manage. It's backed by deep expertise and enables customers to focus on what really matters: extracting the most value from their data with unprecedented accuracy and velocity.

>> Always great to hear about the product. Let's hear the analyst's perspective. Now I'm joined by Dave Vellante, my colleague here at Wikibon and co-CEO of SiliconANGLE. Dave, welcome to theCUBE. Dave, a lot of conversations about AI. What is it about today that is making AI so important to so many businesses? >> Well, I think it's three things, Peter. The first is the data. We've been on this decade-long Hadoop bandwagon, and what that did is really focus organizations on putting data at the center of their business, and now they're trying to figure out, okay, how do we get more value out of that? So the second piece of that is technology is now becoming available. AI of course has been around forever, but the infrastructure to support it, GPUs, the processing power, flash storage, deep learning frameworks like TensorFlow, have really started to come to the marketplace. So the technology is now available to act on that data. And I think the third is people are trying to get digital right. This is all about digital transformation. Digital means data. We talk about that all the time, and every corner office is trying to figure out what their digital strategy should be. So they're trying to remain competitive, and they see automation and artificial intelligence, machine intelligence, applied to that data as a linchpin of their competitiveness. >> So a lot of people talk about the notion of data as a source of value, and the presumption in some is that it's all going to the cloud. Is that accurate? >> Oh yes, it's funny that you say that, because as you know we've done a lot of work on this, and I think the thing that organizations have realized in the last 10 years is that the idea of bringing five megabytes of compute to a petabyte of data is far more valuable than the reverse. And as a result the pendulum is really swinging in many different directions, one being the edge: data is going to stay there. And certainly the cloud is a major force, but most of the data still today lives on premises, and that's where most of the data is likely going to stay. So no, all the data is not going to go into the cloud. >> It's not the central cloud? >> That's right, the central public cloud. You can redefine the boundaries of the cloud, and the key is you want to bring that cloud-like experience to the data. We've talked about that a lot in the Wikibon and Cube communities, and that's all about the simplification and cloud business models. >> So that suggests pretty strongly that there is going to continue to be a relationship between choices about hardware infrastructure on premises and the success at making some of these advanced, complex workloads run and scream and really drive some of those innovative business capabilities. As you think about that, what is it about AI technologies, or AI algorithms and applications, that has an impact on storage decisions? >> Well, the characteristics of the workloads: oftentimes it's going to be largely unstructured data, and that's going to be small files. There's going to be a lot of those small files, and they're going to be randomly distributed, and as a result that's going to change the way in which people design systems to accommodate those workloads. There's going to be a lot more bandwidth. There's going to be a lot more parallelism in those systems in order to accommodate and keep those processors busy. And, as we're going to talk about, the workload characteristics are changing, so the fundamental infrastructure has to change as well. >> And so our goal ultimately is to ensure that we keep these new high-performing GPUs saturated by flowing data to them without a lot of spiky performance throughout the entire subsystem. Have we got that right? >> Yeah, I think that's right, and that's what I was talking about with parallelism; that's what you want to do. You want to be able to load up that processor, especially these alternative processors like GPUs, and make sure that they stay busy. The other thing is, when there's a problem, you don't want to have to restart the job. So you want to have real-time error recovery, if you will. And that's been crucial in the high performance world for a long, long time, because these jobs, as you know, take a long, long time. To the extent that you don't have to restart a job from ground zero, you can save a lot of money. >> Yeah, especially as, you said, we start to integrate some of these AI applications with some of the operational applications that are actually recording the results of the work that's being performed, or the prediction that's being made, or the recommendation that's been offered. So I think ultimately, if we start thinking about this crucial role that AI workloads are going to have in business, and that storage is going to have on AI, moving more processing closer to the data, et cetera, that suggests that there are going to be some changes in the offerings from the storage industry. What's your thinking about how the storage industry is going to evolve over time? >> Well, there's certainly a lot of hardware stuff that's going on. We always talk about software-defined, but hey, the hardware stuff matters. Obviously flash storage changed the game from spinning mechanical disk, and that's part of this. Also, as I said before, we're seeing a lot more parallelism; high bandwidth is critical. A lot of the discussion that we're having in our community is the affinity between HPC, high performance computing, and big data. I think that was pretty clear, and now that's evolving to AI. So the internal network, things like InfiniBand, are pretty important, and NVMe is coming onto the scene. So those are some of the things that we see. I think the other one is file systems. NFS tends to deal really well with unstructured data and data that is sequential. When you have all the-- >> Streaming. >> Exactly, and when you have all of this, what we just described as random nature, and you have the need for parallelism, you really need to rethink file systems. File systems are, again, a linchpin of getting the most out of these AI workloads. And the other is, if we talk about the cloud model, you've got to make this stuff simple. If we're going to bring AI and machine intelligence workloads to the enterprise, it's got to be manageable by enterprise admins. You're not going to be able to have a scientist deploy this stuff, so it's got to be simple, or cloud-like. >> Fantastic, Dave Vellante, Wikibon. Thanks very much for being on theCUBE. >> My pleasure. >> We've had the analyst's perspective. Now let's take a look at some real numbers. Not a lot of companies have delivered a rich set of benchmarks relating AI, storage and business outcomes. DDN has. Let's take a look at a video that they prepared describing the benchmarks associated with these new products.

>> DDN A3I with NVIDIA DGX-1 is a fully integrated and optimized technology solution that provides massive acceleration for AI and DL applications. DDN has engaged in extensive performance and interoperability testing programs, in close collaboration with expert technology partners and customers. Performance testing has been conducted with synthetic throughput and IOPS workloads. The results demonstrate that the DDN A3I parallel architecture delivers over 100,000 IOPS and over 10 gigabytes per second of throughput to a single DGX-1 application container. Testing with multiple containers demonstrates linear scaling up to full saturation of the DGX-1's IO capabilities. These results show concurrent IO activity from four containers, with an aggregate delivered performance of 40 gigabytes per second. The DDN A3I parallel architecture delivers true application acceleration. Extensive interoperability and performance testing has been completed with a dozen popular DL frameworks on DGX-1. The results show that with the DDN A3I parallel architecture, DL applications consistently achieve a higher training throughput and faster completion times. In this example, Caffe achieves almost eight times higher training throughput on DDN A3I, and completes over five times faster than when using a legacy file sharing architecture and protocol. Comprehensive tests and results are fully documented in the DDN A3I solutions guide, available from the DDN website. This test illustrates the DGX-1 GPU utilization and read activity from the AI200 parallel storage appliance during a TensorFlow training iteration. The green line shows that the DGX-1 GPUs achieve maximum utilization throughout the test. The red line shows the AI200 delivers a steady stream of data to the application during the training process. In the graph below, we show the same test using a legacy file sharing architecture and protocol. The green line shows that the DGX-1 never achieves full GPU utilization, and that the legacy file sharing architecture and protocol fails to sustain consistent IO performance. These results show that with DDN A3I, this DL application on the DGX-1 achieves maximum GPU productivity and completes twice as fast. This test and result is also documented in the DDN A3I solutions guide, available from the DDN website. DDN A3I solutions with NVIDIA DGX-1 bring together industry leading compute, storage and network technologies in a fully integrated and optimized package that enables widely used DL frameworks to run faster, better and more reliably.

>> You know, it's great to see real benchmarking data, because this is a very important domain, and there is not a lot of benchmarking information out there around some of these other products that are available. But let's try to turn that benchmarking information into business outcomes. And to do that we've got Kurt Kuckein back from DDN. Kurt, welcome back. Let's talk a bit about how these high-value outcomes that business seeks with AI are going to be achieved as a consequence of this new performance, faster capabilities, et cetera. >> So there are a couple of considerations. The first consideration, I think, is just the selection of AI infrastructure itself. Right, we have customers telling us constantly that they don't know where to start. Now they have readily available reference architectures that tell them: hey, here's something you can implement and get installed quickly; you're up and running your AI from day one. >> So the decision process for what to get is reduced. >> Exactly. >> Okay. >> Number two is you're unlocking all ends of the investment with something like this, right? You're maximizing the performance on the GPU side. You're maximizing the performance on the ingest side for the storage. You're maximizing the throughput of the entire system. So you're really gaining the most out of your investment there. And not just gaining the most out of your investment, but truly accelerating the application, and that's the end goal, right, that we're looking for with customers. Plenty of people can deliver fast storage, but if it doesn't impact the application and deliver faster results, cut run times down, then what are you really gaining from having fast storage? And so that's where we're focused. We're focused on application acceleration. >> So simpler architecture, faster implementation based on that, integrated capabilities, ultimately all resulting in better application performance. >> Better application performance and, in the end, something that's more reliable as well. >> Kurt Kuckein, thanks so much for being on theCUBE again. So that ends our prepared remarks. We've heard a lot of great stuff about the relationship between AI, infrastructure, especially storage, and business outcomes. But here's your opportunity to go into the crowd chat and ask your questions, get your answers, share your stories, and engage your peers and some of the experts that we've been talking with about this evolving relationship between these key technologies and what it's going to mean for business. So I'm Peter Burris. Thank you very much for listening. Let's step into the crowd chat and really engage and get those key issues addressed.
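To put the throughput figures quoted in the benchmark video above in context, here is a back-of-envelope ingest calculation. The per-GPU training rate and average sample size are illustrative assumptions, not measured or vendor-supplied numbers:

```python
# Can a ~20 GB/s appliance keep one eight-GPU DGX-1 fed during image training?
images_per_sec_per_gpu = 2500        # assumed ResNet-50-class rate on a V100
gpus = 8                             # one DGX-1
bytes_per_image = 110 * 1024         # assumed average compressed JPEG size

required_ingest = images_per_sec_per_gpu * gpus * bytes_per_image
print(f"required ingest: {required_ingest / 1e9:.2f} GB/s")  # ~2.25 GB/s
```

On these assumptions, a single DGX-1 needs only a fraction of the quoted 20 gigabytes per second, which is the point of the scaling discussion above: the headroom is what lets several DGX-1s, epoch re-reads, and checkpoint writes share one namespace without starving any GPU.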