
September 20, 2018: Peter Burris with Kurt Kuckein (DDN) and Darren Johnson (NVIDIA)


 

>> What up, universe? Welcome to our theCUBE Conversation from our fantastic studios in beautiful Palo Alto, California. Today we're going to be talking about what infrastructure can do to accelerate AI. Specifically, we're going to use a burgeoning relationship between DDN and NVIDIA to describe what we can do to accelerate AI workloads by using higher-performance, smarter, and more focused infrastructure for computing. Now, to have this conversation we've got two great guests here: Kurt Kuckein, who's the senior director of marketing at DDN, and Darren Johnson, who's the global director of technical marketing for Enterprise at NVIDIA. Kurt, Darren, welcome to theCUBE.

>> Thanks for having us.

>> Thank you very much.

>> So let's get going on this, because this is a very, very important topic. And I think it all starts with this notion that there is a relationship that you guys have put forth. Kurt, why don't you describe it?

>> So what we're announcing today is the DDN A3I architecture, powered by NVIDIA. It is a full, rack-level solution, a reference architecture that's been fully integrated and fully tested to deliver an AI infrastructure very simply and very completely.

>> So if we think about how or why this is important: AI workloads clearly place a special stress on the underlying technology. Darren, talk to us a little bit about the nature of these workloads, and why in particular things like GPUs and other technologies are so important to make them go fast.

>> Absolutely. As you probably know, AI is all about the data. Whether you're doing medical imaging or natural language processing, whatever it is, it's all driven by the data. The more data you have, the better results you get. But to drive that data into the GPUs, you need great IO. And that's why we're here today: to talk about DDN, the partnership, and how to bring that IO to the GPUs on our DGX platforms.

>> So if we think about what you describe, it's a lot of small files, often randomly distributed, with nonetheless very high-profile jobs that just can't stop midstream and start over.

>> Absolutely. And if you think about the history of high-performance computing, which is very similar to AI, IO really is just that: lots of files, and you have to get them there with low latency and high throughput. That's why DDN's nearly 20 years of experience working in that exact same domain is perfect. You get the parallel file system, which gives you that throughput and that low latency, and just helps drive the GPUs.

>> So you mentioned HPC and twenty years of experience. Now, it used to be that with HPC you'd have some scientists with a bunch of graduate students setting up some of these big, honking machines. But now we're moving into the commercial domain. You don't have graduate students running around. You don't have very low-cost, high-quality people here. So there are a lot of administrators who are nonetheless good people but want to learn. How does this relationship actually start bringing AI within reach of the commercial world? Kurt, why don't-

>> That's exactly where this reference architecture comes in, right. A customer doesn't need to start from scratch. They have a design now that allows them to quickly implement AI; it's something that's really easily deployable. We've fully integrated this solution. DDN has made changes to our parallel file system appliance to integrate directly within the DGX-1 environment. That makes it even easier to deploy from there and extract the maximum performance out of this without having to run around, tune a bunch of knobs, and change a bunch of settings; it's really going to work out of the box.

>> And you know, it's really more than just the DGX-1, and more than hardware. You've done a lot of optimization of different AI toolkits, et cetera. Talk a little about that, Darren.

>> Yeah, so going back to the example you used: where the past had researchers running HPC, what we have today are data scientists. Data scientists understand PyTorch, they understand TensorFlow, they understand the frameworks. They don't want to understand the underlying file system, networking, RDMA, InfiniBand, any of that. They just want to be able to come in, run their TensorFlow, get the data, get the results, and just keep churning that, whether it's a single GPU or 90 DGXs or as many DGXs as you want. So this solution helps bring that to customers much more easily, so those data scientists don't have to be system administrators.

>> So, a reference architecture that makes things easier. But it's more than just for some of these commercial things. It's also the overall ecosystem: you have application providers, application developers. How is this going to impact the broader ecosystem that's growing up around the need to deliver AI-related outcomes?

>> Well, I think the one point that Darren was getting to there, and one of the big impacts, is that these ecosystems reach a point where they're going to need to scale. That's somewhere where DDN has tons of experience. Many customers are starting off with smaller data sets; they still need the performance, and the parallel file system in that case is going to deliver that performance. But then, as they grow, going from one GPU to 90 DGXs is going to demand an incredible amount of performance scalability from their IO, as well as, probably, capacity scalability. And that's another thing we've made easy with A3I: being able to scale that environment seamlessly within a single namespace, so that people don't have to deal with, again, a lot of tuning and turning of knobs to make this stuff work really well and drive the outcomes they need as they're successful. In the end, it is the application that's most important to both of us. It's not the infrastructure; it's making the discoveries faster, processing the information out in the field faster, doing the analysis of the MRI faster, and helping the doctor, helping anybody who's using this, to really make faster, better decisions.

>> Exactly. And just to add to that, in the automotive industry you have data sets that range from 50 to 500 petabytes, and you need access to all that data, all the time, because you're constantly training and retraining to create better models and better autonomous vehicles. And you need the performance to do that. DDN helps bring that to bear, and with this reference architecture, simplifies it. So NVIDIA GPUs, plus their ecosystem of software, plus DDN, is a match made in heaven.

>> Darren Johnson of NVIDIA, Kurt Kuckein of DDN, thanks very much for being on theCUBE.

>> Thank you very much.

>> Glad I could be here.

>> And I'm Peter Burris, and once again I'd like to thank you for watching this CUBE Conversation. Until next time.
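To make Darren's point about data scientists concrete: the workflow he describes is one where a training job simply points a framework input pipeline at a directory on the shared file system mount, and everything underneath (parallel file system, RDMA, InfiniBand) is handled by the infrastructure. The sketch below is a minimal, hypothetical TensorFlow tf.data pipeline of that kind; the mount path, file pattern, and record layout are illustrative assumptions, not part of the DDN A3I reference architecture.

```python
import tensorflow as tf

# Hypothetical mount point for a shared parallel file system exported to every
# DGX node; the path, file pattern, and TFRecord layout are illustrative only.
DATA_DIR = "/mnt/parallel_fs/imagenet/train"


def parse_example(serialized):
    # Assumed TFRecord layout: a JPEG-encoded image plus an integer label.
    features = tf.io.parse_single_example(
        serialized,
        {
            "image": tf.io.FixedLenFeature([], tf.string),
            "label": tf.io.FixedLenFeature([], tf.int64),
        },
    )
    image = tf.io.decode_jpeg(features["image"], channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image, features["label"]


def make_dataset(batch_size=256):
    # Many files read in parallel: interleave keeps multiple reads in flight so
    # that the file system, not the input pipeline, sets the pace for the GPUs.
    files = tf.data.Dataset.list_files(DATA_DIR + "/*.tfrecord", shuffle=True)
    dataset = files.interleave(
        tf.data.TFRecordDataset,
        cycle_length=16,
        num_parallel_calls=tf.data.AUTOTUNE,
    )
    dataset = dataset.map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    dataset = dataset.shuffle(10_000).batch(batch_size)
    return dataset.prefetch(tf.data.AUTOTUNE)


if __name__ == "__main__":
    # A data scientist works at this level; storage details stay below the mount.
    for images, labels in make_dataset().take(1):
        print(images.shape, labels.shape)
```

The same pattern applies whether the job runs on a single GPU or is distributed across many DGX nodes reading from the same namespace; only the mount point has to be visible everywhere.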

Published Date: Sep 28, 2018


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
NVIDIA | ORGANIZATION | 0.99+
DDN | ORGANIZATION | 0.99+
Kurt | PERSON | 0.99+
Kurt Kuckein | PERSON | 0.99+
Darren Johnson | PERSON | 0.99+
Darren | PERSON | 0.99+
twenty years | QUANTITY | 0.99+
Peter Burns | PERSON | 0.99+
Palo Alto, California | LOCATION | 0.99+
both | QUANTITY | 0.99+
Today | DATE | 0.99+
50 | QUANTITY | 0.99+
90 DJX | QUANTITY | 0.98+
500 petabytes | QUANTITY | 0.98+
today | DATE | 0.98+
two great guests | QUANTITY | 0.97+
one GPU | QUANTITY | 0.97+
one | QUANTITY | 0.96+
one point | QUANTITY | 0.95+
nearly 20 years | QUANTITY | 0.94+
InfiniData | ORGANIZATION | 0.92+
single name | QUANTITY | 0.86+
theCUBE | ORGANIZATION | 0.83+
DGX-1 | TITLE | 0.83+
A3I | OTHER | 0.82+
Peter | PERSON | 0.78+
single GPU | QUANTITY | 0.7+
Johnson | PERSON | 0.54+
Kuckein | PERSON | 0.51+
A3I | TITLE | 0.5+