Mark Lohmeyer, VMware | VMworld 2019


 

>> Narrator: Live from San Francisco, celebrating 10 years of high tech coverage, it's theCUBE, covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >> Well, welcome back everyone. Live CUBE coverage here in San Francisco, California for VMworld 2019. I'm John Furrier with Dave Vellante. Dave, 10 years and it continues; day one of three days of wall-to-wall coverage. Mark Lohmeyer, Senior Vice President and General Manager, Cloud Platform Business Unit at VMware, here to talk about the managed cloud for VMware. Great to see you again. >> Great to see you, yeah, thank you. >> So you're managing all of the VMware managed cloud, on AWS and Dell EMC? >> Right. >> Which was a big part of today's keynote. Obviously a big part of your investments, and you know, you always look at someone's commitment to something by how they spend their resources and their time. So give us an update; obviously a lot of resources on the VMware side. >> Mark: Right. >> To make this run the way customers want. Give us an update on what's going on. >> Yeah, yeah, I mean, so first of all, VMware Cloud on AWS, we're really pleased with the momentum we're seeing for that in the marketplace. So, if we compare what it looks like today versus a year ago, and we were talking about it a year ago, we've increased the number of customers on the service by 4x. We've increased the number of VMs on the service by 9x. That's kind of interesting 'cause it shows you that, you know, we're adding both new customers, and existing customers are expanding their investment. So, that's great to see, right? And it's powered by a lot of compelling use cases, you may have heard Pat or others talk about, most notably cloud migrations. You know, from an investment perspective, which is I think where you sort of started the question, there's significant investment from both VMware as well as AWS behind the service. You know, we say it's jointly engineered and that is absolutely the case. I mean, we literally have hundreds of engineers that are optimizing the VMware software to be delivered as a service on top of the AWS infrastructure. >> And that's a lot. Just to get nuance on this point, because in the press coverage, I've seen all the press coverage on the Microsoft and Google announcements: this is different than just Cloud Foundation, because you're talking about something completely different. This is jointly engineered. These are specific, unique things. >> Yeah, I mean, the sort of distinction I would articulate there is that in the case of VMware Cloud on AWS, it's a VMware managed, operated, supported, delivered service. Right, so it's our engineers that are pushing the bits into production in AWS. It's our engineers, if there's an incident, that deal with the, you know, with the situation. It's literally a service operated by us. In the case of what we're doing with Azure and GCP, first of all, from a customer perspective, what we heard them telling us is, many customers are using Azure, many customers are using GCP, and they'd like to have the ability to have that same VMware-consistent software stack on those clouds. But the operational model is different. So in those two cases there's a partner called CloudSimple, who's a VCPP partner, and they're taking our standard VMware Cloud Foundation software that customers use on-prem and they are operating and delivering that as a cloud service on top of those cloud platforms.
>> Just to review, so VMware Cloud on AWS and Outposts are both your responsibility; there's a two-way street there? >> Yup. (laughing) >> Which is rare with Amazon; usually it's a one-way street. My words, not yours. But so, you manage both sides of that? Is that correct? >> Mark: Yeah, that's right, that's right. >> So you're happy to sell either one? >> Absolutely, yup. >> Right, and then the Dell EMC version is kind of the on-prem version of Outposts, if you will. Is that a fair characterization? >> Mark: Yeah, yeah, so. >> Without the public cloud. >> Yeah, I mean absolutely. I think one of the interesting things was, you know, we've been in market now with VMware Cloud on AWS for a couple of years. And it's going great, but one of the things we've heard from customers was, "Hey, we sort of really like this VMware managed cloud model where you're taking all of the heavy lifting of worrying about the lifecycle of the VMware software, worrying about the upgrades to the hardware, you're taking that all off of our plate. But why can't we have that same cloud delivery model back on-prem?", right, and so that was the impetus for what we originally announced as Project Dimension and are now launching this week as VMware Cloud on Dell EMC. >> So all the same benefits, but on Dell infrastructure hardware? >> So, I've got to ask you, one of the attributes of those solutions is that they're highly homogeneous, right? And Andy Jassy made a big deal about that: same control plane, same data plane. >> Mark: Right. >> So my question is, help me square the circle with multicloud, which is highly heterogeneous? (laughing) So, can I have my cake and eat it, too? Can I have this unified vision of the world, this control, same compliance, governance, security, management, etc., and have all this heterogeneity? How does that work? >> Yeah, so I think, I mean, to me it starts from what the customer would like to do, right? And what we're seeing from customers is it's increasingly a multicloud world, right, and that spans private cloud, public cloud and edge. >> Dave: You're smiling when you say that. >> Mark: Yeah, now, now-- >> The chaos is an opportunity for VMware. (laughing) >> Yeah, but it's a challenge for customers, right? And so, if you look at how VMware is trying to help there, to, as you say, square the circle: I think the first piece is this idea of consistent operations, right. We have these management tools that you can use to consistently operate those environments, whether they're based on a VMware-based infrastructure or whether they're based on a native cloud infrastructure. So if you look at our CloudHealth platform, for example, it's a great example where that service can help you get visibility into your cloud spend across different cloud platforms and services. It can help you reduce that spend over time. So that's what we refer to as consistent operations, which can span any cloud. What my team is responsible for is more in the consistent infrastructure space, and that's really all about how do we deliver consistent compute, network and storage services that span on-prem, multiple public clouds and edge. So that's really where we're bringing that same VMware Cloud Foundation stack to all those different environments. >> Mark, I want to get your thoughts on what Pat Gelsinger said in the keynote. He said, "modernize and migrate, or migrate and modernize"; he also mentioned live migration as a big feature.
>> Mark: Yes, yes. >> On the modernize-then-migrate and migrate-then-modernize, do they basically pick one? And people are doing both. >> Mark: Right, right, right. >> What does he mean by that? Give us some examples, and then what's the impact to the customer? Is it just the behavior of the customer? >> Yeah, I mean, it varies a little bit based on what the customer's trying to accomplish. But the one thing I'll say is that, historically, it was a little bit tough to have that choice. Right, so the thought was, hey, I have to refactor and re-platform everything upfront just to be able to get it to the public cloud. And then once it's there I can sort of start to modernize. And that can be a multi-year process, right? >> Yeah. >> I think one of the really interesting opportunities that we've opened up for customers with VMware Cloud on AWS is you don't necessarily have to refactor everything just to be able to get to the public cloud. We can help them migrate to the public cloud very quickly without requiring any changes if they don't want to make them. And then when they're there, they can modernize at their own pace based on the needs of the business. And so I think having that additional option is actually quite useful for customers that want to get to the cloud quickly and then from there begin to modernize. >> So two main paths, with migrate-then-modernize as the easiest one given the managed service. >> Yeah, but that being said, I think you also see a set of customers that say, "Look, digital transformation and modernization is my primary goal." Right, and for them, by enabling some of these things like native Kubernetes as a service in vSphere and in VMware Cloud on AWS, and by enabling AI and ML workloads with the NVIDIA partnership, those customers can also just start with the modernization piece, right? Directly on the-- >> So the migrate-to-modernize would be a lift and shift essentially, and then modernize? >> Mark: Mm-hmm. >> And that's what Amazon wants you to do? But you're giving customers a choice, is what I'm-- >> Mark: We have, yeah, no, I mean, look, at the end of the day I think both VMware and AWS believe strongly in understanding what customers are looking for and making sure we're delivering that value to them. And I think this is one of the compelling new options that we've enabled for customers with VMware Cloud on AWS: we can take a migration project that would have previously taken three years and do it in a few months. >> You know, Mark, I had a chance to talk to Carl Eschenbach two weeks ago before the show. He came in for an interview; he's at Sequoia Capital now. Carl Eschenbach, former COO of VMware, was there for years. He was part of the deal with AWS, crafting that deal. We were talking about the moment in time when your stock price started to move up, October 2016. That's right when the deal was announced. Since then the stock price has been up, for a lot of reasons; we've talked about them on theCUBE before. The question I have for you is, what have you learned? What surprises you from this relationship? Because one, the clarity was easy: vCloud Air, no more. This is our cloud strategy. All in on AWS, and multicloud as it develops; you've certainly had to clarify that with customers. But now that you've entered the managed service, what new things have popped up that might not have been on your radar? What did you expect?
What are some surprises from this relationship, from a customer behavior standpoint? >> Yeah, that's a really interesting question. So, I think in the early days we sort of had this concept of, "Hey, let's enable the full VMware capabilities on AWS." And we were sort of talking about it almost like a technical solution, right? What we could enable. I think what quickly became apparent is, hey, behind that technical approach there are actually some really compelling use cases here. And if I think back to two years ago, I don't think we fully anticipated how compelling this cloud migration use case would be. I mean, I don't think we really realized internally within VMware how hard it was for customers before to do that. And I think customers didn't realize how much easier and faster and lower cost we could make it for them with this type of service. So I think that one, although we were maybe talking about it a little bit in the early days, surprised me, at least in how broad-based the customer interest was in that type of capability. >> Any other broader market interest, things that were surprises or not surprises that are compelling? >> I mean, the other thing, I wouldn't say it's a surprise per se, but I think the partnership with AWS has been fantastic. Right, we sort of went into it, I think, in the right way between Pat and Andy, focused on doing something meaningful together. The relationship has only gotten deeper and deeper over time. And one of the interesting things about it is that the relationship spans not just engineering and product management and product strategy, which is sort of my neck of the woods, but also the marketing organizations, the sales organizations, the support organizations. So it's become, I think, a very deep partnership. We're able to speak to each other very openly and try to solve together the problems that customers are putting in front of us. >> And what about Outposts, what's the new update on Outposts? >> Yeah, yeah, so, no news on Outposts today, obviously, but we're working very closely with AWS to enable the VMware Cloud on AWS Outposts model in the second half of this year. And the customer interest has just been fantastic, right. And in many ways it's basically the exact same value prop as VMC on AWS in terms-- >> In reverse. >> But in reverse, and anywhere you want, right, at your doorstep, right, any edge, any data center, so. >> I've got to ask you, back to the AWS relationship. We were big fans of it, always have been. We've learned from both sides and believe in it. Having said that, EC2 is the bread and butter for Amazon despite its hundreds and hundreds of services. That's where their revenue comes from, and compute, your compute business, is significant. So my question is, is it a zero-sum game long-term, or when you look at the TAM, do you see all these other services that you can sell longer term providing the growth engine for your respective companies? Or does this whole rising tide lift both boats? What are your thoughts on that? >> Yeah, I mean, it's clearly rising tide lifts both boats. I mean, again, I always bring it back to the customer, right, 'cause that's the way I like to view the world, and AWS-- >> And you've got some evidence now, that's why I'm asking. >> Yeah, and what you're seeing, actually, I mean, take some of these customer examples.
Let me give you one from the UK. So, Stagecoach. I don't know if you've heard about these guys, but they're a major, so they provide transportation services in the UK, and other countries as well. So, they run a network of buses and trains, and they're responsible for the transportation of three million commuters every day in the UK. So, they have this really mission-critical application that they're building that is basically responsible for scheduling those buses and those trains, and scheduling the conductors and the operators. So you can imagine this application is super mission-critical for their business, right. And they chose to run that application on VMware Cloud on AWS, and one of the reasons they chose it is because we have a unique capability called stretched clustering. >> Sure. >> Which says, "Hey, even if there's an issue in one AZ, we can restart that application in a second AZ." So there's a really good reason for the customer to choose it. But now back to your question, right? If you think about the opportunity in that for both VMware and AWS, it's meaningful, right? You know, for us, we're selling the entire VMware Cloud on AWS service to that customer across those two AZs for a mission-critical workload that's core to their business. For AWS, they're able to of course not only supply the infrastructure that we run on top of, but also, as that customer looks to do more interesting things, they can attach additional native AWS services, right? So I think that's a great example where, if you deliver value to the customer and focus on that, the right things will kind of flow back to the companies that helped make that possible. >> Good partnering helps you reduce friction and get to market faster. Thinking about the intense effort that both, you know, Pat's described, Andy Jassy described, you've described in terms of that partnership, that deep engineering: can you do multiples of those, or is it that you don't, out of respect for the partnership, or is it too intense and too resource-intensive? How many of these types of partnerships can you actually have? >> Well, I mean, I think Pat has said it pretty clearly, right? AWS is our primary preferred partner, right. And what we're doing with them is very unique, right? And it's something that we want to make sure that we have the right level of investment in and that we do an amazingly good job of, right. And I think they feel the same way. And having that focus together between the two companies, I think, is what has allowed us to achieve the level of success we've had to date, and we expect to do that going forward. >> Mark, final question for you. What's your objective this year in your business unit? What's your focus? What are some of the things that you're working on that people should know about? >> Yeah, so first of all, on VMware Cloud on AWS, just to wrap that up: I think the big thing we're focused on going forward is really this modernization piece of the story. How do we enable native Kubernetes in the service? How do we enable ML and AI workloads in the service? How do we do a better job of connecting to all of the AWS services? So you're going to see a big focus there. Beyond VMware Cloud on AWS, we're really excited about bringing this VMC model back on-prem, both with Dell EMC and on top of AWS Outposts. I mean, the customer interest has been, you know, fantastic, right?
And, you think about all the reasons that customers want to be able to run their applications on-prem: data locality, latency, compliance, all sorts of really good reasons. We think that those services have really hit a sweet spot of that market. >> IT as a managed service, what an interesting idea, don't you think? (laughing) >> Mark: Yeah. >> Whole other level, same game, whole new ball game, right? >> Absolutely! >> Mark, thanks for sharing your insight. Congratulations on your success, and we'll be following it. VMware's managed cloud on AWS is certainly a big hit. It changed the game for the company, and now they're bringing it to Dell EMC, among other potential business model opportunities for customers, as Cloud 2.0 arrives. This is theCUBE's coverage, live at VMworld 2019; we'll be right back with more from San Francisco after this short break. (bright music)

Published Date : Aug 26 2019


DDN Crowdchat | October 11, 2018


 

(uptempo orchestral music) >> Hi, I'm Peter Burris, and welcome to another Wikibon theCUBE special feature, a special digital community event on the relationship between AI, infrastructure and business value. Now, it's sponsored by DDN with participation from NVIDIA, and over the course of the next hour, we're going to reveal something about this special and evolving relationship between sometimes tried-and-true storage technologies and the emerging potential of AI as we try to achieve these new business outcomes. So to do that we're going to start off with a series of conversations with some thought leaders from DDN and from NVIDIA, and at the end, we're going to go into a crowd chat, and this is going to be your opportunity to engage these experts directly. Ask your questions, share your stories, find out what your peers are thinking and how they're achieving their AI objectives. That's at the very end, but to start, let's begin the conversation with Kurt Kuckein, who is a senior director of marketing at DDN. >> Thanks, Peter, happy to be here. >> So tell us a little bit about DDN at the start. >> So DDN is a storage company that's been around for 20 years. We've got a legacy in high performance computing, and that's where we see a lot of similarities with this new AI workload. DDN is well known in that HPC community. If you look at the top 100 supercomputers in the world, we're attached to 75% of them. And so we have a fundamental understanding of that type of scalable need; that's where we're focused. We're focused on performance requirements. We're focused on scalability requirements, which can mean multiple things. It can mean the scaling of performance. It can mean the scaling of capacity, and we're very flexible. >> Well, let me stop you and say, so you've got a lot of customers in the high performance world, and a lot of those customers are at the vanguard of moving to some of these new AI workloads. What are customers saying? With this significant engagement that you have with the best and the brightest out there, what are they saying about this transition to AI? >> Well, I think it's fascinating, in that we have a bifurcated customer base here. We have those traditionalists who probably have been looking at AI for over 40 years; they've been exploring this idea and they've gone through the peaks and troughs in the promise of AI, and then contraction because CPUs weren't powerful enough. Now we've got this emergence of GPUs in the supercomputing world, and if you look at how the supercomputing world has expanded in the last few years, it is through investment in GPUs. And then we've got an entirely different segment, which is a much more commercial segment, and they may be newly invested in this AI arena. They don't have the legacy of 30, 40 years of research behind them, and they are trying to figure out exactly what do I do here. A lot of companies are coming to us: hey, I have an AI initiative. Well, what's behind it? We don't know yet, but we've got to have something. And they're trying to understand where this infrastructure is going to come from. >> So general availability of AI technologies, and obviously flash has been a big part of that, very high speed networks within data centers, virtualization certainly helps as well, now opens up the possibility of bringing to the enterprise these algorithms, some of which have been around for a long time and used to require very specialized, bespoke configurations of hardware. That still begs the question.
There are some differences between high performance computing workloads and AI workloads. Let's start with what the similarities are, and then let's explore some of the differences. >> So the biggest similarity, I think, is that it's an intractably hard IO problem. At least from the storage perspective, it requires a lot of high throughput, depending on what those IO characteristics are. It can be very small-file, IOPS-intensive type workflows, but it needs the ability of the entire infrastructure to deliver all of that seamlessly from end to end. >> So really high performance throughput, so that you can get to the data you need and keep this computing element saturated. >> Keeping the GPU saturated is really the key. That's where the huge investment is. >> So how do AI and HPC workloads differ? >> So how they are fundamentally different is, often AI workloads operate on a smaller scale in terms of the amount of capacity, at least today's AI workloads, right? As soon as a project encounters success, our forecast is that those things will take off, and you'll want to apply those algorithms against bigger and bigger data sets. But today, we encounter things like 10 terabyte data sets, 50 terabyte data sets, and a lot of customers are focused only on that. But what happens when you're successful? How do you scale your current infrastructure to petabytes and multi-petabytes when you need it in the future? >> So when I think of HPC, I think of often very, very big batch jobs, very, very large complex datasets. When I think about AI, like image processing or voice processing, whatever else it might be, it's lots of small files, randomly accessed, that nonetheless require some very complex processing that you don't want to have to restart all the time, and a degree of tuning to make sure you keep the hardware busy. Have I got that right? >> You've got that right. Now, one misconception, I think, is on the HPC side: that whole random small file thing has come in in the last five, 10 years, and it's something DDN has been working on quite a bit. Our legacy was in high performance throughput workloads, but the workloads have evolved so much on the HPC side as well, and as you posited at the beginning, so much of it has become AI and deep learning research.
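To make the "keep the GPU saturated" point concrete, here is a minimal sketch of the input-pipeline pattern it implies for a TensorFlow workload: many files read in parallel, decoded on the CPU, and prefetched so storage IO overlaps with GPU compute. The mount path, record schema and image shape are hypothetical placeholders, not anything DDN or NVIDIA prescribe.

```python
# A minimal sketch (assumed names and paths) of an input pipeline that
# keeps a GPU fed: parallel file reads, parallel decode, and prefetching
# so IO overlaps with training compute.
import tensorflow as tf

FILE_PATTERN = "/mnt/parallel_fs/train/*.tfrecord"  # hypothetical shared mount

def parse(example_proto):
    # Hypothetical record schema: a JPEG image plus an integer label.
    features = {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    }
    parsed = tf.io.parse_single_example(example_proto, features)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image, parsed["label"]

dataset = (
    tf.data.Dataset.list_files(FILE_PATTERN, shuffle=True)
    .interleave(tf.data.TFRecordDataset,
                num_parallel_calls=tf.data.AUTOTUNE)   # parallel file reads
    .map(parse, num_parallel_calls=tf.data.AUTOTUNE)   # parallel decode
    .shuffle(10_000)
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)                        # overlap IO with compute
)
```

The parallel `interleave` and `prefetch` stages are exactly where a high-throughput, low-latency file system pays off: the deeper the pipeline can queue small random reads, the easier it is to hold GPU utilization at the ceiling.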
>> Right, so they look a lot more alike. >> They do look a lot more alike. >> So if we think about the evolving relationship now between some of these new data-first workloads, AI-oriented, change-the-way-the-business-operates type of stuff: what do you anticipate is going to be the future of the relationship between AI and storage? >> Well, what we foresee really is that the explosion in AI needs and AI capability is going to mimic what we already see, and really drive what we see on the storage side. We've been showing that graph for years and years of just everything going up and to the right, but as AI starts working on itself and improving itself, as the collection mechanisms keep getting better and more sophisticated, and have increased resolutions, whether you're talking about cameras or, in life sciences, acquisition, capabilities just keep getting better and better and the resolutions get better and better. It's more and more data, right, and you want to be able to expose a wide variety of data to these algorithms. That's how they're going to learn faster. And so what we see is that the data-centric part of the infrastructure is going to need to scale, even if you're starting today with a small workload. >> Kurt, thank you very much, great conversation. How does this turn into value for users? Well, let's take a look at some use cases that come out of these technologies. >> DDN A3I with NVIDIA DGX-1 is a fully integrated and optimized technology solution that provides end-to-end acceleration for a wide variety of AI and DL use cases at any scale. The platform provides tremendous flexibility and supports a wide variety of workflows and data types. Already today, customers in industry, academia and government all around the globe are leveraging DDN A3I with NVIDIA DGX-1 for their AI and DL efforts. In this first example use case, DDN A3I with NVIDIA DGX-1 enables a life sciences research laboratory to accelerate a microscopy capture and analysis pipeline. On the top half of the slide is the legacy pipeline, which displays low-resolution results from a microscope with a three-minute delay. On the bottom half of the slide is the accelerated pipeline, where DDN A3I with NVIDIA DGX-1 delivers results in real time, 200 times faster and with much higher resolution than the legacy pipeline. This use case demonstrates how a single-unit deployment of the solution can enable researchers to achieve better science and the fastest time to results without the need to build out complex IT infrastructure. The white paper for this example use case is available on the DDN website. In the second example use case, DDN A3I with NVIDIA DGX-1 enables an autonomous vehicle development program. The process begins in the field, where an experimental vehicle generates a wide range of telemetry that's captured on a mobile deployment of the solution. The vehicle data is used to train capabilities locally in the field, which are transmitted to the experimental vehicle. Vehicle data from the fleet is captured to a central location, where a large DDN A3I with NVIDIA DGX-1 solution is used to train more advanced capabilities, which are transferred back to experimental vehicles in the field. The central facility also uses the large data sets in the repository to train experimental vehicles in simulated environments to further advance the AV program. This use case demonstrates the scalability, flexibility and edge-to-data-center capability of the solution. DDN A3I with NVIDIA DGX-1 brings together industry-leading compute, storage and network technologies in a fully integrated and optimized package that makes it easy for customers in all industries around the world to pursue breakthrough business innovation using AI and DL. >> Ultimately, this industry is driven by what users must do, the outcomes they seek. But it's always made easier and faster when you've got great partnerships working on some of these hard technologies together. Let's hear how DDN and NVIDIA are working together to try to deliver new classes of technology capable of making these AI workloads scream. Specifically, we've got Kurt Kuckein coming back; he's a senior director of marketing for DDN. And Darrin Johnson, who is global director of technical marketing for NVIDIA in the enterprise and deep learning. Today, we're going to be talking about what infrastructure can do to accelerate AI, and specifically we're going to use an emerging relationship between DDN and NVIDIA to describe what we can do to accelerate AI workloads by using higher-performance, smarter and more focused infrastructure for computing. Now to have this conversation, we've got two great guests here.
We've got Kurt Kuckein, who is the senior director of marketing at DDN, and also Darrin Johnson, who's the global director of technical marketing for enterprise at NVIDIA. Kurt, Darrin, welcome to theCUBE. >> Thank you very much. >> So let's get going on this, 'cause this is a very, very important topic, and I think it all starts with this notion that there is a relationship that you guys have put forward. Kurt, why don't you describe it. >> Sure, well, so what we're announcing today is DDN's A3I architecture powered by NVIDIA. So it is a full rack-level solution, a reference architecture that's been fully integrated and fully tested to deliver an AI infrastructure very simply, very completely. >> So if we think about why this is important: AI workloads clearly put special stress on the underlying technology. Darrin, talk to us a little bit about the nature of these workloads and why, in particular, things like GPUs and other technologies are so important to make them go fast. >> Absolutely, and as you probably know, AI is all about the data. Whether you're doing medical imaging, whether you're doing natural language processing, whatever it is, it's all driven by the data. The more data that you have, the better results that you get, but to drive that data into the GPUs, you need greater IO, and that's why we're here today to talk about DDN and the partnership, and how to bring that IO to the GPUs on our DGX platforms. >> So if we think about what you describe: a lot of small files, often randomly distributed, with nonetheless very high-profile jobs that just can't stop midstream and start over. >> Absolutely, and if you think about the history of high performance computing, which is very similar to AI, really IO is just that. Lots of files. You have to get it there. Low latency, high throughput, and that's why DDN's nearly 20 years of experience working in that exact same domain is perfect, because you get the parallel file system, which gives you that throughput, gives you that low latency. It just helps drive the GPU. >> So you mentioned HPC and 20 years of experience. Now, it used to be that in HPC you'd have a scientist with a bunch of graduate students setting up some of these big, honking machines, but now we're moving into the commercial domain. You don't have graduate students running around. You have very capable people, a lot of administrators, nonetheless quick people, but with a lot to learn. So how does this relationship actually start making, or bringing, AI within reach of the commercial world? Kurt, why don't you-- >> Yeah, that's exactly where this reference architecture comes in. So a customer doesn't need to start from scratch. They have a design now that allows them to quickly implement AI. It's something that's really easily deployable. We fully integrated the solution. DDN has made changes to our parallel file system appliance to integrate directly with the DGX-1 environment. That makes it even easier to deploy, and to extract the maximum performance out of this without having to run around tuning a bunch of knobs and changing a bunch of settings. It's really going to work out of the box. >> And NVIDIA has done more than the DGX-1. It's more than hardware. You've done a lot of optimization of different AI toolkits, et cetera, so talk a little bit about that, Darrin. >> Going back to your example of researchers in the past with HPC: what we have today are data scientists. Data scientists understand PyTorch, they understand TensorFlow, they understand the frameworks.
They don't want to have to understand the underlying file system, networking, RDMA, InfiniBand, any of that. They just want to be able to come in, run their TensorFlow, get the data, get the results, and just keep churning on that, whether it's a single GPU or 90 DGXs or as many DGXs as you want. So this solution helps bring that to customers much more easily, so those data scientists don't have to be system administrators. >> So roughly it's the architecture that makes things easier, but it's more than just for some of these commercial things. It's also the overall ecosystem: new applications firing up, application developers. How is this going to impact the aggregate ecosystem that's growing up around the need to deliver AI-related outcomes? >> Well, I think one point Darrin was getting to there, and one of the big effects, is also as these ecosystems reach a point where they're going to need to scale. That's somewhere DDN has tons of experience. So many customers are starting off with smaller datasets. They still need the performance; a parallel file system in that case is going to deliver that performance. But then also as they grow, going from one GPU to 90 DGXs is going to demand an incredible amount of both performance scalability from their IO as well as, probably, capacity scalability. And that's another thing we've made easy with A3I: being able to scale that environment seamlessly within a single namespace, so that people don't have to deal with, again, a lot of tuning and turning of knobs to make this stuff work really well and drive those outcomes that they need as they're successful. In the end, it is the application that's most important to both of us, right? It's not the infrastructure. It's making the discoveries faster. It's processing information out in the field faster. It's doing analysis of the MRI faster. Helping the doctors, helping anybody who is using this to really make faster decisions, better decisions. >> Exactly. >> And just to add to that, in the automotive industry, you have datasets that are 50 to 500 petabytes, and you need access to all that data, all the time, because you're constantly training and retraining to create better models, to create better autonomous vehicles, and you need the performance to do that. DDN helps bring that to bear, and this reference architecture simplifies it, so you get the value-add of NVIDIA GPUs plus its ecosystem software plus DDN. It's a match made in heaven. >> Kurt, Darrin, thank you very much. Great conversation. To learn more about what they're talking about, let's take a look at a video created by DDN to explain the product and the offering. >> DDN A3I with NVIDIA DGX-1 is a fully integrated and optimized technology solution that enables and accelerates end-to-end data pipelines for AI and DL workloads of any scale. It is designed to provide extreme amounts of performance and capacity, backed by a jointly engineered and validated architecture. Compute is the first component of the solution. The DGX-1 delivers over one petaflop of DL training performance, leveraging eight NVIDIA Tesla V100 GPUs in a 3RU appliance. The GPUs are configured in a hybrid cube mesh topology using the NVIDIA NVLink interconnect. DGX-1 delivers linearly predictable application performance and is powered by the NVIDIA DGX software stack. DDN A3I solutions can scale from single to multiple DGX-1s. Storage is the second component of the solution.
The DDN AI200 is an all-NVMe parallel file storage appliance that's optimized for performance. The AI200 is specifically engineered to keep GPU computing resources fully utilized. The AI200 ensures maximum application productivity while easily managing day-to-day data operations. It's offered in three capacity options in a compact 2U chassis. An AI200 appliance can deliver up to 20 gigabytes a second of throughput and 350,000 IOPS. The DDN A3I architecture can scale up and out seamlessly over multiple appliances. The third component of the solution is a high-performance, low-latency, RDMA-capable network. Both EDR InfiniBand and 100 gigabit ethernet options are available. This provides flexibility, ensuring seamless scaling and easy integration of the solution within any IT infrastructure. DDN A3I solutions with NVIDIA DGX-1 bring together industry-leading compute, storage and network technologies in a fully integrated and optimized package that's easy to deploy and manage. It's backed by deep expertise and enables customers to focus on what really matters: extracting the most value from their data with unprecedented accuracy and velocity.
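As a rough illustration of how those figures combine, here is a back-of-the-envelope sizing sketch. The appliance numbers come from the narration above (up to roughly 20 GB/s and 350,000 IOPS per AI200); the per-DGX-1 demand defaults are assumptions for illustration only, not vendor sizing guidance.

```python
# Back-of-the-envelope sizing sketch using the appliance figures quoted
# above (~20 GB/s and ~350,000 IOPS per AI200). The per-DGX-1 demand
# defaults below are illustrative assumptions, not vendor guidance.
import math

AI200_THROUGHPUT_GBS = 20.0
AI200_IOPS = 350_000

def ai200_count(num_dgx1, per_dgx_gbs=10.0, per_dgx_iops=100_000):
    """Estimate how many AI200 appliances keep num_dgx1 systems saturated."""
    by_throughput = num_dgx1 * per_dgx_gbs / AI200_THROUGHPUT_GBS
    by_iops = num_dgx1 * per_dgx_iops / AI200_IOPS
    return max(1, math.ceil(max(by_throughput, by_iops)))

for n in (1, 2, 4, 9):
    print(f"{n} x DGX-1 -> ~{ai200_count(n)} x AI200")
```

The point is only that whichever resource saturates first, throughput or IOPS, sets the appliance count, which is why the narration quotes both numbers.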
>> Always great to hear about the product. Let's hear the analyst's perspective. Now I'm joined by Dave Vellante, my colleague here at Wikibon and co-CEO of SiliconANGLE. Dave, welcome to theCUBE. Dave, a lot of conversations about AI. What is it about today that is making AI so important to so many businesses? >> Well, I think it's three things, Peter. The first is the data. We've been on this decade-long Hadoop bandwagon, and what that did is really focus organizations on putting data at the center of their business, and now they're trying to figure out, okay, how do we get more value out of that? So the second piece of that is technology is now becoming available. AI of course has been around forever, but the infrastructure to support it, GPUs, the processing power, flash storage, deep learning frameworks like TensorFlow, has really started to come to the marketplace. So the technology is now available to act on that data, and I think the third is people are trying to get digital right. This is about digital transformation. Digital meets data. We talk about that all the time, and every corner office is trying to figure out what their digital strategy should be. So they're trying to remain competitive, and they see automation and artificial intelligence, machine intelligence, applied to that data as a linchpin of their competitiveness. >> So a lot of people talk about the notion of data as a source of value, and the presumption in some quarters is that it's all going to the cloud. Is that accurate? >> Oh, it's funny that you say that, because as you know, we've done a lot of work on this, and I think the thing organizations have realized in the last 10 years is that the idea of bringing five megabytes of compute to a petabyte of data is far more valuable. And as a result the pendulum is really swinging in many different directions, one being the edge; data is going to stay there. And certainly the cloud is a major force. But most of the data still today lives on premises, and that's where most of the data is likely going to stay. And so no, all the data is not going to go into the cloud. >> It's not the central cloud? >> That's right, the central public cloud. You can redefine the boundaries of the cloud, and the key is you want to bring that cloud-like experience to the data. We've talked about that a lot in the Wikibon and theCUBE communities, and that's all about simplification and cloud business models. >> So that suggests pretty strongly that there is going to continue to be a relationship between choices about hardware infrastructure on premises and the success at making some of these advanced, complex workloads run and scream, and really drive some of those innovative business capabilities. As you think about that, what is it about AI technologies, or AI algorithms and applications, that has an impact on storage decisions? >> Well, the characteristics of the workloads: oftentimes it's going to be largely unstructured data, and it's going to be small files. There are going to be a lot of those small files, and they're going to be randomly distributed, and as a result, that's going to change the way in which people design systems to accommodate those workloads. There's going to be a lot more bandwidth; there's going to be a lot more parallelism in those systems in order to accommodate those workloads and keep those processors busy. And we're going to talk about it, but the workload characteristics are changing, so the fundamental infrastructure has to change as well. >> And so our goal ultimately is to ensure that we keep these new high-performing GPUs saturated by flowing data to them without a lot of spiky performance throughout the entire subsystem. Have we got that right? >> Yeah, I think that's right, and that's what I was talking about with parallelism; that's what you want to do. You want to be able to load up that processor, especially these alternative processors like GPUs, and make sure that they stay busy. The other thing is, when there's a problem, you don't want to have to restart the job. So you want to have real-time error recovery, if you will. And that's been crucial in the high performance world for a long, long time, because these jobs, as you know, take a long, long time. To the extent that you don't have to restart a job from ground zero, you can save a lot of money. >> Yeah, especially, as you said, as we start to integrate some of these AI applications with the operational applications that are actually recording the results of the work that's being performed, or the prediction that's being made, or the recommendation that's being offered. So I think ultimately, if we start thinking about this crucial role that AI workloads are going to have in business, and that storage is going to have on AI, moving more processing closer to the data, et cetera, that suggests that there are going to be some changes in the offerings from the storage industry. What is your thinking about how the storage industry is going to evolve over time? >> Well, there's certainly a lot of hardware stuff that's going on. We always talk about software-defined, but the hardware stuff matters. Obviously flash storage changed the game from spinning mechanical disk, and that's part of this. Also, as I said before, we're seeing a lot more parallelism; high bandwidth is critical. A lot of the discussion that we're having in our community is the affinity between HPC, high performance computing, and big data. I think that was pretty clear, and now that's evolving to AI. So the internal network, things like InfiniBand, are pretty important; NVMe is coming onto the scene. So those are some of the things that we see. I think the other one is file systems. NFS tends to deal really well with unstructured data and data that is sequential. When you have all the--
>> Exactly, and you have all this what we just describe as random nature and you have the need for parallelism. You really need to rethink file systems. File systems are again a lynch pan of getting the most of these AI workloads, and the others if we talk about the cloud model. You got to make this stuff simple. If we're going to bring AI and machine intelligence workloads to the enterprise, it's got to be manageable by enterprise admins. You're not going to be able to have a scientist be able to deploy this stuff, so it's got to be simple or cloud like. >> Fantastic, Dave Vellante, Wikibon. Thanks for much for being on theCUBE. >> My pleasure. >> We've had he analyst's perspective. Now tells take a look at some real numbers. Not a lot of companies has delivered a rich set of bench marks relating AI, storage and business outcomes. DDN has, let's take a video that they prepared describing the bench mark associated with these new products. >> DDN A3I within video DGX-1 is a fully integrated and optimized technology solution that provides massive acceleration for AI and DL applications. DDN has engaged extensive performance and interoperable testing programs in close collaboration with expert technology partners and customers. Performance testing has been conducted with synthetic throughputs in IOPS workloads. The results demonstrate that the DDN A3I parallel architecture delivers over 100,000 IOPS and over 10 gigabytes per second of throughput to a single DGX-1 application container. Testing with multiple container demonstrates linear scaling up to full saturation of the DGX-1 Zyo capabilities. These results show concurrent IO activity from four containers with an aggregate delivered performance of 40 gigabytes per second. The DDN A3I parallel architecture delivers true application acceleration, extensive interoperability and performance testing has been completed with a dozen popular DL frameworks on DGX-1. The results show that with the DDN A3I parallel architecture, DL applications consistently achieve a higher training throughput and faster completion times. In this example, Caffe achieves almost eight times higher training throughput on DDN A3I as well it completes over five times faster than when using a legacy file sharing architecture and protocol. Comprehensive test and results are fully documented in the DDN A3I solutions guide available from the DDN website. This test illustrates the DGX-1 GPU utilization and read activity from the AI 200 parallel storage appliance during a TensorFlow training integration. The green line shows that the DGX-1 be used to achieve maximum utilization throughout the test. The red line shows the AI200 delivers a steady stream of data to the application during the training process. In the graph below, we show the same test using a legacy file sharing architecture and protocol. The green line shows that the DGX-1 never achieves full GPU utilization and that the legacy file sharing architecture and protocol fails to sustain consistent IO performance. These results show that with DDN A3I, this DL application on the DGX-1 achieves maximum GPU product activity and completes twice as fast. This test then resolved is also documented in the DDN A3I solutions guide available from the DDN website. DDN A3I solutions within video DGX-1 brings together industry meaning compute, storage and network technologies in a fully integrated and optimized package that enables widely used DL frameworks to run faster, better and more reliably. 
>> You know, it's great to see real benchmarking data, because this is a very important domain, and there is not a lot of benchmarking information out there around some of these other products that are available. But let's try to turn that benchmarking information into business outcomes, and to do that we've got Kurt Kuckein back from DDN. Kurt, welcome back. Let's talk a bit about how these high-value outcomes that businesses seek with AI are going to be achieved as a consequence of this new performance, faster capabilities, et cetera. >> So there are a couple of considerations. The first consideration, I think, is just the selection of AI infrastructure itself. Right, we have customers telling us constantly that they don't know where to start. Now they have readily available reference architectures that tell them, hey, here's something you can implement, get installed quickly, and you're up and running your AI from day one. >> So the decision process for what to get is reduced. >> Exactly. >> Okay. >> Number two is, you're unlocking all ends of the investment with something like this, right? You're maximizing the performance on the GPU side, you're maximizing the performance on the ingest side for the storage, you're maximizing the throughput of the entire system. So you're really gaining the most out of your investment there. And not just gaining the most out of your investment, but truly accelerating the application, and that's the end goal, right, that we're looking for with customers. Plenty of people can deliver fast storage, but if it doesn't impact the application and deliver faster results, cut run times down, then what are you really gaining from having fast storage? And so that's where we're focused. We're focused on application acceleration. >> So simpler architecture, faster implementation based on that, integrated capabilities, ultimately all resulting in better application performance. >> Better application performance, and in the end something that's more reliable as well. >> Kurt Kuckein, thanks so much for being on theCUBE again. So that ends our prepared remarks. We've heard a lot of great stuff about the relationship between AI, infrastructure, especially storage, and business outcomes, but here's your opportunity to go into the crowd chat and ask your questions, get your answers, share your stories, engage your peers and some of the experts we've been talking with about this evolving relationship between these key technologies and what it's going to mean for business. So I'm Peter Burris. Thank you very much for listening. Let's step into the crowd chat and really engage and get those key issues addressed.

Published Date : Oct 10 2018
