
Search Results for John Fanelli:

John Fanelli and Maurizio Davini Dell Technologies | CUBE Conversation, October 2021


 

>>Yeah. >>Hello and welcome to this special CUBE Conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We have a conversation around AI for the enterprise and what it means, and I've got two great guests: John Fanelli, Vice President of Virtual GPU at NVIDIA, and Maurizio Davini, CTO of the University of Pisa in Italy, a practitioner, customer, and partner. VMworld is coming up and there's a lot of action happening in the enterprise. John, great to see you. Maurizio, nice to meet you, coming in remotely from Italy. >>John, thanks for having us on again. >>Yes, nice to meet you. >>I wish we could be in person, face to face, but that's coming soon, hopefully. John, we were just talking before we came on camera about AI for the enterprise, and the last time I saw you in person was in a CUBE interview where we were talking about some of the work you were doing in AI. It's gotten so much stronger and broader, and NVIDIA's execution and the success you're having show it. Set the table for us: what is the AI-for-the-enterprise conversation? >>Sure. We've been working with enterprises on how they can deliver AI, explore AI, or get involved in AI in a standard way, the way they're used to managing and operating their data center, riding on top of their Dell servers with VMware vSphere, so that AI feels like a standard workload the IT organization can deliver to their engineers and data scientists. The flip side of that, of course, is ensuring that engineers and data scientists get the workloads provisioned to them, or have access to them, in the way that they need them. It's no longer a trouble ticket you have to submit to IT while you count the hours or days or weeks until you can get new hardware. By pulling it into the mainstream data center, IT can enable self-service provisioning for those folks. So we make AI more consumable and easier to manage for IT administrators, and for the engineers and data scientists we make it easy to get access to those resources so they can get to their work right away. >>Quite a bit of progress in the past two years. Congratulations on that, and it's only the beginning; it's still day one. Maurizio, I want to ask you about what's going on as the CTO of the university and what's happening down there. Tell us a little bit about it. You have the center of excellence there. What does that mean? What does that include? >>You know, the University of Pisa is one of the biggest and oldest in Italy. To give you some numbers, it's around 50,000 students and 3,000 staff between professors, researchers, and administrative staff. We look after the operation of the data centers and especially support for scientific computing, and this is our daily work. It takes a lot of our time, but we are able to reserve a certain percentage of our time for R&D, and this is where the center of excellence comes in. We are always looking into new kinds of technologies that we can put together to build new solutions for next-generation computing. As we always say, we are looking for the right partners to do things together, and at the end of the day the work that is good for us is good for our partners, and it typically ends up in a production system for our university.
It is the evolution of the scientific computing environment that we have. >>Yeah, and you have a great track record and reputation for R&D, testing software and hardware combinations, and sharing those best practices. With COVID impacting the world, we certainly see it on the supply chain side. And John, we've heard Jensen, your CEO at NVIDIA, talk in multiple keynotes now about software and NVIDIA being a software company. You mentioned Dell and VMware. COVID has brought this virtualization world back, and now hybrid. Those are words we used mostly inside the tech industry; now you're hearing hybrid and virtualization kicked around in the real world. It's ironic that VMware, Dell, and theCUBE are all of us doing more virtual stuff together. So with COVID impacting the world, how does that change things for you? Because software is more important, and you've got to leverage the hardware you've got, whether it's Dell or in the cloud. This is a huge change. >>Yeah. As you mentioned, organizations and enterprises are looking at things differently now, including the idea of hybrid. When you talk to tech folks, we always think about hybrid in terms of how the different technology works. What we're hearing from customers is that hybrid effectively translates into two days in the office and three days remote once they actually start going back to the office. So hybrid work is actually driving the need for hybrid IT, or the ability to share resources more effectively, and to think about having resources wherever you are. Whether you're working from home or you're in the office that day, you need access to the same resources, and that's where the ability to virtualize those resources and provide that access makes the hybrid part seamless. >>Maurizio, your world has really changed. You have students and faculty. Things used to be easy in the old days: physical, on this network or that network. Now it's virtual everywhere. You must really be seeing the impact. >>Yeah, of course, as you can imagine it has had a big impact on every kind of IT offering, from designing and deploying new networking technologies to new kinds of operations. We found that we were no longer able to do bare-metal operations directly. But from the IT point of view we were, how can I say, prepared, in the sense that for three or four years we have run parallel environments, bare metal and virtual. So, as you can imagine, a traditional bare-metal HPC cluster, DGX machines, multi-GPU systems and so on, but in parallel we developed a virtual environment that at the beginning was, as you can imagine, used for traditional enterprise applications and VDI. We have a significant farm with NVIDIA GRID for remote desktops and remote workstations that we are using, for example, to deliver virtual classrooms and virtual workstations. That was the typical operation we did in the virtual world.
But on the same infrastructure we were able to develop, first, HPC in the virtual world, a way of making the HPC resources available to our researchers, and in the end an AI offering and AI software for our researchers. You can imagine our virtual infrastructure as a sort of whiteboard where we are able to design new solutions in a fast way without losing too much performance, and in the case of AI we will see that the performance is almost the same as bare metal, but with all the flexibility that we needed in the COVID-19 world, and in the future world too. >>A couple of things I want to get John's thoughts on as well: performance, you mentioned it, and hybrid, virtual. How do VMware and NVIDIA fit into all this as you put it together? Because you bring up performance, and that's now table stakes. Scale and performance are really on the table; everyone's looking at it. John, how do VMware and NVIDIA fit in with the university's work? >>Sure. I think you're right: when mainstream enterprises begin their initial foray into AI, performance and scale, and also ease of use and familiarity, are all things that come into play as an enterprise starts to think about it. We have a history with VMware working on this technology. In 2019 we introduced our Virtual Compute Server with VMware, which allowed us to effectively virtualize the CUDA compute driver. At last year's VMworld, in 2020, the CEOs of both companies got together and announced that we were going to bring NVIDIA's entire AI platform to the enterprise on top of vSphere. And we did that: starting in March this year, we finalized it with the introduction of VMware vSphere 7 Update 2 and, at that time, the early access of NVIDIA AI Enterprise, and we have now gone to production with both of those products. So customers like the University of Pisa are now using our production capabilities. Whenever you virtualize, particularly in something like AI where performance is really important, the first question that comes up is: does it work, and how quickly does it work? Or, from an IT audience, a lot of times you get: how much did it slow down? So we've worked really closely, from an NVIDIA software perspective and a VMware perspective, and we talk about NVIDIA AI Enterprise with vSphere 7 as optimized, certified, and supported. The net of that is we've been able to run the standard industry benchmarks for single-node as well as multi-node performance with maybe a 2% degradation in performance, depending on the workload. Of course it varies, but effectively you trade that small performance cost for accessibility, ease of use, and even things like vRealize Automation for self-service for the data scientists. That's how we've been pulling it together for the market. >>Great stuff. Well, I've got to ask you: people have that reaction about the performance, and I think you're being polite in how you said it; the expectation is kind of skeptical. So I've got to ask you, the impact of this is pretty significant. What is it that customers can do now that they couldn't, or didn't feel they could, before?
Because the expectation is: well, did it work? Fast means it works. But performance is always a concern. What's different now? What's the bottom-line impact? What can customers do now that they couldn't do before? >>The bottom-line impact is that AI is now accessible for the enterprise across what we call their mainstream data center. Enterprises typically use consistent building blocks, like the Dell VxRail products, where they use servers that are a common standard across the data center. Now, with NVIDIA AI Enterprise and VMware vSphere, they're able to manage their AI in the same way they're used to managing their data center today. There's no retraining, there are no separate clusters, and there isn't a shadow IT. This really allows an enterprise to deploy AI efficiently and cost-effectively, and because there's essentially no performance degradation, without compromising what their data scientists and researchers are looking for. The flip side is that the data scientist and researcher, using some of the self-service automation I spoke about earlier, can get a virtual machine today that has maybe half a GPU; as their models grow and they do more exploring, they might get a full GPU, or two GPUs in a virtual machine, and their environment doesn't change because it's all connected to the back-end storage. For the developer and the researcher, it's seamless. So it's really a win for both IT and the user. And again, the University of Pisa is doing some amazing things in terms of the workloads they're running, and they're validating that performance. >>Maurizio, weigh in on this. Share your reaction to that: what can you do now that you couldn't do before? Could you share your experience? >>Our experience is that, of course, if you go to your data scientists or researchers, the idea of sacrificing performance for flexibility is, at the beginning, not so well accepted. It's fine for IT management; as John was saying, you have people who know how to deal with the virtual infrastructure, so nothing changed for them. But at the end of the day we were able to test with our data scientists and researchers and verify that the performance was almost the same, really around 95% of the performance, for our internally developed workloads. So we are not dealing with benchmarks; we have workloads that are internally developed and applied to healthcare, a music generator, and some other unusual projects we have inside, and we were able to show that the performance in the virtual and bare-metal worlds was almost the same. In addition, in the virtual world you are much more flexible: you are able to reconfigure everything very fast, and you are able to design solutions for your researchers in a more flexible and effective way. We were able to use the latest technologies from Dell Technologies and NVIDIA: the latest PowerEdge servers, the latest cards from NVIDIA, the latest network cards from NVIDIA like the BlueField-2, the latest switches, to set up an infrastructure that at the end of the day is a winning platform for our data scientists. >>A great collaboration. Congratulations, exciting. Get the latest and greatest, and get the new benchmarks out there, new playbooks, new best practices.
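To picture the consumption model described just above, where a researcher's VM starts with a slice of a GPU and grows to a full GPU or several without the environment changing, here is a minimal, hypothetical sketch in Python. The slice sizes and the sizing rule are illustrative assumptions, not actual NVIDIA vGPU profiles or VMware configuration; in a real deployment the slice would be expressed as a vGPU profile attached to the VM.

```python
# Hedged sketch: choosing a GPU slice for a data scientist's VM as their
# models grow, following the growth path described above (part of a GPU,
# then a full GPU, then multiple GPUs per VM). Slice sizes are hypothetical
# placeholders, not real NVIDIA vGPU profile names.

from dataclasses import dataclass

# Assume a physical GPU with 40 GB of frame buffer that can be shared in
# fractional slices (hypothetical sizes; 40 GB equals a full GPU).
GPU_MEMORY_GB = 40
FRACTIONAL_SLICES_GB = [5, 10, 20, 40]


@dataclass
class VgpuRequest:
    """What a researcher's VM ends up with."""
    gpus: int       # number of physical GPUs attached to the VM
    slice_gb: int   # frame buffer per GPU slice


def size_vgpu(model_memory_gb: float) -> VgpuRequest:
    """Pick the smallest slice (or number of full GPUs) that fits the model."""
    for slice_gb in FRACTIONAL_SLICES_GB:
        if model_memory_gb <= slice_gb:
            return VgpuRequest(gpus=1, slice_gb=slice_gb)
    # Model no longer fits on one GPU: grow to a multi-GPU VM.
    gpus = -(-int(model_memory_gb) // GPU_MEMORY_GB)  # ceiling division
    return VgpuRequest(gpus=gpus, slice_gb=GPU_MEMORY_GB)


if __name__ == "__main__":
    for need_gb in (4, 18, 70):  # GPU memory a workload might need
        print(need_gb, "GB ->", size_vgpu(need_gb))
```

The point the speakers make is that only this sizing decision changes as the work grows; the researcher's VM image, tools, and storage connections stay the same.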
I do have to ask you, Maurizio, if you don't mind me asking: why look at virtualizing AI workloads? What's the motivation? Why did you look at virtualizing AI workloads? >>Oh, for the sake of flexibility. Because, you know, in the last couple of years AI resources are never enough. If you go after a bare-metal installation, you are going into a world that is developing very fast, but of course you cannot afford all the bare-metal infrastructure that your data scientists are asking for. So we decided to integrate our virtual infrastructure with AI resources in order to be able to use them in different ways, in a more flexible way. Of course, we have two parallel worlds: we still have a bare-metal infrastructure, and we are growing the bare-metal infrastructure, but at the same time we are growing our virtual infrastructure because it's flexible, because our staff are happy with how the platform behaves and they know how to deal with it, so they don't have to learn anything new. It's a sort of comfort zone for everybody. >>No one ever got hurt virtualizing things; it makes things go better and faster, building on those workloads. John, I've got to ask you, you're on the NVIDIA side and you see this up close. Why do people look at virtualizing AI workloads? Is it the unification benefit? AI implies a lot of things: it implies you have access to data, it implies that silos don't exist, and that's hard. Is this real? Are people actually looking at this? How is it working? >>Yeah. So again, for all the benefits and productivity AI brings today, AI can be pretty complex, right? It's complex software to set up and to manage. Within NVIDIA AI Enterprise, we're really focusing on ensuring that it's easier for organizations to use. For example, I mentioned we introduced Virtual Compute Server, vCS, two years ago, and that has seen some really interesting adoption in enterprise use cases. But what we found is that at the driver level it still wasn't accessible for the majority of enterprises. So what we've done is build on that with NVIDIA AI Enterprise, and we're bringing in pre-built containers that remove some of the complexity. AI has a lot of open-source components, and ensuring that all the open-source dependencies are resolved so the AI developers, researchers, and data scientists can actually do their work can be complex. So we've brought these pre-built containers that let you do everything from your initial data preparation and data science using things like NVIDIA RAPIDS, to your training using PyTorch and TensorFlow, to optimizing those models using TensorRT, and then deploying them using what we call the NVIDIA Triton Inference Server. It's really helping that AI loop, that AI workflow, become accessible as something an enterprise can manage as part of their common core infrastructure. >>Having the performance and the tools available is just a huge godsend; people love it. That only makes them more productive, and again it scales the existing stuff. Okay, great stuff, great insight. I have to ask, what's next for the collaboration? This is one of those better-together situations, and it's working. Maurizio, what's next for your collaboration with Dell, VMware, and NVIDIA?
>>For sure, we will not stop here. We are just starting to work on new things, looking for new developments, looking for what comes next. You know, the digital world is something that is moving very fast, and we will not stop here, because the outcome of this work has been very big for our research groups. And, as John was saying, the fact that the whole software stack for AI is simplified is something that has been accepted very well. Of course, you can imagine that research is about developing new things, but for people who need an integrated workflow, the work that NVIDIA has done in developing software packages and containers that give the end user the capability of running their workloads is really something that some years ago was unbelievable. Now everything is really easy to manage. >>John mentioned open source, obviously a big part of this. A quick follow-up, if you don't mind: are you going to share your results so people can look at this and have an easier path to AI? >>Oh, yes, of course. All the work that is done at the University of Pisa is there to be shared. As much as we have time to write it down, we are trying to find a way to share the results of the work that we're doing with our partners, Dell and NVIDIA. So for sure it will be shared. >>Excellent, and we'll get that link in the comments. John, your final thoughts on the collaboration with the University of Pisa, Dell, and VMware, and where NVIDIA goes next? >>Sure. With the University of Pisa, we're absolutely grateful to Maurizio and his team for the work they're doing and the feedback they're sharing with us. We're learning a lot from them in terms of things we can do better and things we can add to the product, so that's a fantastic collaboration. I believe Maurizio has a session at VMworld, so if you want to learn about some of the workloads they're doing, like music generation, COVID-19 research, and multi-node deep learning training, there's some really interesting work there, and we want to continue that partnership with the University of Pisa, again across all four of us: the university, NVIDIA, Dell, and VMware. Then on the tech side, for our enterprise customers, one of the things we actually didn't speak much about: I mentioned that the product is optimized, certified, and supported, and I think that support cannot be understated. As enterprises start to move into these new areas, they want to know they can pick up the phone and call NVIDIA, or VMware, or Dell, and they're going to get support for these new workloads as they're running them. We're also continuing to think about this: we've spent a lot of time today on the developer side of things and developing AI.
But the flip side of that, of course, is when those AI apps, or AI-enhanced apps, become available. Pretty much every enterprise app today is adding AI capabilities, across all of our partners in the enterprise software space, so you can think of NVIDIA AI Enterprise as having a runtime component, so that as you deploy your applications into the data center they automatically take advantage of the GPUs you have there. So we're seeing this future, as you're talking about the collaboration going forward, where the standard data-center building block still remains and is going to be something like a VxRail 2U server, but instead of just CPU, storage, and RAM, they're all going to go with CPU, GPU, storage, and RAM. That's going to be the norm, and every enterprise application is going to be infused with AI and able to take advantage of GPUs in that scenario. >>Great stuff, AI for the enterprise. This has been a great CUBE Conversation, and it's just the beginning; we'll be having more of these. Virtualizing AI workloads is real: it impacts data scientists, it impacts compute at the edge, all aspects of the new environment we're all living in. John, great to see you. Maurizio, nice to meet you, all the way in Italy; looking forward to meeting in person, and good luck in your session. I just got a note here on the session: it's at VMworld, session 2263, I believe, so if anyone's watching and wants to check that out, we'd love to hear more. Thanks for coming on. Appreciate it. >>Thanks for having us. >>Thank you. >>It's a CUBE Conversation. I'm John Furrier, your host. Thanks for watching. We'll talk to you soon.
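The conversation above describes a container-based loop: data preparation with RAPIDS, training with PyTorch or TensorFlow, optimization with TensorRT, and deployment with the Triton Inference Server. As a rough, hedged illustration of what that loop looks like in code, here is a small self-contained Python sketch; the dataset, model, and file names are invented, plain pandas stands in for RAPIDS cuDF (which mirrors much of the pandas API), and the TensorRT/Triton steps are only indicated by the ONNX export at the end.

```python
# Hedged sketch of the prepare -> train -> export loop discussed above.
# Dataset, model shape, and file names are invented for illustration.

import numpy as np
import pandas as pd
import torch
from torch import nn

# 1) Data preparation. In the workflow described above this would be RAPIDS
#    cuDF on the GPU; plain pandas stands in here for a self-contained demo.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1024, 8)), columns=[f"f{i}" for i in range(8)])
df["label"] = (df["f0"] + df["f1"] > 0).astype(np.float32)

features = torch.tensor(df.drop(columns="label").values, dtype=torch.float32)
labels = torch.tensor(df["label"].values, dtype=torch.float32).unsqueeze(1)

# 2) Training with PyTorch (uses a GPU if one is visible to the VM).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(features.to(device)), labels.to(device))
    loss.backward()
    optimizer.step()

# 3) Export for inference serving. Triton can load ONNX models, so exporting
#    to ONNX is one plausible hand-off point; TensorRT optimization would be
#    a further step not shown here.
torch.onnx.export(
    model.cpu(),
    features[:1],
    "classifier.onnx",
    input_names=["features"],
    output_names=["logit"],
)
print("trained, final loss:", float(loss), "-> exported classifier.onnx")
```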

Published Date : Oct 5 2021



John Fanelli, NVIDIA & Kevin Gray, Dell EMC | VMworld 2019


 

(lively music) >> Narrator: Live, from San Francisco, celebrating 10 years of high tech coverage, it's theCUBE, covering VMworld 2019! Brought to you by VMware and its ecosystem partners. >> Okay, welcome back to theCUBE's live coverage of VMworld 2019. We're in San Francisco, in the Moscone North Lobby. I'm John Furrier, with my co-host Stu Miniman, here covering all the action of VMworld, two sets for theCUBE, our tenth year, Stu. Keeping it going. Two great guests: John Fanelli, CUBE alumni, Vice President of Product, Virtual GPUs at NVIDIA, and Kevin Gray, Director of Product Marketing, Dell EMC. Thanks for coming back on. Good to see you. >> Awesome. >> Good to see you guys, too. >> NVIDIA, big news, we saw your CEO up on the keynote videoing in. Two big announcements, and you've got some Windows stats to talk about. Let's talk about the news first, get the news out of the way. >> Sure. At this show, NVIDIA announced our new product called NVIDIA Virtual Compute Server. So for the very first time anywhere, we're able to virtualize artificial intelligence, deep learning, machine learning, and data analytics. Of course, we did that in conjunction with our partner, VMware. This runs on top of vSphere, and also in conjunction with our partner Dell: all of this Virtual Compute Server runs on Dell VxRail as well. >> What's the impact going to be from that? What does that mean for customers? >> For customers, it's really going to be the on-ramp for enterprise AI. A lot of customers, let's say they have a team of maybe eight data scientists doing data analytics; if they want to move to GPU today, they have to buy eight GPUs. However, with our new solution, maybe they start with two GPUs and put four users on a GPU. Then, as their models get bigger and their data gets bigger, they move to one user per GPU. Then ultimately, because we support multiple GPUs now as part of this, they move to a VM that has maybe four GPUs in it. We allow the enterprise to start to move on to AI and deep learning, and in particular machine learning for data analytics, very easily. >> GPUs are in high demand. My son always wants the next NVIDIA; in fact he told me to get some GPUs from you when you came on, to ask the NVIDIA guy for some for his gaming rig. Kidding aside, now in the enterprise this is really important for some of the data crunching; it has really been a great use case. Talk about how that's changed, how people think about it, and how it's impacted the traditional enterprise. >> From a data analytics perspective, the data scientists will ingest data, they'll run some machine learning on it, and they'll create an inference model that they run to drive predictive business decisions. What we've done is GPU-accelerate the key libraries and technologies, like PyTorch and XGBoost, to use a GPU. The first announcement is about how they can now use Virtual Compute Server to do that. The second announcement is about that workflow: as I mentioned, they'll start small, then they'll do bigger models, and eventually they want to train at scale. So what they want to do is move to the cloud so they can have hundreds or thousands of GPUs. The second announcement is that NVIDIA and VMware are bringing Virtual Compute Server to VMware Cloud running on AWS with our T4 GPUs. So now I can scale virtually, starting with a fractional GPU, to a single GPU, to multi-GPU, and push a button with HCX and move it directly into the AWS T4-accelerated cloud.
That's the roadmap: you can get in, get the work done, scale up. That's the benefit of that. Availability, timing, when is all of this going to hit? >> So Virtual Compute Server is available on Friday, the 29th. We're looking at mid next year for the full suite of VMware Cloud on top of AWS T4. >> Kevin, you guys are the supplier here at Dell EMC. What's the positioning with you guys? >> We're working very closely with NVIDIA in general on all of their efforts around both AI as well as VDI. We work together quite a bit, most recently on the VDI front. We look to drive things like qualifying the devices for both VDI and analytics applications. >> Kevin, bring us up to date, 'cause it's funny, we were talking about this being our 10th year here at the show. I remember sitting across Howard Street here in 2010 with Dell, HP, and IBM all claiming who had the lowest dollar per desktop for what they were doing in VDI. It's a way different discussion here in 2019. >> Absolutely. Go ahead. >> One of the things that we've learned with NVIDIA is that it's really about the user experience. It's funny, we're at a transition point now from Windows 7 to Windows 10. The last transition was Windows XP to Windows 7. What we did then is we took Windows 7, we tore everything out of it we possibly could, we made it look like XP, and we shoved it out. 10 years later, that doesn't work. Everyone's got their iPhones, their iOS devices, their Android devices. Microsoft's done a great job on Windows 10 being immersive. Now we're focused on user experience. In the VDI environment, as you move to Windows 10, you may not be aware of this, but from Windows 7 to Windows 10 it uses 50% more CPU, and you don't even get that great of a user experience. You pop a GPU in there, and you're good. Most of our customers together are working on a five-year life cycle. That means over the next five years they're going to get 10 updates of Windows 10, and they're going to get like 60 updates of their Office applications. That means they want to be future-proof now by putting the GPUs in to guarantee a great user experience. >> On the performance side too, obviously. And auto updates, this is the push-notification world we live in. This has to be built in from day one. >> Absolutely, and if you look at what Dell's doing, we really built this into both our VxRails and our VxBlocks. GPUs are just now part of it. We do these as fully qualified stacks specifically for VDI environments as well. We're working a lot with the nVector tools from NVIDIA, which make sure we're-- >> VDI finally made it! >> qualifying user experience. >> All these years. >> Yes, yes. In fact, we have this user experience tool called nVector, which, without getting super technical for the audience, allows you to look at the user experience based on frame rate, latency, and image quality. We put this tool together, but Dell has really been taking a lead on testing it and promoting it to users to really drive the cost-effectiveness. It still is about the dollar per desktop, but it's the dollar per dazzling desktop. (laughing) >> Kevin, I hear the frame rate in there, and I've got all the remote workers, and you're saying how do I make sure that's not the gaming platform they're using, because I know how important that is. >> Absolutely. There are a ton of customers out there that are using this.
We look at folks like Guillevin as an example of a company that's worked with us and NVIDIA to truly drive the types of applications that are essential to VDI: these types of power users running applications like Autodesk, with that user experience and the ability to support multiple users. If you look at Pat, he talked a little bit about any cloud, any application, any device. In VDI, that's really what it's about, allowing those workers to come together. >> I think the thing that the two of you mentioned, and Stu, you pointed out brilliantly, is that VDI is not just an IT thing anymore. It really is the expectation now. My rig, if I'm a gamer or a young person, the younger kids, if you're under 25, if you don't have a kick-ass rig (laughs), that's what they call it. Multiple monitors, that's the expectation, and again, mobility, work experience, workspace. >> Exactly, along those same lines, by the way. >> This is the whole category. It's not just VDI, this thing over here that used to be talked about as an IT thing. >> It's about the workflow. It's how do I get my job done. We used to use words like "business worker" and "knowledge worker." It's just, I'm a worker. Everybody today uses their phone, that's mobile. They use their computer at home, they use their computer at work. They're all running with dual monitors, sometimes dual 4K monitors. That really benefits as well from having a GPU. I know we're on TV, so hopefully some of you are watching VDI on a GPU-accelerated desktop. It's things like Skype, WebEx, Zoom, Microsoft Teams, all the collaboration tools; they all benefit from our joint solution with the GPU. >> These new subsystems like GPUs become so critical. They're not just a subsystem, they're the main part, because the offload is now part of the new operating environment. >> We optimized together jointly using the nVector tool. We optimized the server and operating environment so that if you're running a GPU, you can right-size your CPU in terms of cores, speed, etc., so you get the best user experience in the most cost-effective way. >> Also, the gaming world helps bring in the new kind of cool visualization that's going to move into the everyday workflow of workers. You start to see this immersive experience; VR and AR are obviously around the corner. It's only going to get more complex, with more need for GPUs. >> Yes, in fact, we're seeing more requirements for AR and VR from business than we actually are for gaming. Don't you want to go into the auto showroom at your house and feel the fine Corinthian leather? >> We've got to upgrade our CUBE game, get more GPU-focused, and get some tracing in there. >> Kevin, I know I've seen things from the Dell family on leveraging VR in the enterprise space. >> Oh, absolutely. If you look at a lot of the things that we're doing with some of the telcos around 5G, they're very interested in VR and AR. Those are areas that'll continue to use things like GPUs to help accelerate those types of applications. It really does come down to having that scalable infrastructure that's easy to manage and easy to operate. That's where I think the partnership with NVIDIA really comes together. >> Deep learning and all this stuff around data. Michael Dell always comes on theCUBE and talks about it. He sees data as the biggest opportunity and challenge. Whatever applications are coming in, you've got to be able to pound into that data. That's where AI has really shone.
Machine learning has shown it can help with the heavy lifting on a lot of things that used to be manual. >> Exactly. The one thing that's really great about GPU-accelerated data analytics is that we can take a job that used to take days and bring it down to hours. Obviously, doing something faster is great, but if I take a job that used to take a week and I can do it in one day, that means I have four more days to do other things. It's almost like hiring people for free, because I get four extra work days. The other thing that's really interesting about our joint solution is that you can leverage that same virtual GPU technology. You can do VDI by day, and at night you run compute. So when your users aren't at work, you migrate them off, you spin up your VMs that are doing your data analytics using our RAPIDS technology, and then you're able to get that platform running 24 by 7. >> Productivity gains just from the infrastructure, and for the user too; up and down, the productivity gains are significant. So I'll get three monitors now. I'm going to get one of those Alienware curved monitors. >> We have a suite here at the show, and you can see such a difference when you insert the GPUs into the platform. It just makes all the difference. >> John, I've got to ask you a personal question. How many times have people asked you for a GPU? You must get that all the time. >> We do. I have an NVIDIA backpack, and when I walk around there are a lot of people who only know NVIDIA for games, so random people will always ask for that. >> I've got two sons and two daughters, and they just nerd out on the GPUs. >> I think he's trying to get me to commit on camera to giving him a GPU. (laughing) I think I'm in trouble here. >> Yeah, they get the latest and greatest. Any new stuff, they're going to be happy to be the first on the block to get the GPU. It's certainly had an impact on the infrastructure side, the components, the operating environment, Windows 10. Any other data you have to share that you think is notable around how all this is coming together, from the user experience around Windows and VDI? >> I think one piece of data, again going back to your first comment about cost per desktop: we're seeing a lot of migration to Windows 10. Customers are buying our joint solution from Dell, which includes our hardware and software. They're buying into that five-year life cycle, so we actually put a program in place to really drive down the cost. It's literally like $3 per month to have a GPU-accelerated virtual desktop. It's really great value for the customers, besides the great productivity. >> If you look at doing some of these workloads on premises, some of the costs can come down. We had a recent study around the VxBlock as an example, and we showed that running GPUs and VDI can cost as much as 45% less on a VxBlock at scale. When you talk about the whole hybrid cloud, multi-cloud strategy, there are pluses and minuses to both. Certainly, if we look at the ability to start small and scale out, whether you're going HCI or you're going CI, I think there's a VDI solution there that can really drive the economics. >> The intense workloads. Are there any industries that are key for you in terms of verticals? >> Absolutely. We're definitely looking at a lot of the CAD/CAM industries. We just did a certification of our platforms with Dassault's CATIA system. That's an area that we'll continue to explore as we move forward.
>> I think on the workstation side of things, it's all the standards: automotive, manufacturing. Architecture is interesting; architecture is one of those industries that has kind of an SMB profile. They have lots of offices, but they have enterprise requirements for all the hard work that they do. Then with VDI, we're very strong in financial services as well as healthcare. In fact, if you haven't seen it, you should come by; we have a Bloomberg demo for financial services about the impact for traders, on a virtualized GPU desktop. >> The speed is critical for them. Final question: take-aways from the show this year, VMworld 2019. Stu, we've got 10 years to look back on, but guys, what are the take-aways you're going to take back from this week? >> I think there's still a lot of interest and enthusiasm. Surprisingly, there are still a lot of customers that haven't finished their migration to Windows 10, and they're coming to us saying, "Oh my gosh, I only have until January, what can you do to help me?" (laughing) >> Get some GPUs. Thoughts from the show? >> The multi-cloud world continues to evolve, and the partnerships that emerge as part of this are pretty amazing in how they're changing things like virtual GPUs and accelerators. That experience that people have come to expect from the cloud is, for me, a take-away. >> John Fanelli, NVIDIA, thanks for coming on. Congratulations on all the success. Kevin, Dell EMC, thanks for coming on. >> Thank you. >> Thanks for the insights. Here on theCUBE, VMworld 2019, John Furrier, Stu Miniman; stay with us for more live coverage after this short break. (lively music)
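The "VDI by day, compute by night" idea John Fanelli describes in this conversation is essentially time-based partitioning of one GPU pool. The sketch below is purely illustrative: the window boundaries and workload names are assumptions, not VMware or NVIDIA tooling.

```python
# Hedged sketch of "VDI by day, compute by night": the same pool of GPUs is
# handed to virtual desktops during working hours and to batch analytics
# (e.g. RAPIDS jobs) overnight. Hours and workload names are illustrative
# assumptions, not real scheduler configuration.

from datetime import datetime, time

WORKDAY_START = time(7, 0)   # assumption: desktops need GPUs from 07:00
WORKDAY_END = time(19, 0)    # assumption: analytics window opens at 19:00


def workload_for(now: datetime) -> str:
    """Decide which workload class the GPU pool should serve right now."""
    if WORKDAY_START <= now.time() < WORKDAY_END:
        return "vdi-desktops"      # fractional vGPU profiles for users
    return "batch-analytics"       # full GPUs for data-science jobs


if __name__ == "__main__":
    for hour in (9, 23):
        sample = datetime(2019, 8, 27, hour, 0)
        print(sample.isoformat(), "->", workload_for(sample))
```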

Published Date : Aug 28 2019

