
Search Results for Pisa:

Maurizio Davini, University of Pisa and Kaushik Ghosh, Dell Technologies | CUBE Conversation 2021


 

>> Hi, Lisa Martin here with theCUBE. You're watching our coverage of Dell Technologies World, the digital virtual experience. I've got two guests with me here today. We're going to be talking about the University of Pisa and how it is leaning into all-flash data lakes powered by Dell Technologies. One of our alumni is back, Maurizio Davini, the CTO of the University of Pisa. Maurizio, welcome back to theCUBE.

>> Thank you.

>> Very excited to talk to you today. Kaushik Ghosh is here as well, the Director of Product Management at Dell Technologies. Kaushik, welcome to theCUBE.

>> Thank you.

>> So here we are at this virtual event again. Maurizio, you were last on theCUBE at VMworld a few months ago, the virtual experience as well. But before we dig into the technology and some of these demanding workloads that the university is utilizing, talk to our audience a little bit about your role as CTO and about the university.

>> So my role as CTO at the University of Pisa covers the data center operations and scientific computing support; that is my main occupation. I also support the technological choices that the University of Pisa has been making during the last two or three years.

>> Talk to me about the university: in terms of students we're talking about 50,000 or so, 3,000 faculty, and the campus is distributed around the town of Pisa. Is that correct, Maurizio?

>> The University of Pisa is a sort of town campus, in the sense that we have 20 departments located inside the medieval town. But due to the choices the University of Pisa made in the late '90s, we own a private fiber network connecting all our departments, and so we can use the town as a sort of whiteboard to design new services and new kinds of support for teaching.

>> So you've really modernized the data infrastructure for a university that was founded in the Middle Ages. Talk to me now about some of the workloads that are generating massive amounts of data, and then we'll get into what you're doing with Dell Technologies.

>> The University of Pisa has quite a long history in traditional HPC. We are supporting the traditional workloads from CAE, engineering, chemistry, or oil and gas simulations. Of course, during the pandemic year, last year especially, we had new kinds of workloads, some related to the fast movement of HPC workloads from, let's say, traditional HPC to AI and machine learning. And there was the request to support a lot of remote activities coming from distance learning, to remotize laboratories, workstations, or whatever was mostly held in presence in the past. So the impact, either on the infrastructure or especially on the storage part, was significant.

>> So you've talked about utilizing high-performance computing environments for a while for scientific computing.
I saw a case study that you guys have done with Dell, but then during the pandemic the use case of remote learning brought additional challenges to your environment. From that perspective, how were you able to move your curriculum online and still enable the scientists, the physicists, the oil and gas folks doing research to access their data at the speed they needed?

>> You know, for what regards distance learning, we were based on cloud services that were not provided internally by us; we relied on Microsoft services, Google services and so on. But for what regards internal support, scientific computing was completely remotized, either the support or the experience. I can bring some examples. For laboratory activities, access to the laboratories was remotized as much as possible. We designed a special network to connect all the laboratories and to give the researchers the possibility of accessing the data on this special network, a sort of collector of data inside our university network. You can imagine that virtualization, for example, was a key factor for us, because virtualization was an easy and flexible way for us to deliver new services, especially if you have to set up systems for remote access. So, as I told you before about the network as a whiteboard, the compute infrastructure with VMware virtualization was treated the same way, as a sort of whiteboard where we were designing new services, either interactive services or especially scientific computing. For example, we have had a good experience with virtualization of HPC workloads.

>> Talk to me about the storage impact, because as we know, these very demanding, unstructured workloads, AI and machine learning, can be difficult for most storage systems to handle. Maurizio, talk to us about why you leaned into all-flash with Dell Technologies, and about the technologies that you've implemented.

>> If I have to think about our storage infrastructure before the pandemic, I have to think about Isilon, because our HPC workloads were mainly based on Isilon as the storage infrastructure, together with some file systems that, as you can imagine, we were deploying in-house. Especially with the explosion of AI, the blueprint of the storage requests changed a lot, because the hybrid Isilon solution didn't fit so well for AI. And this is why we started with a data migration, which was not really a migration but a sort of integration of the PowerScale all-flash machines into our environment, because PowerScale all-flash, and especially, looking to the future, the NVMe support, is a key factor for the storage we need. We already have experience with some of the NVMe possibilities on the PowerMax that we have here, which we use in part for VDI support, but all-flash with NVMe is what we need.

>> Gotcha.
Talk to me about what Dell Technologies has seen in terms of the uptick in demand for this. As Maurizio said, they were using Isilon before adding in PowerScale. What are some of the changing demands that Dell Technologies has seen, and how do technologies like PowerScale and the F900 help organizations rapidly change their environment so that they can utilize and extract the value from data?

>> Yeah, absolutely. Artificial intelligence is an area that continues to amaze me, and personally I think the potential here is immense. As Maurizio said, the data sets with artificial intelligence have grown significantly, and not only has the data become larger, the AI models that are used have become more complex. For example, one study suggests that for natural language processing, one of the fields in AI, the number of parameters used could exceed about a trillion in a few years, almost the size of a human brain. So not only does that mean there's a lot more data to be processed and stored than yesterday, it probably has to be done in the same amount of time as before, perhaps even a smaller amount of time. So larger data, the same time, or perhaps even less time. So absolutely, I agree: for these types of workloads you need storage that gives you that high-performance access, but that is also able to store the data economically.

>> And how does Dell Technologies deliver that, the ability to scale, the economics? What's unique and differentiated about PowerScale?

>> So PowerScale is our all-flash system. It's based on the same OneFS file system capabilities that the Isilon products used to offer, some of the capabilities that Maurizio has used and loved in the past; those same capabilities are brought forward now on the PowerScale platform. There are some changes: for example, the new PowerScale platform supports NVIDIA GPUDirect. For artificial intelligence workloads you do need these GPU-capable machines, and PowerScale supports those high-performance GPU-equipped machines through the different technologies that we offer. And the PowerScale F900, which we are going to launch very soon, is our highest-performance all-flash and the most economical all-flash to date. So not only is it our fastest, it also offers the most economical way of storing the data, which makes it ideal for these types of high-performance workloads like AI, machine learning, deep learning and so on.

>> Excellent. So talk to me about some of the results that the university is achieving so far. I did read about a 3x improvement in I/O performance, and you were able to get nearly a hundred percent of the curriculum online pretty quickly, but talk to me about some of the other impacts that Dell Technologies is helping the university to achieve.

>> We are an all-Dell customer, and you can see that if you give a look inside our data centers.
We typically, jokingly, define them as a sort of Dell Technologies supermarket, in the sense that the greater part of our server and storage environment comes from Dell Technologies: several generations of PowerEdge servers, PowerMax, Isilon, PowerScale, PowerStore. So we are using a lot of Dell technologies here, and of course in the past our traditional workloads were well supported by Dell Technologies. Dell Technologies is also driving us towards what we call the next-generation workloads, because it is accompanying us in the transition to the next generation of computing and staying open to what our researchers are looking for. If I have to give a look at what we are mostly doing here: healthcare workloads, deep learning, data analysis, image analysis and feature extraction, and everything has to be supported especially by the next-generation servers, typically equipped with GPUs. This is why GPU capability is so important for us, but it also has to be supported on the networking side, because the speed of the storage must be tied to next-generation networking, low-latency and high-performance, because at the end of the day you have to bring the data from the storage to the GPUs. So low-latency, high-performance interconnection is also a side effect of these new workloads, and of course the technology is key.

>> I love how you described your data centers as a Dell Technologies supermarket, maybe a different way of talking about a center of excellence. I want to ask you about that: I know that the University of Pisa is a center of excellence for Dell. In the last couple of minutes we have here, talk to me about what that entails and how Dell helps customers become a center of excellence.

>> Yeah, so as Maurizio talked about, he has a lot of the Dell products today, and in fact he mentioned the PowerEdge servers; the PowerScale F900 is actually based on a PowerEdge server. So a lot of these technologies are linked with each other: they talk to each other, they work together. And that helps customers manage the entire ecosystem and data lifecycle together, rather than as piece parts, because we have solutions that solve all aspects of the needs of a customer like Maurizio. So I'm glad Maurizio is leveraging Dell, and I'm happy we are able to help him solve all his use cases, and beyond.

>> Excellent. Maurizio, last question: are you going to be using AI and machine learning powered by Dell to determine if the Tower of Pisa is going to continue to lean, or if it's going to stay where it is?

>> The leaning tower is an engineering miracle. Some years ago an incredible engineering effort was able to fix the lean for a while, so let's hope the Tower of Pisa stays where it is, because it will remain one of the beauties that you can come to visit.

>> And that's one part of Italy I haven't been to.
So post-pandemic, I've got to add that to my travel plans. Maurizio and Kaushik, it's been a pleasure talking to you about how Dell is partnering with the University of Pisa to help power AI and machine learning workloads and facilitate many use cases. We are looking forward to hearing what's next. Thanks for joining me this morning.

>> Thank you.

>> For my guests, I'm Lisa Martin. You're watching theCUBE's coverage of Dell Technologies World, the digital event experience.
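To make the storage discussion above a bit more concrete, here is a minimal sketch of the data-loading pattern it refers to: an AI training job streaming a dataset from a network-attached all-flash share, such as a PowerScale export mounted over NFS, with several parallel reader processes. The sketch is illustrative rather than taken from the interview; the mount point, dataset layout, and batch parameters are assumptions.

    # Illustrative sketch: a PyTorch data pipeline streaming training data from
    # a network-attached all-flash share. The mount point below is a hypothetical
    # NFS export; adjust it to your environment.
    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    DATA_ROOT = "/mnt/powerscale/imagenet"  # assumed NFS mount point

    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    # ImageFolder expects one subdirectory per class under DATA_ROOT.
    dataset = datasets.ImageFolder(DATA_ROOT, transform=transform)

    # num_workers controls how many processes read from the share in parallel;
    # with fast GPUs, storage throughput is often what keeps them fed.
    loader = DataLoader(dataset, batch_size=256, shuffle=True,
                        num_workers=16, pin_memory=True)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        # the forward/backward pass of the training loop would go here
        break  # read a single batch, just to exercise the I/O path

The point is simply that the parallel readers multiply the small, random-read load on the share, which is why the conversation above stresses all-flash and NVMe on the storage side.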

Published Date : Jun 9 2021

SUMMARY :

Lisa Martin talks with Maurizio Davini, CTO of the University of Pisa, and Kaushik Ghosh, Director of Product Management at Dell Technologies, about how the university is leaning into all-flash storage for AI and HPC. Maurizio describes how the pandemic pushed laboratories, teaching, and scientific computing to remote delivery, and why the university integrated PowerScale all-flash alongside its existing Isilon estate to handle AI and machine-learning workloads. Kaushik explains how the PowerScale F900, with NVMe and NVIDIA GPUDirect support, targets these high-performance workloads, and the two discuss the university's role as a Dell center of excellence.


Maurizio Davini, University of Pisa and Thierry Pellegrino, Dell Technologies | VMworld 2020


 

>> From around the globe, it's theCUBE, with digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners.

>> I'm Stu Miniman, and welcome back to theCUBE's coverage of VMworld 2020, our 11th year doing this show, of course the global virtual event. And what do we love talking about on theCUBE? We love talking to customers. It is a user conference, of course, so I'm really happy to welcome to the program, from the University of Pisa, the Chief Technology Officer, Maurizio Davini, and joining him is Thierry Pellegrino, one of our theCUBE alumni. He's the vice president of Workload Solutions and HPC with Dell Technologies. Thierry, thank you so much for joining us.

>> Thank you.

>> Thanks to you.

>> Alright, so let's start. The University of Pisa, obviously everyone knows Pisa, one of those famous, iconic cities out there. We all know things in Europe go back a bit further than the venerable institutions here in the United States, which are a couple of hundred years old. I have to imagine the University of Pisa has a long, storied history. So before we dig into all the tech, give our audience a little of what they would find if they looked the university up on Wikipedia. What's the history?

>> The University of Pisa is one of the oldest in the world, because it was founded in 1343 by a pope; we were authorized to do university teaching by a pope during the late Middle Ages. So it's really one of the oldest in the world, not the oldest of course, but one of the oldest. It has a long history, but it has never stopped innovating. Pisa has always been good at innovating, either in teaching or now in the technology applied to remote teaching, to calculation, to scientific computing. So we never stop innovating, and we never stop trying to leverage new technologies and new kinds of approaches to science and teaching.

>> One of your historical teachers, Galileo, taught at the university, so a phenomenal history. Help us understand, you're the CTO there: what does that encompass? How many students? Are there certain areas of research done today, before we get into the specific use case?

>> Consider that the University of Pisa is a campus in the sense that the university faculties are spread all over the town. A medieval town like Pisa poses a lot of problems from the infrastructural point of view, so we have worked a lot in the past to adapt the medieval town to the latest technology advancements. Now we have 50,000 students, and consider that Pisa is a general, multidisciplinary university: we cover sciences, letters, engineering, medicine, and so on. During the latest 20 years, the university has put a lot of effort into building an infrastructure able to develop and deploy the latest technologies for the students. For example, we have a private fiber network covering all the town, 65 kilometers of dark fiber that belongs to the university, and four data centers, one big and three little ones, connected today at 200 gigabit Ethernet. We have a big data center, big for an Italian university of course, not by U.S. standards, that also holds the infrastructure for the enterprise services and the scientific computing.

>> Yep, Maurizio, it's great that you've had that technology foundation. I have to imagine the global COVID-19 pandemic had an impact. What's it been like? How is the university dealing with things like work from home? And then, Thierry, I would love your commentary too.

>> You know, of course we were not ready, so we were hit by the pandemic and had to adapt our services and software to transform from in-person to remote. We did a lot of work, but thanks to the technology we had chosen, we were able to serve almost 100% of our curriculum and study programs. We had done a lot of work in the past to move to virtualization, to enable our users to work remotely, either on remote workstations or desktops, or with remote laboratories and remote calculation. So virtualization had shaped our services in the past, and when we were hit by the pandemic we were almost ready to transform our services from in-person to remote.

>> Yeah, I think it's true, like Maurizio said, nobody really was prepared for this pandemic. Even for Dell Technologies it was an interesting transition. As you can probably realize, a lot of the way that we connect with customers is in person, and we've had to transition over to modes of digitally connecting with customers. We've also spent a lot of our energy trying to help the HPC and AI community fight the COVID pandemic: we've made some of the clusters that we use in our HPC and AI Innovation Center here in Austin available to genomic research and other organizations that are fighting the virus. It's been an interesting transition. I can't believe that it's already been over six months now, but we've found a new normal.

>> Maurizio, let's get specifically into how you're partnering with Dell. You've got a strong background in the HPC space, working with supercomputers. What is it that you're turning to Dell and their ecosystem to help the university with?

>> We have a long history in HPC. Of course, as you can imagine, not the biggest HPC like what is done in the U.S. or in the biggest supercomputer centers in Europe, but we have several systems for doing HPC, traditional HPC, that are based on the Dell Technologies offering. We typically host all kinds of technologies as soon as they become available, of course not at a big scale but at a small or medium scale, and we offer them to our researchers and students. We have a strong relationship with Dell Technologies, developing together solutions to leverage the latest technologies for scientific computing, and this has helped a lot with the research that has been done during this pandemic.

>> Yeah, and it's true. Maurizio is humble, but every time we have new technologies to be evaluated, of course we spend time evaluating them in our labs, but we make it a point to share that technology with Maurizio and the team at the University of Pisa. That's how we find some of the better usage models for customers and help tune configurations, whether it's on the processor side, the GPU side, the storage, or the interconnect. And then, the topic of today: with our partners at VMware, we've had some really great advancements. Maurizio and the team are what we call a center of excellence. We have a few of them across the world, where we have a unique relationship sharing technology and collaborating on advancements. And recently Maurizio and the team have even become one of the VMware certified centers.
So it's a great marriage for this new world where virtual is becoming the norm.

>> Thierry, you and I had a conversation earlier in the year when VMware was really gearing up their full GPU suite, and it was a big topic in the keynote; Jensen, the CEO of Nvidia, was up on stage, and VMware was talking a lot about AI solutions and how this is going to help. So help bring us in, Thierry, you work with a lot of the customers. What is it that this enables for them, and how do Dell and VMware bring those solutions to bear?

>> Yes, absolutely. One statistic I'll start with: can you believe that, on average, only 15 to 20% of GPUs are fully utilized? So when you think about the amount of technology that's at our fingertips, especially in a world where we need that technology to advance research and scientific discoveries, wouldn't it be fantastic to utilize those GPUs to the best of our ability? And it's not just GPUs. The IT world has leveraged virtualization to get the maximum out of resources like CPUs, storage, and networking. Now you're bringing the GPU into the fold, and you get better utilization and also flexibility across all those resources. So what we've seen is a convergence between the IT world, which was highly virtualized, and this highly optimized world of HPC and AI, because researchers, data scientists, and companies want to be able to run their day-to-day activities on that infrastructure, but then, when they have a big surge need for research or data science, use that same environment and seamlessly move things around workload-wise.

>> Yeah, okay, I do believe your stat. The joke we always have is that for anybody from a networking background, there's no such thing as eliminating a bottleneck, you just move it. And if you talk about utilization, we've been playing the shell game for my entire career: let's try to optimize one thing, and then, oh, there's something else that we're not doing. So, so important. Maurizio, I want to hear from your standpoint about virtualization and HPC and AI types of uses there: what value does this bring to you, and what key learnings have you had in your organization?

>> We as a university are big users of VMware technologies, starting from the traditional enterprise workloads and VDI; we started from there, in the sense that we have quite a significant installation, and almost all the services that the university gives to our internal users, either personnel, staff, or students, run there. At a certain point we decided to try to understand if VMware virtualization would be good also for scientific computing. Why? Because at the end of the day, the request that we have from our internal users is flexibility: flexibility in the sense of being fast in deploying, fast in reconfiguring, trying to have the latest bits on the software side, especially for AI research. At the end of the day, we designed the VMware solution as, I can say, a whiteboard. We have a whiteboard, and we are able to design a new solution on this whiteboard and deploy it as fast as possible. What we face as IT is not a request for maximum performance; our researchers ask us for flexibility, and want to have the maximum possible flexibility in configuring the systems.
How can I say, we can deploy a test cluster on the virtual infrastructure in minutes, or we can use GPUs inside the infrastructure to test new algorithms for deep learning. And we can use faster storage inside the virtualization to see how certain algorithms behave, so our internal developers can leverage the latest bits in storage, like NVMe and so on. This is why, at a certain point, we decided to try virtualization as a base for HPC and scientific computing, and we are happy.

>> Yeah, I think Maurizio described it: it's flexibility. Of course, if you think optimal performance, you're looking at bare metal, but in this day and age, as I stated at the beginning, there's so much technology, so much infrastructure available, that flexibility at times trumps the raw performance. When you have two different research departments, two different parts of the company looking for an environment, no two environments are going to be exactly the same, so you have to be flexible in how you aggregate the different components of the infrastructure. And then think about today; it's actually fantastic. Maurizio was sharing with me earlier this year that at some point, as we all know, there was a lockdown. You couldn't really get into a data center and move cables around or reconfigure servers to have the right ratio of memory to CPU, to storage, to accelerators, and having been at the forefront of this enablement has really benefited the University of Pisa and given them the flexibility that they really need.

>> Wonderful. Well, Maurizio, my understanding is that you're giving a presentation as part of the activities this week. Give us a final glimpse into what you want your peers to take away from what you've done.

>> What we have done is something very simple, in the sense that we adapted some open source software to our infrastructure in order to enable our system managers and users to deploy HPC and AI solutions quickly and easily on our VMware infrastructure. We started by doing a sort of POC; we designed the test infrastructure early this year and then went quickly to production, because we were happy with the results. And so this is what we present: you can have a lot of ways to deploy virtual HPC, but we went for a simple and open source solution, also thanks to our friends at Dell Technologies for some parts that enabled us to do the work and now to go into production. And, as Thierry said before, this helped a lot during the pandemic, due to the fact that we had to stay at home.

>> Wonderful. Thierry, I'll let you have the final word. What things are drawing customers to really dig in? Obviously there's a cost saving, but are there any other things that this unlocks for them?

>> Yeah, cost savings; we talked about flexibility, we talked about utilization. You don't want to have a lot of infrastructure sitting there just waiting for a job to come in once every two months. And then there's also the world we live in: we all live our lives here through a video conference, or at times through the interface of our phone, and being able to have this web-based interaction with a lot of infrastructure, at times the best infrastructure in the world, makes things simpler, easier, and hopefully brings science to the fingertips of data scientists without them having to worry about knowing every single detail of how to build up that infrastructure.
And with the help of the University of Pisa, one of our centers of excellence in Europe, we've been innovating, and everything that's been accomplished at Pisa can be accomplished by our customers and our partners around the world.

>> Thierry, Maurizio, thank you both so much for sharing, and congratulations on all you've done building up that center of excellence.

>> Thanks to you.

>> Thank you.

>> Stay with us, lots more coverage from VMworld 2020. I'm Stu Miniman, as always, thank you for watching theCUBE. (soft music)
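As a side note on the utilization figure Thierry cites (on average only 15 to 20% of GPUs fully utilized), below is a minimal sketch of how one might sample GPU utilization on a host or inside a GPU-enabled virtual machine using the standard nvidia-smi query interface. It assumes the NVIDIA driver and nvidia-smi are installed; the sampling interval and duration are arbitrary choices, not anything prescribed in the conversation.

    # Illustrative sketch: sample GPU utilization once per second via nvidia-smi
    # and report a per-GPU average. Requires the NVIDIA driver so that the
    # nvidia-smi binary is on the PATH.
    import subprocess
    import time

    def sample_gpu_utilization():
        """Return a list of integer utilization percentages, one per visible GPU."""
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        return [int(line.strip()) for line in out.splitlines() if line.strip()]

    if __name__ == "__main__":
        samples = []
        for _ in range(60):                      # one minute of samples
            samples.append(sample_gpu_utilization())
            time.sleep(1)
        per_gpu_average = [sum(col) / len(col) for col in zip(*samples)]
        for index, average in enumerate(per_gpu_average):
            print(f"GPU {index}: average utilization {average:.1f}%")

Collecting samples like this over a working day is one simple way to see whether dedicated GPUs are sitting idle, which is the situation that sharing virtualized GPU resources is meant to address.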

Published Date : Sep 30 2020

SUMMARY :

Stu Miniman talks with Maurizio Davini, CTO of the University of Pisa, and Thierry Pellegrino of Dell Technologies about virtualizing HPC and AI workloads on VMware. Maurizio describes the university's history, its private fiber network and data centers, and how prior investment in virtualization let it move teaching and scientific computing to remote delivery during the pandemic. Thierry explains why virtualizing GPUs improves utilization and flexibility, and the two discuss the University of Pisa's role as a Dell Technologies center of excellence and a VMware certified center.


John Fanelli and Maurizio Davini Dell Technologies | CUBE Conversation, October 2021


 

>> Hello, and welcome to this special CUBE Conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We have a conversation around AI for the enterprise and what it means, and I've got two great guests: John Fanelli, Vice President of Virtual GPU at NVIDIA, and Maurizio Davini, CTO of the University of Pisa in Italy, a practitioner, customer, and partner. We've got VMworld coming up and a lot of action happening in the enterprise. John, great to see you. Maurizio, nice to meet you, coming in remotely from Italy.

>> John, thanks for having us on again.

>> Yeah, nice to meet you.

>> I wish we could be in person, face to face, but that's coming soon, hopefully. John, we were just talking before we came on camera about AI for the enterprise. The last time I saw you in person was in a CUBE interview where we were talking about some of the work you guys were doing in AI. It's gotten so much stronger and broader, and the execution at NVIDIA, the success you're having. Set the table for us: what is the AI-for-the-enterprise conversation?

>> Sure. So we've been working with enterprises on how they can deliver AI, explore AI, or get involved in AI in a standard way, in the way that they're used to managing and operating their data center, riding on top of their Dell servers with VMware vSphere, so that AI feels like a standard workload that an IT organization can deliver to their engineers and data scientists. And the flip side of that, of course, is ensuring that engineers and data scientists get the workloads provisioned to them, or have access to them, in the way that they need them. It's no longer a trouble ticket that you have to submit to IT and then count the hours or days or weeks until you can get new hardware. By being able to pull it into the mainstream data center, IT can enable self-service provisioning for those folks. So we make AI more consumable and easier to manage for IT administrators, and for the engineers and data scientists we make it easy to get access to those resources so they can get to their work right away.

>> Quite a bit of progress in the past two years, congratulations on that, and it's only the beginning; it's day one. Maurizio, I want to ask you about what's going on as the CTO of the University of Pisa, what's happening down there. Tell us a little bit about what's going on; you have the center of excellence there. What does that mean, and what does it include?

>> You know, the University of Pisa is one of the biggest and oldest in Italy. If I have to give you some numbers, it's around 50K students and 3,000 staff between professors, researchers, and administrative staff. We look after the operation of the data centers and especially support for scientific computing. That is our daily work, and it takes us a lot of time, but we are able to reserve a certain percentage of our time for R&D, and this is where the center of excellence comes in. We are always looking into new kinds of technologies that we can put together to build new solutions for next-generation computing, as we always say, and we are looking for the right partners to do things together. At the end of the day, the work that is good for us is good for our partners, and it typically ends up in a production system for our university.
So it is the evolution of the scientific computing environment that we have.

>> Yeah, and you guys have a great track record and reputation of R&D, testing software and hardware combinations and sharing those best practices. With COVID impacting the world, certainly we see it on the supply chain side. And John, we've heard Jensen, your CEO at NVIDIA, talk in multiple keynotes now about software, about NVIDIA being a software company. You mentioned Dell and VMware. COVID has brought this virtualization world back, and now hybrid; those are words that we basically used in the tech industry, and now you're hearing hybrid and virtualization kicked around in the real world. So it's ironic that VMware and Dell and theCUBE, eventually all of us, are doing more virtual stuff. With COVID impacting the world, how does that change you guys? Because software is more important; you've got to leverage the hardware you've got, whether it's Dell or in the cloud. This is a huge change.

>> Yeah. So, as you mentioned, organizations and enterprises are looking at things differently now. Take the idea of hybrid: when you talk to tech folks, we always think about how the different technology works, but what we're hearing from customers is that hybrid effectively translates into two days in the office and three days remote, in the future when they actually start going back to the office. So hybrid work is actually driving the need for hybrid IT, the ability to share resources more effectively and to think about having resources wherever you are. Whether you're working from home or you're in the office that day, you need to have access to the same resources, and that's where the ability to virtualize those resources and provide that access makes the hybrid part seamless.

>> Maurizio, your world has really changed. You have students and faculty; things used to be easy in the old days, physical on this network or that network, now virtual. You must really be seeing an impact.

>> Yeah, we have, of course. As you can imagine, a big impact on any kind of IT offering, from designing and deploying new networking technologies to new kinds of operations. We found that we were not able anymore to do bare-metal operations directly, but from the IT point of view we were, how can I say, prepared, in the sense that for three or four years we have run parallel environments: we have bare metal and virtual. So, as you can imagine, traditional bare-metal HPC clusters, DGX machines, multi-GPU machines and so on, but in parallel we have developed a virtual environment that at the beginning was, as you can imagine, used for traditional enterprise applications and VDI. We have a significant Horizon farm with GRID for remote desktops and remote workstations that we are using, for example, for delivering virtual classrooms or virtual workstations. And this was the typical operation that we did in the virtual world.
But on the same infrastructure we were able to develop, first, HPC in the virtual world, virtualization of the HPC resources for our researchers, and in the end an AI offering and AI software for our researchers. You can imagine our virtual infrastructure as a sort of whiteboard where we are able to design new solutions in a fast way without losing too much performance. And in the case of AI, we will see that the performance is almost the same as bare metal, but with all the flexibility that we needed in the COVID-19 world and in the future world, too.

>> So, a couple of things I want to get John's thoughts on as well: performance, and you mentioned hybrid and virtual. How do VMware and NVIDIA fit into all of this as you put it together? Because you bring up performance, and that's now table stakes; scale and performance are really on the table, everyone's looking at it. How do VMware and NVIDIA, John, fit in with the university's work?

>> Sure. So I think you're right: when it comes to mainstream enterprises beginning their initial foray into AI, performance and scale, and also ease of use and familiarity, are all things that come into play when an enterprise starts to think about it. And we have a history with VMware working on this technology. In 2019 we introduced our Virtual Compute Server with VMware, which allowed us to effectively virtualize the CUDA compute driver. At last year's VMworld in 2020, the CEOs of both companies got together and announced that we were going to bring our entire NVIDIA AI platform to the enterprise on top of vSphere. And we did that: starting in March this year, we finalized it with the introduction of VMware vSphere 7 Update 2 and, at the time, the early access of NVIDIA AI Enterprise, and we have now gone to production with both of those products. So customers like the University of Pisa are now using our production capabilities. Whenever you virtualize, in particular in something like AI where performance is really important, the first question that comes up is: does it work, and how quickly does it work? Or, from an IT audience, a lot of times you get the "how much did it slow down?" So we've worked really closely from an NVIDIA software perspective and a VMware perspective, and we really talk about NVIDIA AI Enterprise with vSphere 7 as optimized, certified, and supported. The net of that is that we've been able to run the standard industry benchmarks for single-node as well as multi-node performance with maybe a 2% degradation in performance, depending on the workload; of course it varies, but effectively you're able to trade that bit of performance for the accessibility, the ease of use, and even things like using vRealize Automation for self-service for the data scientists. And so that's how we've been pulling it together for the market.

>> Great stuff. Well, I've got to ask you: people have that reaction about the performance, and I think you're being polite in how you said that, because the expectation is kind of skeptical. So I've got to ask you, the impact of this is pretty significant: what is it that customers can do now that they couldn't do, or didn't feel they could do, before?
Because the expectation has always been, well, it works, but performance is always a concern. What's different now? What's the bottom-line impact, what can customers do now that they couldn't do before?

>> So the bottom-line impact is that AI is now accessible for the enterprise across their mainstream data center. Enterprises typically use consistent building blocks, like the Dell VxRail products, where they have to use servers that are a common standard across the data center. And now, with NVIDIA AI Enterprise and VMware vSphere, they're able to manage their AI in the same way that they're used to managing their data center today. So there's no retraining, there are no separate clusters, there isn't a shadow IT. This really allows an enterprise to deploy AI efficiently and cost-effectively, because there's essentially no performance degradation, without compromising what their data scientists and researchers are looking for. And the flip side is that for the data scientist and researcher, using some of the self-service automation that I spoke about earlier, they're able to get a virtual machine today that has maybe half a GPU; as their models grow and they do more exploring, they might get a full GPU, or two GPUs, in a virtual machine. And their environment doesn't change, because it's all connected to the back-end storage. So for the developer and the researcher it's seamless, and it's really a win both for IT and for the user. And again, the University of Pisa is doing some amazing things in terms of the workloads that they're running, and they are validating that performance.

>> Maurizio, weigh in on this; share your reaction to that. What can you do now that you couldn't do before? Could you share your experience?

>> Our experience is that, of course, if you go to your data scientists or researchers, the idea of sacrificing performance for flexibility is at the beginning not so well accepted. It's okay for the IT management; as John was saying, you have people that know how to deal with the virtual infrastructure, so nothing changes for them. But at the end of the day we were able to test with our data scientists and researchers and verify that the performance was almost the same, around 95% of the performance, for the workloads developed internally. So we are not dealing with benchmarks: we have workloads that are internally developed and applied to healthcare, a music generator, or some other strange projects that we have inside, and we were able to show that the performance in the virtual and bare-metal worlds was almost the same, with the addition that in the virtual world you are much more flexible. You are able to reconfigure everything very fast, and you are able to design solutions for your researchers in a more flexible and effective way. We were able to use the latest technologies from Dell Technologies and NVIDIA, from the latest PowerEdge servers to the latest GPUs from NVIDIA, the latest network cards from NVIDIA like the BlueField-2, to the latest switches, to set up an infrastructure that at the end of the day is our winning platform for our data scientists.

>> A great collaboration, congratulations. Exciting: get the latest and greatest, and get the new benchmarks out there, new playbooks, new best practices.
I do have to ask you, Maurizio, if you don't mind me asking: why look at virtualizing AI workloads? What's the motivation?

>> Oh, for the sake of flexibility. Because, you know, in the last couple of years the AI resources are never enough. If you go only for bare-metal installations, you are going into a world that is developing very fast, but of course you cannot afford all the bare-metal infrastructure that your data scientists are asking for. So we decided to integrate our virtual infrastructure with AI resources in order to be able to use them in different ways, in a more flexible way. Of course, we have two parallel worlds: we still have a bare-metal infrastructure and we are growing it, but at the same time we are growing our virtual infrastructure, because it's flexible, and because our staff are happy with how the platform behaves and they know how to deal with it, so they don't have to learn anything new. It's a sort of comfort zone for everybody.

>> I mean, no one ever got hurt virtualizing things; it makes things go better and faster, building on that for workloads. John, I've got to ask you, you're on the NVIDIA side, you see this really up close. Why do people look at virtualizing AI workloads? Is it the unification benefit? AI implies a lot of things: it implies you have access to data, it implies that silos don't exist, and that's hard. Is this real, are people actually looking at this? How is it working?

>> Yeah. So again, for all the benefits and productivity that AI brings, AI can be pretty complex: it's complex software to set up and to manage. And within NVIDIA AI Enterprise we're really focusing on ensuring that it's easier for organizations to use. For example, I mentioned that we introduced Virtual Compute Server, vCS, two years ago, and that has seen some really interesting adoption in some enterprise use cases. But what we found is that at the driver level it still wasn't accessible for the majority of enterprises. So what we've done is build upon that with NVIDIA AI Enterprise, and we're bringing in pre-built containers that remove some of the complexities. AI has a lot of open source components, and trying to ensure that all the open source dependencies are resolved, so that the AI developers, researchers, and data scientists can actually do their work, can be complex. So we've brought these pre-built containers that allow you to do everything from your initial data preparation and data science, using things like NVIDIA RAPIDS, to doing your training using PyTorch and TensorFlow, to optimizing those models using TensorRT, and then deploying them using what we call NVIDIA Triton Inference Server. We're really helping that AI workflow become accessible, something that an enterprise can manage as part of their common core infrastructure.
>> Having the performance and the tools available is just a huge godsend; people love that, it only makes them more productive, and again it scales with existing stuff. Okay, great stuff, great insight. I have to ask: what's next for this collaboration? This is one of those better-together situations, and it's working. Maurizio, what's next for your collaboration with Dell, VMware, and NVIDIA?

>> We will not stop here, for sure. We are just starting to work on new things, looking for new developments, looking for the next best things to come. You know, the digital world is something that is moving very fast, and we will not stop here, because the outcome of this work has been very big for our research groups. And, as John was saying, the fact that all the software stacks for AI are simplified is something that has been accepted very well. Of course, you can imagine, research is about developing new things, but for people that need an integrated workflow, the work that NVIDIA has done in packaging the software and developing containers, giving the end user the capability of running their workloads, is really something that some years ago was unbelievable. Now everything is really easy to manage.

>> John mentioned open source, obviously a big part of this. A quick follow-up, if you don't mind: are you going to share your results, so people can look at this and have an easier path to AI?

>> Oh yes, of course. All the work that is done at this level by the University of Pisa is there to be shared. So, as much as we have time to write it down, we are trying to find a way to share the results of the work that we are doing with our partners, Dell and NVIDIA. For sure it will be shared.

>> Excellent, and we'll get that link in the comments. John, your final thoughts on the collaboration with the University of Pisa and Dell, and where NVIDIA goes next?

>> Sure. So with the University of Pisa, we're absolutely grateful to Maurizio and his team for the work they're doing and the feedback they're sharing with us. We're learning a lot from them in terms of things we can do better and things that we can add to the product, so that's a fantastic collaboration. I believe that Maurizio has a session at VMworld, so if you want to learn about some of the workloads, they're doing things like music generation, COVID-19 research, and multi-node deep learning training; there's some really interesting work there, and we want to continue that partnership with the University of Pisa across all four of us: the university, NVIDIA, Dell, and VMware. And then on the tech side, for our enterprise customers, one of the things that we actually didn't speak much about is that, as I mentioned, the product is optimized, certified, and supported, and I think that support cannot be understated. As enterprises start to move into these new areas, they want to know that they can pick up the phone and call NVIDIA or VMware or Dell and get support for these new workloads as they're running them. We're also continuing to think beyond that; we spent a lot of time today on the developer side of things and developing AI.
But the flip side of that, of course, is that those AI apps, or AI-enhanced apps, are becoming available; pretty much every enterprise app today is adding AI capabilities, across all of our partners in the enterprise software space. So you can think of NVIDIA AI Enterprise as having a runtime component, so that as you deploy your applications into the data center, they automatically take advantage of the GPUs that you have there. And so we're seeing this future, as you're talking about the collaboration going forward, where the standard data center building block still remains something like a VxRail 2U server, but instead of just being CPU, storage, and RAM, they're all going to go with CPU, GPU, storage, and RAM. That's going to be the norm, and every enterprise application is going to be infused with AI and be able to take advantage of GPUs in that scenario.

>> Great stuff, AI for the enterprise. This is a great CUBE Conversation, and just the beginning; we'll be having more of these. Virtualizing AI workloads is real, and it impacts data scientists, compute, the edge, all aspects of the new environment we're all living in. John, great to see you; Maurizio, nice to meet you, all the way in Italy, looking forward to meeting in person, and good luck with your session. I just got a note here on the session: it's at VMworld, session 2263, I believe, so if anyone's watching, check that out. Love to hear more. Thanks for coming on, appreciate it.

>> Thanks for having us.

>> Thanks to you.

>> It's theCUBE Conversation. I'm John Furrier, your host. Thanks for watching. We'll talk to you soon.
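To ground the workflow John describes (data preparation with RAPIDS, training with PyTorch or TensorFlow, optimization with TensorRT, and deployment on Triton Inference Server), here is a minimal sketch of the final step: a client sending an inference request to a running Triton server over HTTP. It is illustrative rather than taken from the interview; the server address, the model name "resnet50", and the tensor names "INPUT__0" and "OUTPUT__0" are assumptions that depend on how a particular model repository is configured.

    # Illustrative sketch: query a Triton Inference Server that is already
    # serving a model, using the tritonclient HTTP API. Server URL, model name,
    # and tensor names are placeholders; they must match your deployment.
    import numpy as np
    import tritonclient.http as httpclient

    TRITON_URL = "localhost:8000"   # assumed Triton HTTP endpoint
    MODEL_NAME = "resnet50"         # hypothetical model in the repository

    client = httpclient.InferenceServerClient(url=TRITON_URL)
    assert client.is_server_live(), "Triton server is not reachable"

    # Build a dummy batch of one 224x224 RGB image. The tensor names must match
    # the model's config.pbtxt, so treat them as placeholders.
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
    infer_input = httpclient.InferInput("INPUT__0", list(batch.shape), "FP32")
    infer_input.set_data_from_numpy(batch)
    requested_output = httpclient.InferRequestedOutput("OUTPUT__0")

    response = client.infer(model_name=MODEL_NAME,
                            inputs=[infer_input],
                            outputs=[requested_output])
    scores = response.as_numpy("OUTPUT__0")
    print("Top class index:", int(scores.argmax()))

The same client code runs whether the Triton container was launched on bare metal or inside a vGPU-backed virtual machine, which is much of the appeal of keeping the AI stack as standard containers, as discussed above.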

Published Date : Oct 5 2021

SUMMARY :

John Furrier talks with John Fanelli, VP of Virtual GPU at NVIDIA, and Maurizio Davini, CTO of the University of Pisa, about AI for the enterprise. They discuss running AI as a standard, self-service workload on Dell servers with VMware vSphere, the launch of NVIDIA AI Enterprise with vSphere 7 Update 2, and the University of Pisa's experience virtualizing AI workloads at around 95% of bare-metal performance while gaining flexibility. They also cover the NVIDIA software stack, from RAPIDS and PyTorch or TensorFlow to TensorRT and Triton Inference Server, and what's next for the collaboration.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
NVIDIA | ORGANIZATION | 0.99+
University of Visa | ORGANIZATION | 0.99+
Maurizio | PERSON | 0.99+
Mauricio | PERSON | 0.99+
October 2021 | DATE | 0.99+
Dell | ORGANIZATION | 0.99+
Italy | LOCATION | 0.99+
John Finelli | PERSON | 0.99+
2019 | DATE | 0.99+
John Fanelli | PERSON | 0.99+
Adele | PERSON | 0.99+
2020 | DATE | 0.99+
University of Pisa | ORGANIZATION | 0.99+
three | QUANTITY | 0.99+
2% | QUANTITY | 0.99+
Palo Alto, California | LOCATION | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
Cuba | LOCATION | 0.99+
Vidia | ORGANIZATION | 0.99+
CTO University | ORGANIZATION | 0.99+
two days | QUANTITY | 0.99+
three days | QUANTITY | 0.99+
Nike | ORGANIZATION | 0.99+
March this year | DATE | 0.99+
both | QUANTITY | 0.99+
first question | QUANTITY | 0.99+
VM Ware | TITLE | 0.99+
four years | QUANTITY | 0.99+
both companies | QUANTITY | 0.99+
3000 staff | QUANTITY | 0.99+
last year | DATE | 0.99+
two years ago | DATE | 0.98+
Maurizio Davini | PERSON | 0.98+
VM Ware | ORGANIZATION | 0.98+
today | DATE | 0.97+
VX | COMMERCIAL_ITEM | 0.97+
GM | ORGANIZATION | 0.97+
four months | QUANTITY | 0.97+
VM ware | TITLE | 0.96+
two great guests | QUANTITY | 0.96+
one | QUANTITY | 0.95+
22 63 | OTHER | 0.95+
Dale | ORGANIZATION | 0.95+
M World | ORGANIZATION | 0.95+
two parallels | QUANTITY | 0.95+
four | QUANTITY | 0.94+
around 50 K students | QUANTITY | 0.93+
Jensen | PERSON | 0.93+
University of Peace | ORGANIZATION | 0.91+
first | QUANTITY | 0.91+
95% | QUANTITY | 0.9+
VM World | ORGANIZATION | 0.89+
VM World | EVENT | 0.89+
Ware | ORGANIZATION | 0.86+

Maurizio Davini & Kaushik Ghosh | CUBE Conversation, May 2021


 

(upbeat music) >> Hi, Lisa Martin here with theCUBE. You're watching our coverage of Dell Technologies World, the Digital Virtual Experience. I've got two guests with me here today. We're going to be talking about the University of Pisa and how it is leaning into all flash deal that is powered by Dell Technologies. One of our alumni is back, Maurizio Davini, the CTO of the University of Pisa. Maurizio, welcome back to theCUBE. >> Thank you. You're always welcome. >> Very excited to talk to you today. Kaushik Ghosh is here as well, The Director of Product Management at Dell Technologies. Kaushik, welcome to theCUBE. >> Thank you. >> So here we are at this virtual event again. Maurizio, you were last on theCUBE at VM world a few months ago, the virtual experience as well. But talk to our audience a little bit, before we dig into the technology and some of these demanding workloads that the University is utilizing, talk to me a little bit about your role as CTO and about the University. >> So my role as CTO at University of Pisa is regarding the data center operations and scientific computing support. It is the main occupation that I have. Then I support also, the technological choices That the University of Pisa is doing during the latest two or three years. >> Talk to me about something, so this is in terms of students, we're talking about 50,000 or so students, 3000 faculty and the campus is distributed around the town of Pisa. Is that correct, Maurizio? >> The University of Pisa is sort of a town campus in the sense that we have 20 departments that are located inside the medieval town, but due to the choices that University of Pisa has done in the last '90s, we are owner of a private fiber network connecting all our departments and all our (indistinct). And so we can use the town as a sort of white board to design new services, new kind of support for teaching and so on. >> So you've really modernized the data infrastructure for the University that was founded in the middle ages. Talk to me now about some of the workloads, Maurizio, that are generating massive amounts of data and then we'll get into what you're doing with Dell Technologies. >> Oh, so the University of Pisa has a quite old historian HPC, traditional HPC. So we are supporting the traditional workloads from CAE or engineering or chemistry or oil and gas simulations. Of course, during the pandemic year, last year especially, we have new kind of workload scan, some related to the fast movement of the HPC workload from let's say, traditional HPC to AI and machine learning. And also, they request to support a lot of remote activities coming from distance learning to remotize laboratories or stations or whatever, most elder in presence in the past. And so the impact either on the infrastructure or, and especially on the storage part, was significant. >> So you talked about utilizing the high performance computing environments for a while and for scientific computing and things, I saw a case study that you guys have done with Dell, but then during the pandemic, the challenge and the use case of remote learning brought additional challenges to your environment. From that perspective, how were you able to transfer your curriculum to online and enable the scientists, the physicists, the oil and gas folks doing research to still access that data at the speed that they needed to? >> You know, for what you got distance learning, of course, we were based on cloud services that were not provided internally by us. 
So we based on Microsoft services, on Google services and so on. But what regards internal support, scientific computing was completely remotized, either on support or experience, because how can I bring some examples? For example, laboratory activities were remotized. The access to the laboratories was (indistinct) remote as much as possible. We designed a special network to connect all the laboratories and to give the researcher the possibility of accessing the data on this special network. So a sort of a collector of data inside our university network. You can imagine that... Utilization, for example, was a key factor for us because utilization was, for us, a flexible way to deliver new services in an easy way, especially, if you have to administer systems for remote. So as I told you before about the network as a white board, also, the computer infrastructure was (indistinct) utilization treated as a sort of (indistinct). We were designing new services, either for interactive services, or especially for scientific computing. For example, we have an experience with utilization of HPC workload, storage and so on. >> Talk to me about the storage impact because as we know, we talk about these very demanding unstructured workloads, AI, machine learning, and those are difficult for most storage systems to handle. Maurizio, talk to us about why you leaned into all flash with Dell Technologies and talk to us a little bit about the technologies that you've implemented. >> So if I have to think about our storage infrastructure before the pandemic, I have to think about Isilon, because our HPC workloads was mainly based off Isilon as a storage infrastructure. Together, with some final defense system, as you can imagine, we were deploying in our homes. During the pandemic, but especially with the explosion of the AI, the blueprint of the storage requests changed a lot because what we had until then, and in our case, was an hybrid Isilon solution. Didn't fit so well for HB, for AI (indistinct) and this is why we started the migration. It was not really migration, but the sort of integration of the Power Scale or flash machine inside our environment, because then the Power Scale or flash, and especially, I hope in the future, the MVME support is a key factor for the storage, storage support. We already have experienced some of the MVME possibilities on the Power Max that we have here that we use (indistinct) and part for VDI support, but flash is the minimum and MVME is what we need to support in the right way the AI workloads. >> Lisa: Kaushik, talk to me about what Dell Technologies has seen. The optic the demand for this. As Maurizio said, they were using Isilon before, adding in Power Scale. What are some of the changing demands that Dell technologies has seen and how does technologies like Power Scale and the F900 facilitate these organizations being able to rapidly change their environment so that they can utilize and extract the value from data? >> Yeah, no, absolutely. Artificial intelligence is an area that continues to amaze me and personally, I think the potential here is immense. As Maurizio said, right? The data sets with artificial intelligence have grown significantly, and not only the data has become larger, the models, the AI models that are used have become more complex. For example, one of the studies suggests that for a modeling of natural language processing, one of the fields in AI, the number of parameters used could exceed like a trillion in a few years, right? 
So almost the size of a human brain. So not only that means that there's a lot of data to be processed, but the process stored ingested, but probably has to be done in the same amount of time as before or perhaps even a smaller amount of time, right? So larger data, same time, or perhaps even a smaller amount of time. So, absolutely, I agree. For these types of workloads, you need a storage that gives you that high-performance access, but also being able to store that data economically. >> Lisa: And Kaushik, how does Dell technologies deliver that? The ability to scale the economics. What's unique and differentiated about Power Scale? >> So Power Scale is our all flash system. It uses some of the same capabilities that Isilon products used to offer. The 1 FS file system capabilities. Some of the same capabilities that (indistinct) has used and loved in the past. So some of those same capabilities are brought forward now. on this Power Scale platform. There are some changes, like for example, our new Power Scale platform supports NVDR GPU direct, right? So for artificial intelligence workloads, you do need these GPU capable machines and Power Scale supports those high-performance GPU direct machines through the different technologies that we offer, and the Power Scale F 900, which we are going to launch very soon is our best highest performance all flash and the most economical all flash to date. So it not only is our fastest, but also offers the most economical way of storing the data. So ideal for these type of high-performance workloads, like AIML, deep learning and so on. >> Excellent. Maurizio, talk to me about some of the results that the University is achieving so far. I did read a three X improvement in IO performance. You were able to get nearly a hundred percent of the curriculum online pretty quickly, but talk to me about some of the other impacts that Dell technologies is helping the University to achieve. >> Oh, we are an old Dell customer and if you give a look what we have inside our data centers, we typically joking. We define as a sort of Dell technologies supermarket in the sense that the great part of our servers storage environment comes from Dell technology. Several generations of Power Edge servers, Power Max, Isilon, Power Scale, Power Sore. So we are using a lot of Dell technologies here, and of course, in the past, our traditional workloads were well supported by Dell technologies. And Dell technologies is driving us versus what we call the next generation workloads, because they are accompanying us in the transition versus the next generation computing, but to hope to adhere and (indistinct) to our researchers are looking for, because if I had to give a look to what we are doing mostly here, healthcare workloads, deep learning, data analysis, image analysis, same major extraction. Everything have to be supported, especially from the next generation servers, typically to keep with GPUs. This is why GPU direct is so important for us, but also, supported on the networking side, because the speed of the storage must be tied to the next generation networking. Low latency, high performance, because at the end of the day, you have to bring the data to the storage room, and typically, you do it by importing it. So they're one of the low latency, high performance interconnections. Zones is also a side effect of this new (indistinct). And of course, Dell Technologies is with us in this transition. 
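As a rough illustration of why storage performance matters for the GPU-driven deep learning workloads discussed here, the sketch below shows a typical training data pipeline in Python with PyTorch, where several worker processes read samples from a shared file system mount. The mount point, file layout and model are hypothetical examples, not details of the University of Pisa environment or of any Dell product.

```python
# Illustrative sketch only; the mount point, file layout and model are
# hypothetical. Many DataLoader worker processes read samples from a shared
# file system mount (for example an NFS export), so storage throughput and
# latency directly affect how busy the GPUs stay.
import glob
import torch
from torch.utils.data import Dataset, DataLoader

class TensorFileDataset(Dataset):
    """Loads pre-serialized samples saved as .pt files under a shared path."""
    def __init__(self, root: str):
        self.files = sorted(glob.glob(f"{root}/*.pt"))
    def __len__(self):
        return len(self.files)
    def __getitem__(self, idx):
        sample = torch.load(self.files[idx])   # one read per sample
        return sample["x"], sample["y"]        # assumed dict layout

loader = DataLoader(
    TensorFileDataset("/mnt/scratch/train"),   # hypothetical mount point
    batch_size=64,
    num_workers=8,      # parallel readers hitting the shared file system
    pin_memory=True,
)

device = "cuda" if torch.cuda.is_available() else "cpu"
for x, y in loader:
    x = x.to(device, non_blocking=True)
    y = y.to(device, non_blocking=True)
    # forward/backward pass would go here
    break
```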
>> I loved how you described your data centers as a Dell Technologies supermarket. Maybe a different way of talking about a center of excellence. Kaushik, I want to ask you about... I know that the University of Pisa is a SCOE for Dell. Talk to me about, in the last couple of minutes we have here, what that entails and how Dell helps customers become a center of excellence. >> Yeah. So Dell, like Maurizio has talked about, has a lot of the Dell products today. And in fact, he mentioned about the powered servers, the Power Scale F 900 is actually based on a powered server. So you can see. So a lot of these technologies are sort of interlinked with each other. They talk to each other, they work together and that sort of helps customers manage their entire ecosystem life cycle, data life cycle together versus as piece spots, because we have solutions that solve all aspects of our customer, like Maurizio's needs, right? So, yeah, I'm glad Maurizio is leveraging Dell and I'm happy we are able to help Maurizio solve all his use cases and when. >> Lisa: Excellent. Maurizio, last question, are you going to be using AI machine learning powered by Dell to determine if the tower of Pisa is going to continue to lean or if it's going to stay where it is? >> The leaning tower is an engineering miracle. Some years ago, an incredible engineering worker was able to fix the leaning for a while, and let's hope that the tower of Pisa stay there because it's one of our beauty that you can come to visit. >> And that's one part of Italy I haven't been to. So post pandemic, I got to add that to my travel plans. Maurizio and Kaushik, it's been a pleasure talking to you about how Dell is partnering with the University of Pisa to really help you power AI machine learning workloads to facilitate many use cases. We are looking forward to hearing what's next. Thanks for joining me this morning. >> Kaushik: Thank you. >> Maurizio: Thank you. For my guests, I'm Lisa Martin. You're watching theCUBE's coverage of Dell technologies world, the digital event experience. (upbeat music)

Published Date : Apr 27 2021

SUMMARY :

Lisa Martin of theCUBE talks with Maurizio Davini, CTO of the University of Pisa, and Kaushik Ghosh, Director of Product Management at Dell Technologies, about how the university is leaning into all-flash storage for demanding workloads. They cover the university's private fiber network connecting its 20 departments, its long history in traditional HPC, the pandemic-driven shift toward AI, machine learning and remote laboratories, and the move from a hybrid Isilon environment to the all-flash PowerScale platform with NVMe and GPUDirect support, including the new PowerScale F900. Kaushik explains how growing AI data sets and models demand both high-performance access and economical storage, and Maurizio describes how Dell technologies from PowerEdge servers to PowerMax support next-generation workloads in healthcare, deep learning and image analysis.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Maurizio | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Kaushik | PERSON | 0.99+
Kaushik Ghosh | PERSON | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
University of Pisa | ORGANIZATION | 0.99+
Maurizio Davini | PERSON | 0.99+
Lisa | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Dell | ORGANIZATION | 0.99+
Pisa | LOCATION | 0.99+
20 departments | QUANTITY | 0.99+
last year | DATE | 0.99+
3000 faculty | QUANTITY | 0.99+
May 2021 | DATE | 0.99+
two guests | QUANTITY | 0.99+
Italy | LOCATION | 0.99+
One | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
Power Scale F 900 | COMMERCIAL_ITEM | 0.99+
Isilon | ORGANIZATION | 0.99+
today | DATE | 0.99+
two | QUANTITY | 0.99+
one | QUANTITY | 0.98+
three years | QUANTITY | 0.98+
F900 | COMMERCIAL_ITEM | 0.96+
pandemic | EVENT | 0.96+
University of Pisa | ORGANIZATION | 0.95+
about 50,000 | QUANTITY | 0.94+
Power Max | COMMERCIAL_ITEM | 0.93+
theCUBE | ORGANIZATION | 0.92+
last '90s | DATE | 0.91+
Power Edge | COMMERCIAL_ITEM | 0.89+
Power Scale | TITLE | 0.87+

Jon Siegal, Dell Technologies | CUBE Conversation 2021


 

(bright upbeat music) >> Welcome to theCUBE, our coverage of Dell Technologies World, the Digital Experience continues. I have a long-time guest coming back, joining me in the next segment here. Jon Siegal is back, the Vice President of Product Marketing at Dell Technologies. Jon, it's good to see you, welcome back to the program. >> Thanks Lisa, always great to be on. >> We last spoke about six months ago and here we are still at home. >> I know. >> But there has been no slowdown whatsoever in the last year. We were talking to you a lot about Edge last time but we're going to talk about PowerStore today. It's just coming up on its one year anniversary. You launched it right when the pandemic happened. >> That's right. >> Talk to me about what's happened in the last year with respect to PowerStore. Adoption, momentum, what's going on? >> Yeah, great, listen, what a year it's been, right? But certainly for PowerStore especially, I mean, customers and partners around the world have really embraced PowerStore, specifically really it's modern architecture. What many people may not know is this is actually the fastest ramping new architecture we've had in all of Dell's history, which is quite a history of course. And we saw 4 X quarter over quarter growth in the most recent quarter. And you know, in terms of shipments, we've shipped well over 400 petabytes of PowerStore, you know, so special thanks to lots of our customers around the world and industries like education, gaming, transportation, retail. More than 60 countries, I think 62 countries now. They include customers like Columbia Southern University, Habib Bank, Real Page, the University of Pisa and Ultra Leap, just to name a few. And to give you a sense of how truly game changing it's been in the market is that approximately 20% of the customers with PowerStore are new to Dell, new to Dell Technologies. And we've tripled the number of wins against some of our key competitors in just the last quarter as well. So look, it's been quite a year, like you said and we're not stopping there. >> Yeah, you must have to wear a neck brace from that whiplash of moving so quickly. (both laughing) But that's actually a good problem to have. >> It is. >> And curious about, is it 20% of the PowerStore customers are net new to Dell? >> Yeah. >> Interesting that you've captured that much in a very turbulent year. Any industries in particular that you see as really being transformed by the technology? >> Yeah, it's a great question. I think just like we're bringing a disruptive technology to market, there's a lot of industries out there that are disrupting themselves as well, right, and how they transform, particularly with, you know, in this new era during the pandemic. I think, I can give you a great example. One of the new capabilities of PowerStore is AppsON just for those that aren't familiar. AppsON is the ability for PowerStore to run apps directly on the appliance, good name, right? And it's thanks to a built-in VMware ESXi hypervisor. And where we've seen really good traction with AppsON, is in storage intensive applications at the edge. And that brings me to my example. And this one's in retail. And you know, of course just like every industry I think it's been up-ended in the past year. There's a large supermarket chain in northern China that is new to Dell. During the pandemic they needed to fast-track the development of a smart autonomous retail system in all their stores, so that their customers could make their purchases via smartphone app. 
And again, just limiting the essentially the person to person interaction during the pandemic and this required a significant increase in transaction processing to get to the store locations that they didn't have equipment for before, as well as support for big data analytics applications to understand the customer behavior that's going on in real-time. So the net result is they chose PowerStore. They were new to Dell and they deployed it in their stores and delivered a seamless shopping experience via smartphone apps. The whole shopping experience was completely revolutionized. And I think this is really a great example of again, how the innovations that are in PowerStore are enabling our customers to really rethink how they're transacting business. >> Well, enabling the supermarkets to be the edge but also in China where everything started, so much, the market dynamics are still going on, but how quickly were they able to get PowerStore up and running and facilitate that seamless smartphone shopping experience? >> It was only weeks, only weeks, weeks from beginning to getting them up to speed. I mean, we've had great coverage, great support. And again, they embraced, I mean, they happened to leverage the AppsON capabilities, so they were able to run some of their applications directly on the appliance and they were able to get that up and running very quickly. And they were already a VMware customer as well. So they were already familiar with some of the tools and the integration of the VMware. And again, that's also been a sweetspot for this particular offer. >> Okay, got it. So a lot in it's first year. You said 4 X growth, over 60 countries, 400 petabytes plus shipped, a lot of new net new customers. What is new? What are you announcing that's new and that's going to take that up even a higher level? >> That's right. We're always going to up the ante, right? We're always going to, we can't rest on our laurels for too long. Look, we're very excited to share what's new for PowerStore. And that is one of the reasons we're here of course. I can break it down into two key highlights. First is a major software update that brings more enterprise innovation, more speed, more automation in particular to both new and existing customers. And we're also excited to announce a new lower cost entry model for the PowerStore family called the PowerStore 500. And this offers an incredible amount of enterprise class storage capabilities, much of which I have talked about and will talk more about today, for the price. And the price itself is what's going to surprise some folks. It starts as low as 28,000 US street price which is pretty significant, you know, in terms of a game changer, we think, in this industry. >> So let's talk about the software update first. You've got PowerStore 2.0, happy birthday to your customers who are going to take advantage of this. >> That's right. >> Kind of talk me through what some of the technological advancements are that your customers are going to be able to leverage? >> That's a great point. Yeah, so from a software perspective I like how you said that, happy birthday, yeah so all of our, just to be clear from a software update perspective, all of our existing customers are going to get this as a simple free non-disruptive update. And this is a commitment we've had to our customers for some time. 
And really it's the mantra if you will, of PowerStore, which is all about ensuring that our customers can encounter our very flexible platform that will keep giving them the latest and greatest. So really a couple of things I want to highlight from PowerStore that are brand new. One is we're giving a speed boost to the entire PowerStore lineup. Customers now, existing customers, you get up to 25% faster, mixed workload performance which is incredible, right off the bat. Secondly, we're enabling our customers to take full advantage of NVME now across the data center with the option of running NVME over fiber channel. And this again requires just a simple software update and no additional hardware if they already have 32 gig capable switches and HBAs on-prem. We've also made our unique AppsON feature, which I just talked about in the China example, we've made that more powerful and with scale out. This means more aggregate power, more aggregate capacity and it makes it even more ideal now for storage intensive apps to run at the edge with PowerStore. Another capability that's been very popular with our customers is our data reduction specifically our intelligent Dido which is always on and automated. And now what it does is it enables customers to boost performance while still guaranteeing the four to one data reduction that we have, at the same time. So just to give a quick example, when the system is under extreme IO, duress if you will, it automatically prioritize that IO versus the DDUP itself and provides a 20% turbo boost if you will, of performance boost for the applications running. All this is done automatically, zero management effort, zero impact to the data reduction guarantee of four to one that we already have in place. And then the last highlight I'd like to bring up is, last but not least, is one we're really proud of is the ability for our customers to now take more cost advantage, if you will, cost effective advantage of SCM or storage class memory. PowerStore now differentiates between SCM drives and NVME drives within the same chassis. So they can use SCM as a high-performance layer, if you will with as few as one drive, right? So they don't have to populate the whole chassis, they can use just one SCM drive for cost-effectiveness, for embedded data access. And this actually helps reduce the workload latency by up to 15%. So, another great example on top of NVME that I already mentioned, of how PowerStore is leading the practical adoption of next generation technologies. >> Are you seeing with the lower cost PowerStore 500, is that an opportunity for Dell to expand into the midsize market and an opportunity for those smaller customers to be able to take advantage of this technology? >> Absolutely, yeah. So the PowerStore finder, which we're really excited about introducing does exactly what you just said, Lisa. It is going to allow us to bring PowerStore and the experience of PowerStore to a broad range of businesses, a much broader range of edge use cases as well. And we're really excited about that. It's an incredible amount of enterprise storage class performance, as I mentioned, and functionality for the price that is again, 28,000 starting. And this includes all of the enterprise software capabilities I've been talking about. The ability to cluster, four to one data reduction guarantee, anytime upgrades. And to put this in context, a single 2U appliance, the PowerStore 500 supports up to 2.4 million SQL transactions per minute. 
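As a back-of-the-envelope illustration of two of the figures quoted in this conversation, the short Python snippet below works through the 4:1 data reduction guarantee and converts the quoted transactions-per-minute number into a per-second rate. The raw capacity value is a made-up example, not a PowerStore specification.

```python
# Back-of-the-envelope arithmetic for two figures quoted in this conversation.
# The raw capacity value is a made-up example, not a PowerStore specification.
raw_usable_tb = 25                 # hypothetical usable flash capacity
reduction_ratio = 4                # the 4:1 data reduction guarantee
print(f"Effective capacity at 4:1: {raw_usable_tb * reduction_ratio} TB")

tx_per_minute = 2_400_000          # "up to 2.4 million SQL transactions per minute"
print(f"Roughly {tx_per_minute / 60:,.0f} transactions per second")
```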
I mean, this thing packs a punch, like no other, right? And it's a great fit for stand-alone or edge deployments in virtually every industry, we've mentioned retail already also healthcare, manufacturing, education and more. It's an offering that's really ideal for any solution that requires an optimization of price/performance, small footprint and effortless automation. And I can tell you, it's not just customers that are excited about this, as you can imagine our channel partners, they can't wait to get their hands on this either. >> Was just going to ask you about the channel. >> It is going to help them reach new sets of customers that they never had before. You mentioned midsize, but also in addition to that, it's just going to open it up to all new sets of use cases as well. So I'm really excited to see the creativity from our channel partners and customers and how they adopt and use the PowerStore 500 going forward. >> Tell me about some of those new use cases that it's going to open up. We've seen so many new things in the last year and such acceleration. What are some of the new use cases that this is going to help unlock value for? >> Yeah, again, I think it's going to come down a lot to the edge in particular, as well as mid-size, it can run, again, this can run storage, intensive applications. So it's really about coming down to a price point that I think the biggest example will be mid-sized businesses that now, it's now affordable to. That they weren't able to get this enterprise class capabilities in the past more than anything else. Cause it's all the same capabilities that I've mentioned but it allows them to run all types of things. It could be, they could run, new next-generation intensive data, intensive databases. They can run VDI, they can run SQL, it does, essentially more than anything else makes existing use cases more accessible to mid-sized businesses. >> Got it, okay. So, so much momentum going on in the first year. A lot of that you're souping it up with this your new software, we talked about the new mid-size enterprise version PowerStore 500. What else can we expect from PowerStore, the rest of calendar 2021? >> Yeah, I think lots of things. So first of all we're so pleased at the amount of commitment to innovation that we've had over the past year. We're going to continue to work very closely with VMware to drive more and more innovation and enhancements with capabilities like AppsON that I talked about, and VM-ware or (indistinct) which is a key enabler for that. We're also committed to continuing to lead the industry in the adoption of modern technologies. I gave some good examples today of NVME and AppsON and SCM, storage class memory, and customers can expect that continued commitment. Look, we've designed PowerStore from the ground up to be very flexible so that it can be enhanced and improved non-disruptively. And I think we did that with this release. We proved that and no one can predict the future, clearly, it's been a crazy year. And so businesses need storage that's going to be flexible with them and grow with them and evolve with them. And customers can expect that from PowerStore. And we plan on doing just that. >> So customers can, that are interested can go direct to Dell. They can also go through your huge channel, you said, in terms of those customers that are thinking about it maybe adding to the percentage of new customers. What's your advice on them in terms of next steps? 
>> Yeah, next steps is, you know, I got to say this, we've done, it's crazy, we've done over 20,000 demos of PowerStore in one year, no joke. And you know, it's a new world. And so the next step is to reach out to Dell. We'd love to showcase this through a demo, give them whether it's a remote experience that way or remote proof of concept but yeah, reach out to Dell, your local rep or local channel partner and we'd love to show you what's possible more than anything else and look, we're really proud of what we've accomplished here. Just as impressive as these updates, I must say, is that in many instances, the team that brought this to market, the engineering team, they did this just like we're doing today, right? Over Zoom, remotely, while balancing life and work. So I just also want to thank the team for their commitment to delivering innovation to our customers. It hasn't wavered at all and I want to thank our top notch team. >> Right, an amazing amount of work done. You've had a very busy year and glad that you're well and healthy and been as successful with PowerStore. We can't wait to see in the next year those numbers that you shared even go up even more. Jon, thank you for joining us >> Looking forward to it. and sharing what's new with PowerStore. We appreciate your time. >> Always a pleasure, Lisa. >> Likewise >> Look forward to talking to you soon. >> Yeah >> Take care. >> For Jon Siegal, I'm Lisa Martin, you're watching theCUBE's coverage of Dell Technologies World, a Digital Experience. (slow upbeat music)

Published Date : Apr 20 2021

SUMMARY :

Lisa Martin of theCUBE talks with Jon Siegal, Vice President of Product Marketing at Dell Technologies, about PowerStore's first year and what is new. Jon reports that PowerStore is the fastest-ramping new architecture in Dell's history, with 4X quarter-over-quarter growth, more than 400 petabytes shipped, customers in more than 60 countries, and roughly 20% of PowerStore customers new to Dell. He describes the AppsON capability, including a supermarket chain in northern China that used it to deliver a smartphone-based shopping experience, and outlines the PowerStore 2.0 software update: up to 25% faster mixed workloads, NVMe over Fibre Channel, scale-out AppsON, smarter always-on data reduction with a 4:1 guarantee, and storage class memory support that cuts workload latency by up to 15%. He also introduces the lower-cost PowerStore 500, starting around 28,000 US dollars street price and supporting up to 2.4 million SQL transactions per minute, aimed at midsize businesses and edge deployments.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa | PERSON | 0.99+
Jon | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
20% | QUANTITY | 0.99+
Jon Siegal | PERSON | 0.99+
Habib Bank | ORGANIZATION | 0.99+
Dell | ORGANIZATION | 0.99+
China | LOCATION | 0.99+
Columbia Southern University | ORGANIZATION | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
32 gig | QUANTITY | 0.99+
last year | DATE | 0.99+
Ultra Leap | ORGANIZATION | 0.99+
Real Page | ORGANIZATION | 0.99+
University of Pisa | ORGANIZATION | 0.99+
First | QUANTITY | 0.99+
PowerStore | ORGANIZATION | 0.99+
400 petabytes | QUANTITY | 0.99+
28,000 | QUANTITY | 0.99+
last quarter | DATE | 0.99+
northern China | LOCATION | 0.99+
next year | DATE | 0.98+
over 60 countries | QUANTITY | 0.98+
today | DATE | 0.98+
both | QUANTITY | 0.98+
first year | QUANTITY | 0.98+
SQL | TITLE | 0.98+
More than 60 countries | QUANTITY | 0.98+
one | QUANTITY | 0.97+
up to 15% | QUANTITY | 0.97+
one year | QUANTITY | 0.97+
over 20,000 demos | QUANTITY | 0.97+
up to 25% | QUANTITY | 0.97+
PowerStore | TITLE | 0.97+
62 countries | QUANTITY | 0.97+
One | QUANTITY | 0.97+
four | QUANTITY | 0.96+
approximately 20% | QUANTITY | 0.96+
Secondly | QUANTITY | 0.96+
one drive | QUANTITY | 0.96+
first | QUANTITY | 0.95+
up to 2.4 million | QUANTITY | 0.95+
over 400 petabytes | QUANTITY | 0.94+
PowerStore 500 | COMMERCIAL_ITEM | 0.94+
pandemic | EVENT | 0.93+
past year | DATE | 0.93+
VMware | ORGANIZATION | 0.92+
one year anniversary | QUANTITY | 0.91+
AppsON | TITLE | 0.9+
zero | QUANTITY | 0.89+
rStore | TITLE | 0.89+
single | QUANTITY | 0.89+

Adi Krishnan & Ryan Waite | AWS Summit 2014


 

>>Hey, welcome back everyone. We're here live here in San Francisco for Amazon web services summit. This is the smaller event compared to reinvent the big conference in Vegas, which we were broadcasting live. I'm John furry, the founder's SiliconANGLE. This is the cube. Our flagship program where we go out to the events district to see live from the noise and a an Amazon show would not be complete without talking to the Amazon guys directly about what's going on under the hood. And our next guest is ADI Krishnan and Ryan Wade have run the Canisius teams. Guys, welcome to the cube. So we, Dave Vellante and I was not here unfortunately. He has another commitment but we were going Gaga over the says we'd love red shift in love with going with the data. I see glaciers really low cost options, the store stuff, but when you start adding on red shift and you know can, he says you're adding in some new features that really kind of really pointed where the market's game, which is I need to deal with real time stuff. >>I'll need to deal with a lot of data. I need to manage it effectively at a low latency across any work use case. Okay. So how the hell do you come up with an ISA? Give us the insight into how it all came together. We'd love the real time. We'd love how it's all closing the loop if you will for developer. Just take us through how it came about. What are some of the stats now post re-invent share with us will be uh, the Genesis for Canisius was trying to solve our metering problem. The metering problem inside of AWS is how do we keep track with how our customers are using our products. So every time a customer does a read out of dynamo DB or they read a file out of S3 or they do some sort of transaction with any of our products, that generates a meeting record, it's tens of millions of records per second and tens of terabytes per hour. >>So it's a big workload. And what we were trying to do is understand how to transition from being a batch oriented processing where we using large hitting clusters to process all that data to a continuous processing where we could read all of that data in real time and make decisions on that data in real time. So you basically had created an aspirin for yourself is Hey, a little pain point internally, right? Yeah. It's kind of an example of us building a product to solve some of our own problems first and then making that available to the public. Okay. So when you guys do your Amazon thing, which I've gotten to know about it a little bit, the culture there, you guys kind of break stuff, kind of the quote Zuckerberg, you guys build kind of invented that philosophy, you know stuff good. Quickly iterating fast. So you saw your own problem and then was there an aha moment like hell Dan, this is good. We can bring it out in the market. What were customers asking for at the same time was kind of a known use case. Did you bring it to the market? What happened next? >>We spend a lot of time talking to a lot of customers. I mean that was kind of the logistical, uh, we had customers from all different sorts of investigative roles. Uh, financial services, consumer online services from manufacturing conditional attic come up to us and say, we have this canonical workflow. This workflow is about getting data of all of these producers, uh, the sources of data. They didn't have a way to aggregate that data and then driving it through a variety of different crossing systems to ultimately light up different data stores. 
Are these data source could be native to AWS stores like S3 time would be be uh, they could be a more interesting, uh, uh, higher data warehousing services like Gretchen. But the key thing was how do we deal with all this massive amount of data that's been producing real time, ingested, reliably scale it elastically and enable continuous crossing in the data. >>Yeah, we always loved the word of last tickets. You know, a term that you guys have built your business around being elastic. You need some new means. You have a lot of flexibility and that's a key part of being agile. But I want you guys at while we're here in the queue, define Kenny SIS for the folks out there, what the hell is it? Define it for the record. Then I have some specific questions I want to ask. Uh, so Canisius is a new service for processing huge amounts of streaming data in real time. Shortens and scales elastically. So as your data volume increases or decreases the service grows with you. And so like a no JS error log or an iPhone data. This is an example of this would be example of streaming. Yeah, exactly. You can imagine that you were tailing a whole bunch of logs coming off of servers. >>You could also be watching event streams coming out of a little internet of things type devices. Um, one of our customers we're talking about here is a super cell who's capturing in gain data from their game, Pasha, the plans. So as you're playing clash of the plans, you're tapping on the screen. All of that data is captured in thesis and then processed by my super Supercell. And this is validated. I mean obviously you mentioned some of the use cases you needed of things, just a sensor network to wearable computers or whatever. Mobile phones, I'll see event data coming off machines. So you've got machine data, you've got human data, got application data. That's kind of the data sets we're seeing with Kinesis, right? Traverse set. Um, also attraction with trends like spark out of Berkeley. You seeing in memory does this kind of, is this in your wheelhouse? >>How does that all relate to, cause you guys have purpose-built SSDs now in your new ECQ instances and all this new modern gear we heard in the announcements. How does all the in-memory stuff affect the Canisius service? It's a great question. When you can imagine as Canisius is being a great service for capturing all of that data that's being generated by, you know, hundreds of thousands or millions of sources, it gets sent to Canisius where we replicated across three different availability zones. That data is then made available for applications to process those that are processing that data could be Hadoop clusters, they could be your own Kaloosas applications. And it could be a spark cluster. And so writing spark applications that are processing that data in real time is a, it's a great use case and the in memory capabilities and sparker probably ideal for being able to process data that's stored in pieces. >>Okay. So let's talk about some of the connecting the dots. So Canisius works in conjunction with what other services are you seeing that is being adopted most right now? Now see I mentioned red shift, I'm just throwing that in there. I'll see a data warehousing tool seeing a lot of business tells. So basically people are playing with data, a lot of different needs for the data. So how does connect through the stack? 
I think they are the number one use case we see is customers capturing all of this data and then archiving all of it right away to S3 just been difficult to capture everything. Right. And even if you did, you probably could keep it for a little while and then you had to get, do you have to get rid of it? But, uh, with the, the prices for us three being so low and Canisius being so easy to capture tiny rights, these little tiny tales of log data, they're coming out of your servers are little bits of data coming off of mobile devices capture all of that, aggregate it and put it in S3. >>That's the number one use case we see as customers are becoming more sophisticated with using Kinesis, they then begin to run real time dashboards on top of Kinesis data. So you could, there's all the data into dynamo DB where you could push all that data into even something like Redshift and run analytics on top of that. The final cases, people in doing real time decision making based on PISA. So once you've got all this data coming in, putting it into a dynamo DB or Redshift or EMR, you then process it and then start making decisions, automated decisions that take advantage of them. So essentially you're taking STEM the life life cycle of kind of like man walking the wreck at some point. Right? It's like they start small, they store the data, usually probably a developer problem just in efficiencies. Log file management is a disaster. >>We know it's a pain in the butt for developers. So step one is solve that pain triage, that next step is okay I'm dashboard, I'm starting to learn about the data and then three is more advanced like real time decision making. So like now that I've got the data coming in in real time and not going to act. Yeah, so when I want to bring that up, this is more of a theoretical kind of orthogonal conversation is where you guys are basically doing is we look, we like that Silicon angles like the point out to kind of what's weird in the market and kind of why it's important and that is the data things. There's something to do with data. It really points to a new developer. Fair enough. And I want to give you guys comments on this. No one's really come out yet and said here's a development kit or development environment for data. >>You see companies like factual doing some amazing stuff. I don't know if you know those guys just met with um, new Relic. They launched kind of this data off the application. So you seeing, you seeing what you guys are doing, you can imagine that now the developer framework is, Hey I had to deal with as a resource constraint so you haven't seen it. So I want to get your thoughts. Do you see that happening in that direction? How will data be presented to developers? Is it going to be abstracted away? Will there be development environments? Is it matter? And just organizing the data, what's your vision around? So >>that's really good person because we've got customers that come up to us and say I want to mail real time data with batch processing or I have my data that is right now lots of little data and now I want to go ahead and aggregate it to make sense of it over a longer period of time. And there's a lot of theory around how data should be modeled, how we should be represented. But the way we are taking the evolution set is really learning from our customers and customers come up and say we need the ability to capture data quickly. But then what I want to do is apply my existing Hadoop stack and tools to my data because then you won't understand that. 
And as a response to that classroom demand, uh, was the EMR connect. Somehow customers can use say hi queries or cascading scripts and apply that to real time data. That can means is ingesting. Another response to pass was, was the, that some customers that would really liked the, the, the stream processing construct a storm. And so on, our step over there was to say, okay, we shipped the Canisius storm spout, so now customers can bring their choice of matter Dame in and mail back with Canisius. So I think the, the short answer there right now is that, >>you know, it's crazy. It's really early, right? I would also add like, like just with, uh, as with have you, there's so many different ways to process data in the real time space. They're going to be so many different ways that people process that data. There's never going to be a single tool that you use for processing real time data. It's a lot of tools and it adapts to the way that people think about data. So this also brings us back to the dev ops culture, which you guys essentially founded Amazon early in the early days and you know I gotta give you credit for that and you guys deserve it. Dev ops was really about building from the ground good cloud, which post.com bubble. Really the thing about that's Amazon's, you've lived your own, your own world, right? To survive with lesson and help other developers. >>But that brings up a good point, right? So okay, data's early and I'm now going to be advancing slowly. Can there be a single architecture for dealing with data or is it going to be specialized systems? You're seeing Oracle made some mates look probably engineered systems. You seeing any grade stacks work? What's the take on the data equation? I'm not just going to do because of the data out the internet of things data. What is the refer architecture right now? I think what we're going to see is a set of patterns that we can do alone and people will be using those patterns for doing particular types of processing. Uh, one of the other teams that I run at is the fraud detection team and we use a set of machine learning algorithms to be able to continuously monitor usage of the cloud, to identify patterns of behavior which are indicative of fraud. >>Um, that kind of pattern of use is very different than I'm doing clickstream analysis and the kind of pattern that we use for doing that would naturally be different. I think we're going to see a canonical set of patterns. I don't know if we're going to see a very particular set of technologies. Yeah. So that brings us back to the dev ops things. So how do I want to get your take on this? Because dev ops is really about efficiencies. Software guys don't want to be hardware guys the other day. That's how it all started. I don't want to provision the network. I don't want a stack of servers. I just want to push code and then you guys have crazy, really easy ways to make that completely transparent. But now you joke about composite application development. You're saying, Hey, I'm gonna have an EMR over here for my head cluster and then a deal with, so maybe fraud detection stream data, it's going to be a different system than a Duke or could be a relational database. >>Now I need to basically composite we build an app. That's what we're talking about here. Composite construction resource. Is that kind of the new dev ops 2.0 maybe. So we'll try to tease out here's what's next after dev ops. I mean dev ops really means there's no operations. 
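The ingestion half of that canonical workflow, many producers pushing lots of small records into a stream that tools such as the EMR connector or the storm spout then consume, looks roughly like the sketch below. It uses the AWS SDK for Python (boto3), which is newer than the tooling discussed in this 2014 interview; the region, stream name and log lines are hypothetical, not details from the conversation.

```python
# Minimal producer sketch; the region, stream name and log lines are
# hypothetical. Each small log record is pushed into a Kinesis stream with
# put_record, the "lots of little writes" ingestion pattern described above.
import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def put_log_line(line: str, host: str) -> None:
    record = {"host": host, "ts": time.time(), "line": line}
    kinesis.put_record(
        StreamName="web-server-logs",            # hypothetical stream
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=host,                       # spreads hosts across shards
    )

if __name__ == "__main__":
    # In practice this would tail a real log file; three sample lines here.
    for line in ["GET /index.html 200", "GET /missing 404", "POST /api 201"]:
        put_log_line(line, host="web-01")
```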
And how does a developer deal with these kinds of complex environments like fraud detection, maybe application here, a container for this bass. So is it going to be fully composite? Well, I don't know if we run the full circuit with the dev ops development models. It's a great model. It's worked really well for a number of startups. However, making it easy to be able to plug different components together. I get just a great idea. So, like as ADI mentioned just a moment ago, our ability to take data and Kinesis and pump that right into a elastic MapReduce. >>It's great. And it makes it easy for people to use their existing applications with a new system like pieces that kind of composing of applications. It's worth well for a long time. And I think you're just going to see us continuing to do more and more of that kind of work. So I'm going to ask both of you guys a question. Give me an example of when something broke internally. This is not in a sound, John, I don't go negative here, but you got your, part of your culture is, is to move fast, iterate. So when you, these important projects like Canisius give me an example of like, that was a helpful way in which I stumbled. What did you learn? What was the key pain points of the evolution of getting it out the door and what key things did you learn from media success or kind of a speed bump or a failure along the way? >>Well, I think, uh, I think one of the first things we learned right after we chipped and we were still in a limited previous and we were trying it out with our customers who are getting feedback and learning with, uh, what they wanted to change in the product. Uh, one of the first things that we learned was that the, uh, the amount of time that it took to put data into Canisius and receive a return code was too high for a lot of our customers. It was probably around a hundred milliseconds for the, that you put the data in to the time that we've replicated that data across multiple availability zones and return success to the client. Uh, that was, that was a moment for us to really think about what it meant to enable people to be pushing tons of data into pieces. And we went back a hundred milliseconds. >>That's low, no bad. But right away we went back and doubled our efforts and we came back in around, you know, somewhere between 30 and 40 milliseconds depending on your network connectivity. Hey, the old days, that was, that was the spitting disc of the art. 10, 20 Meg art. It's got a VC. That's right. Those Lotus files out, you know, seeing those windows files. So you guys improve performance. So that's an example. You guys, what's the biggest surprise that you guys have seen from a customer use case that was kind of like, wow, this is really something that we didn't see happening on a, on a larger scale that caught me by surprise. >>Uh, I is in use case it'd be a corner use case. Like, well, I'd never figured that, you know, I would say like, uh, some of the one thing that actually surprised us was how common it is for people to have multiple applications reading out of the same stream. Uh, like again, the basic use case for so many customers is I'm going to take all this data and I'm just going to throw it into S3. Uh, and we kind of envisioned that there might be a couple of different applications reading data of that stream. We have a couple of customers that actually have uh, as many as three applications that are reading that stream of events that are coming out of Kinesis. 
Each one of them is reading from a different position in the stream. They're able to read from different locations, process that data differently. >>But uh, but the idea that cleanses is so different from traditional queuing systems and yet provides, uh, a real time emotionality and that multiple applications can read from it. That was, that was a bit of a versa. The number one use case right now, who's adopting, can you sit there, watch folks watching out there, did the Canisius brain trust right here with an Amazon? Um, what are the killer no brainer scenarios that you're seeing on the uptake side right now that people should be aware of that they haven't really kicked the tires on Kinesis where they should be? What should they be looking at? I think the number one use case is log and ingestion. So like I'm tailing logs that are coming off of web servers, my application servers, uh, data that's just being produced continuously who grab all that data. And very easily put it into something like us through the beauty of that model is I now have all the logo that I got it off of all of my hosts as quickly as possible and I can go do log nights later if there's a problem that is the slam dunk use case for using crisis. >>Uh, there are other scenarios that are beginning to emerge as well. I don't know audio if you want to talk, that's many interesting and lots of customers are doing so already is emit data from all sorts of devices. So this is, these devices are not just your smartphones and tablets that are practically food computing machines, but also seemingly low power, seemingly dumb devices. And the design remains the same. There are millions of these out there and having the ability to capture that in a day produce in real time is, you know, I think just, uh, just to highlight that, one of things I'm hearing on the cube interviews, all the customers we talk to is the number one thing is I just got to scroll the date. I know what I want to do with it yet. Now that's a practice that's a hangover from the BI data warehouse in business of just store from a compliance reasons now, which is basically like, that's like laser as far as I'm concerned. >>Traditional business intelligence systems are like their version of Galatians chipped out somewhere and give me those reports. Five weeks later they come back. But that's different. Now you see people store that data and they realize that I need to touch it faster. I don't know yet when, that's why I'm teasing out this whole development 2.0 model because I'm just seeing more and more people want the data hanging around but not fully parked out in Malaysia or some sort of, you know, compliance storage. So there's, you know, I think, I think I kind of understand where you're going. There's a, I'm going to use a model for like how we used to do BI analytics and our own internal data warehouse. I also run the data warehouse for AWS. Um, and the classic BI model there is somebody asks a question, we go off and we just do some analysis and if it's a question that we're going to ask repeatedly, we don't, you know, a special fact table or a dimensional view or something to be able to grind through that particular view and do it very quickly. >>A Kunis is offers a different kind of data processing model, which is I'm collecting all of the data and make it easy to capture everything, but now I can start doing things like, Oh, there's, there's certain pieces of data that I want to respond to you quickly. 
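The pattern described above, several independent applications reading the same stream from different positions while one of them archives everything to S3, can be sketched roughly as follows with boto3. The stream name, bucket, single-shard handling and batch size are simplified, hypothetical choices, not details from the interview; a production consumer would iterate over all shards and checkpoint its position.

```python
# Sketch of several applications reading the same Kinesis stream from
# different positions while one of them archives a batch to S3. Names and
# the single-shard handling are simplified, hypothetical choices.
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

STREAM = "web-server-logs"                        # hypothetical stream
shard_id = kinesis.describe_stream(StreamName=STREAM)[
    "StreamDescription"]["Shards"][0]["ShardId"]  # single-shard example

def read_once(iterator_type: str):
    it = kinesis.get_shard_iterator(
        StreamName=STREAM, ShardId=shard_id, ShardIteratorType=iterator_type
    )["ShardIterator"]
    return kinesis.get_records(ShardIterator=it, Limit=100)["Records"]

# Application 1: start from the oldest retained records and archive to S3.
archive_batch = read_once("TRIM_HORIZON")
if archive_batch:
    body = b"\n".join(r["Data"] for r in archive_batch)
    s3.put_object(Bucket="example-log-archive", Key="batch-0001.log", Body=body)

# Application 2: independently read only new records as they arrive.
for record in read_once("LATEST"):
    print("near-real-time record:", record["Data"])
```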
Just like we would create dimensional views that would give us access to particular sets of data and very quick pace. We can now also respond to when those events are generated very quickly. Well, you guys are the young guns in the industry now. I'm a little bit older and the gray hair showing, we actually use the word data processing back in the day. The data processing that the DP department or the MIS department, if you remember those those days, MIS was the management information. Are we going back to those terms? I mean we're looking at look what's happening. >>Is it the software mainframe in the cloud? I mean these are some of the words you're using. Just data processing data pipeline. Well, I my S that's my work, but I mean we're back to those old school stuff but different, well and I think those kinds of very generic terms make a lot of sense for what we're doing is we, especially as we move into these brand new spaces like wow, what do I do with real time data? Like real time data processing is kind of the third type of big data processing or data warehousing was the first time I know what my data looks like. I've created indices like a pre computation of the data, uh, uh, Hadoop clusters and the MapReduce model was kind of the second wave of big data processing and realtime processing I think will be the third way. I think our process, well, I'm getting the hook here, but I got to just say, you guys are doing an amazing job. >>We're big fans of Amazon. I always say that, uh, you know, it was very rare in the history the world. We look at innovations like the printing press, the Wright brothers discover, you know, flying and things like we, Amazon with cloud. You guys have done something that's pretty amazing. But what I find fascinating is it's very rare to see a company that's commoditizing and disrupting and innovating at the same time. And it's really a unique value proposition and the competition is responding. IBM, Google. So you guys have a lot of targets painted on your back by a lot of big players. So, uh, one congratulations on your success, which means that you, you know, you're not going to go in the open field and fight the, the British if they said use the American revolution analogy. You've got to continue to compete. So what's your view of that? >>I mean, and I'm sure you don't talk about competition. You'd probably told him not to talk about it, but I mean, you got to know that all the guns are on you right now. The big guys are putting up the sea wall for your wave of innovation. How do you guys deal with that? It's just cause it's not like we, we ignore our competitors but we obsess about our customers, right? Like it's just constantly looking for what are people trying to do and how can we help them and can seem like a very simple strategy. But the strategy is built with people want and we get a lot of great feedback on how we can make our products better. And it certainly will force you to up your game when you have the competition citing on you. You've got more focused on the customer, which is cool. >>But like you guys kind of aware of like games on, I mean Amazon is at any given a little pep talk, Hey, game is on guys. Let's rock and roll. Right? You guys are aware, right? I think we're totally wearing, I think we're actually sometimes a little surprised at how long it's taken to our competitors to kind of get into this industry with us. So, uh, again, as Andy talked about earlier today, we've had eight years in the cloud computing market. 
It's been a great eight years, and we have a lot of work to do; a lot of this stuff is almost ready for middle school. >>Final question for you guys, and I'll give you the final word here: why is this show so important right at this point in time, in this market? For the thousands of people who are here learning about Amazon, what should they know about why this is such an important event? >>I think our summits are a great opportunity for us to share with customers how to use our AWS services. They can learn firsthand, not only from our hands-on labs but also from our partners, who show how they use AWS resources. It's a great opportunity to meet a lot of people who are taking advantage of the cloud computing wave and to see how to use the cloud most effectively. >>It's a great time to be in the cloud right now, with all these amazing services coming out. There's no better time for people to come together, and that's probably as good a reason as any. You guys are doing a great job disrupting and shaping the future: the modern enterprise, modern business, modern applications. Excited to watch it. If you keep focusing on your customers and keep up the pace, the question is, can you finish the race? That's what I always tell Dave; I know he's watching. Shout out to Dave Vellante, who's on the mobile app right now while he's traveling. Guys, thanks for coming on. Kinesis is great stuff: closing the loop in real time, Amazon really building it out. Thanks for coming on. We'll be right back with our next guest after this short break. Thank you.

Published Date : Mar 26 2014
