Image Title

Search Results for National Laboratory:

Armando Acosta, Dell Technologies and Matt Leininger, Lawrence Livermore National Laboratory


 

(upbeat music) >> We are back, approaching the finish line here at Supercomputing 22, our last interview of the day, our last interview of the show. And I have to say Dave Nicholson, my co-host, My name is Paul Gillin. I've been attending trade shows for 40 years Dave, I've never been to one like this. The type of people who are here, the type of problems they're solving, what they talk about, the trade shows are typically, they're so speeds and feeds. They're so financial, they're so ROI, they all sound the same after a while. This is truly a different event. Do you get that sense? >> A hundred percent. Now, I've been attending trade shows for 10 years since I was 19, in other words, so I don't have necessarily your depth. No, but seriously, Paul, totally, completely, completely different than any other conference. First of all, there's the absolute allure of looking at the latest and greatest, coolest stuff. I mean, when you have NASA lecturing on things when you have Lawrence Livermore Labs that we're going to be talking to here in a second it's a completely different story. You have all of the academics you have students who are in competition and also interviewing with organizations. It's phenomenal. I've had chills a lot this week. >> And I guess our last two guests sort of represent that cross section. Armando Acosta, director of HPC Solutions, High Performance Solutions at Dell. And Matt Leininger, who is the HPC Strategist at Lawrence Livermore National Laboratory. Now, there is perhaps, I don't know you can correct me on this, but perhaps no institution in the world that uses more computing cycles than Lawrence Livermore National Laboratory and is always on the leading edge of what's going on in Supercomputing. And so we want to talk to both of you about that. Thank you. Thank you for joining us today. >> Sure, glad to be here. >> For having us. >> Let's start with you, Armando. Well, let's talk about the juxtaposition of the two of you. I would not have thought of LLNL as being a Dell reference account in the past. Tell us about the background of your relationship and what you're providing to the laboratory. >> Yeah, so we're really excited to be working with Lawrence Livermore, working with Matt. But actually this process started about two years ago. So we started looking at essentially what was coming down the pipeline. You know, what were the customer requirements. What did we need in order to make Matt successful. And so the beauty of this project is that we've been talking about this for two years, and now it's finally coming to fruition. And now we're actually delivering systems and delivering racks of systems. But what I really appreciate is Matt coming to us, us working together for two years and really trying to understand what are the requirements, what's the schedule, what do we need to hit in order to make them successful >> At Lawrence Livermore, what drives your computing requirements I guess? You're working on some very, very big problems but a lot of very complex problems. How do you decide what you need to procure to address them? >> Well, that's a difficult challenge. I mean, our mission is a national security mission dealing with making sure that we do our part to provide the high performance computing capabilities to the US Department of Energy's National Nuclear Security Administration. We do that through the Advanced Simulation computing program. Its goal is to provide that computing power to make sure that the US nuclear rep of the stockpile is safe, secure, and effective. So how we go about doing that? There's a lot of work involved. We have multiple platform lines that we accomplish that goal with. One of them is the advanced technology systems. Those are the ones you've heard about a lot, they're pushing towards exit scale, the GPU technologies incorporated into those. We also have a second line, a platform line, called the Commodity Technology Systems. That's where right now we're partnering with Dell on the latest generation of those. Those systems are a little more conservative, they're right now CPU only driven but they're also intended to be the everyday work horses. So those are the first systems our users get on. It's very easy for them to get their applications up and running. They're the first things they use usually on a day to day basis. They run a lot of small to medium size jobs that you need to do to figure out how to most effectively use what workloads you need to move to the even larger systems to accomplish our mission goals. >> The workhorses. >> Yeah. >> What have you seen here these last few days of the show, what excites you? What are the most interesting things you've seen? >> There's all kinds of things that are interesting. Probably most interesting ones I can't talk about in public, unfortunately, 'cause of NDA agreements, of course. But it's always exciting to be here at Supercomputing. It's always exciting to see the products that we've been working with industry and co-designing with them on for, you know, several years before the public actually sees them. That's always an exciting part of the conference as well specifically with CTS-2, it's exciting. As was mentioned before, I've been working with Dell for nearly two years on this, but the systems first started being delivered this past August. And so we're just taking the initial deliveries of those. We've deployed, you know, roughly about 1600 nodes now but that'll ramp up to over 6,000 nodes over the next three or four months. >> So how does this work intersect with Sandia and Los Alamos? Explain to us the relationship there. >> Right, so those three laboratories are the laboratories under the National Nuclear Security Administration. We partner together on CTS. So the architectures, as you were asking, how do we define these things, it's the labs coming together. Those three laboratories we define what we need for that architecture. We have a joint procurement that is run out of Livermore but then the systems are deployed at all three laboratories. And then they serve the programs that I mentioned for each laboratory as well. >> I've worked in this space for a very long time you know I've worked with agencies where the closest I got to anything they were actually doing was the sort of guest suite outside the secure area. And sometimes there are challenges when you're communicating, it's like you have a partner like Dell who has all of these things to offer, all of these ideas. You have requirements, but maybe you can't share 100% of what you need to do. How do you navigate that? Who makes the decision about what can be revealed in these conversations? You talk about NDA in terms of what's been shared with you, you may be limited in terms of what you can share with vendors. Does that cause inefficiency? >> To some degree. I mean, we do a good job within the NSA of understanding what our applications need and then mapping that to technical requirements that we can talk about with vendors. We also have kind of in between that we've done this for many years. A recent example is of course with the exit scale computing program and some things it's doing creating proxy apps or mini apps that are smaller versions of some of the things that we are important to us. Some application areas are important to us, hydrodynamics, material science, things like that. And so we can collaborate with vendors on those proxy apps to co-design systems and tweak the architectures. In fact, we've done a little bit that with CTS-2, not as much in CTS as maybe in the ATS platforms but that kind of general idea of how we collaborate through these proxy applications is something we've used across platforms. >> Now is Dell one of your co-design partners? >> In CTS-2 absolutely, yep. >> And how, what aspects of CTS-2 are you working on with Dell? >> Well, the architecture itself was the first, you know thing we worked with them on, we had a procurement come out, you know they bid an architecture on that. We had worked with them, you know but previously on our requirements, understanding what our requirements are. But that architecture today is based on the fourth generation Intel Xeon that you've heard a lot about at the conference. We are one of the first customers to get those systems in. All the systems are interconnected together with the Cornell Network's Omni-Path Network that we've used before and are very excited about as well. And we build up from there. The systems get integrated in by the operations teams at the laboratory. They get integrated into our production computing environment. Dell is really responsible, you know for designing these systems and delivering to the laboratories. The laboratories then work with Dell. We have a software stack that we provide on top of that called TOSS, for Tri-Lab Operating System. It's based on Redhead Enterprise Linux. But the goal there is that it allows us, a common user environment, a common simulation environment across not only CTS-2, but maybe older systems we have and even the larger systems that we'll be deploying as well. So from a user perspective they see a common user interface, a common environment across all the different platforms that they use at Livermore and the other laboratories. >> And Armando, what does Dell get out of the co-design arrangement with the lab? >> Well, we get to make sure that they're successful. But the other big thing that we want to do, is typically when you think about Dell and HPC, a lot of people don't make that connection together. And so what we're trying to do is make sure that, you know they know that, hey, whether you're a work group customer at the smallest end or a super computer customer at the highest end, Dell wants to make sure that we have the right setup portfolio to match any needs across this. But what we were really excited about this, this is kind of our, you know big CTS-2 first thing we've done together. And so, you know, hopefully this has been successful. We've made Matt happy and we look forward to the future what we can do with bigger and bigger things. >> So will the labs be okay with Dell coming up with a marketing campaign that said something like, "We can't confirm that alien technology is being reverse engineered." >> Yeah, that would fly. >> I mean that would be right, right? And I have to ask you the question directly and the way you can answer it is by smiling like you're thinking, what a stupid question. Are you reverse engineering alien technology at the labs? >> Yeah, you'd have to suck the PR office. >> Okay, okay. (all laughing) >> Good answer. >> No, but it is fascinating because to a degree it's like you could say, yeah, we're working together but if you really want to dig into it, it's like, "Well I kind of can't tell you exactly how some of this stuff is." Do you consider anything that you do from a technology perspective, not what you're doing with it, but the actual stack, do you try to design proprietary things into the stack or do you say, "No, no, no, we're going to go with standards and then what we do with it is proprietary and secret."? >> Yeah, it's more the latter. >> Is the latter? Yeah, yeah, yeah. So you're not going to try to reverse engineer the industry? >> No, no. We want the solutions that we develop to enhance the industry to be able to apply to a broader market so that we can, you know, gain from the volume of that market, the lower cost that they would enable, right? If we go off and develop more and more customized solutions that can be extraordinarily expensive. And so we we're really looking to leverage the wider market, but do what we can to influence that, to develop key technologies that we and others need that can enable us in the high forms computing space. >> We were talking with Satish Iyer from Dell earlier about validated designs, Dell's reference designs for for pharma and for manufacturing, in HPC are you seeing that HPC, Armando, and is coming together traditionally and more of an academic research discipline beginning to come together with commercial applications? And are these two markets beginning to blend? >> Yeah, I mean so here's what's happening, is you have this convergence of HPC, AI and data analytics. And so when you have that combination of those three workloads they're applicable across many vertical markets, right? Whether it's financial services, whether it's life science, government and research. But what's interesting, and Matt won't brag about, but a lot of stuff that happens in the DoE labs trickles down to the enterprise space, trickles down to the commercial space because these guys know how to do it at scale, they know how to do it efficiently and they know how to hit the mark. And so a lot of customers say, "Hey we want what CTS-2 does," right? And so it's very interesting. The way I love it is their process the way they do the RFP process. Matt talked about the benchmarks and helping us understand, hey here's kind of the mark you have to hit. And then at the same time, you know if we make them successful then obviously it's better for all of us, right? You know, I want to secure nuclear stock pile so I hope everybody else does as well. >> The software stack you mentioned, I think Tia? >> TOSS. >> TOSS. >> Yeah. >> How did that come about? Why did you feel the need to develop your own software stack? >> It originated back, you know, even 20 years ago when we first started building Linux clusters when that was a crazy idea. Livermore and other laboratories were really the first to start doing that and then push them to larger and larger scales. And it was key to have Linux running on that at the time. And so we had the. >> So 20 years ago you knew you wanted to run on Linux? >> Was 20 years ago, yeah, yeah. And we started doing that but we needed a way to have a version of Linux that we could partner with someone on that would do, you know, the support, you know, just like you get from an EoS vendor, right? Security support and other things. But then layer on top of that, all the HPC stuff you need either to run the system, to set up the system, to support our user base. And that evolved into to TOSS which is the Tri-Lab Operating System. Now it's based on the latest version of Redhead Enterprise Linux, as I mentioned before, with all the other HPC magic, so to speak and all that HPC magic is open source things. It's not stuff, it may be things that we develop but it's nothing closed source. So all that's there we run it across all these different environments as I mentioned before. And it really originated back in the early days of, you know, Beowulf clusters, Linux clusters, as just needing something that we can use to run on multiple systems and start creating that common environment at Livermore and then eventually the other laboratories. >> How is a company like Dell, able to benefit from the open source work that's coming out of the labs? >> Well, when you look at the open source, I mean open source is good for everybody, right? Because if you make a open source tool available then people start essentially using that tool. And so if we can make that open source tool more robust and get more people using it, it gets more enterprise ready. And so with that, you know, we're all about open source we're all about standards and really about raising all boats 'cause that's what open source is all about. >> And with that, we are out of time. This is our 28th interview of SC22 and you're taking us out on a high note. Armando Acosta, director of HPC Solutions at Dell. Matt Leininger, HPC Strategist, Lawrence Livermore National Laboratories. Great discussion. Hopefully it was a good show for you. Fascinating show for us and thanks for being with us today. >> Thank you very much. >> Thank you for having us >> Dave it's been a pleasure. >> Absolutely. >> Hope we'll be back next year. >> Can't believe, went by fast. Absolutely at SC23. >> We hope you'll be back next year. This is Paul Gillin. That's a wrap, with Dave Nicholson for theCUBE. See here in next time. (soft upbear music)

Published Date : Nov 17 2022

SUMMARY :

And I have to say Dave You have all of the academics and is always on the leading edge about the juxtaposition of the two of you. And so the beauty of this project How do you decide what you need that you need to do but the systems first Explain to us the relationship there. So the architectures, as you were asking, 100% of what you need to do. And so we can collaborate with and the other laboratories. And so, you know, hopefully that said something like, And I have to ask you and then what we do with it reverse engineer the industry? so that we can, you know, gain And so when you have that combination running on that at the time. all the HPC stuff you need And so with that, you know, and thanks for being with us today. Absolutely at SC23. with Dave Nicholson for theCUBE.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Matt LeiningerPERSON

0.99+

Dave NicholsonPERSON

0.99+

Dave NicholsonPERSON

0.99+

Paul GillinPERSON

0.99+

National Nuclear Security AdministrationORGANIZATION

0.99+

Armando AcostaPERSON

0.99+

Cornell NetworkORGANIZATION

0.99+

DellORGANIZATION

0.99+

MattPERSON

0.99+

CTS-2TITLE

0.99+

US Department of EnergyORGANIZATION

0.99+

DavePERSON

0.99+

twoQUANTITY

0.99+

10 yearsQUANTITY

0.99+

40 yearsQUANTITY

0.99+

two yearsQUANTITY

0.99+

next yearDATE

0.99+

Lawrence LivermoreORGANIZATION

0.99+

100%QUANTITY

0.99+

CTSTITLE

0.99+

Dell TechnologiesORGANIZATION

0.99+

PaulPERSON

0.99+

LinuxTITLE

0.99+

NASAORGANIZATION

0.99+

HPC SolutionsORGANIZATION

0.99+

bothQUANTITY

0.99+

Lawrence Livermore LabsORGANIZATION

0.99+

todayDATE

0.99+

Los AlamosORGANIZATION

0.99+

OneQUANTITY

0.99+

Lawrence Livermore National LaboratoryORGANIZATION

0.99+

ArmandoORGANIZATION

0.99+

each laboratoryQUANTITY

0.99+

second lineQUANTITY

0.99+

over 6,000 nodesQUANTITY

0.99+

20 years agoDATE

0.98+

three laboratoriesQUANTITY

0.98+

28th interviewQUANTITY

0.98+

Lawrence Livermore National LaboratoriesORGANIZATION

0.98+

threeQUANTITY

0.98+

firstQUANTITY

0.98+

Tri-LabORGANIZATION

0.98+

SandiaORGANIZATION

0.97+

oneQUANTITY

0.97+

FirstQUANTITY

0.97+

two marketsQUANTITY

0.97+

SupercomputingORGANIZATION

0.96+

first systemsQUANTITY

0.96+

fourth generationQUANTITY

0.96+

this weekDATE

0.96+

LivermoreORGANIZATION

0.96+

Omni-Path NetworkORGANIZATION

0.95+

about 1600 nodesQUANTITY

0.95+

Lawrence Livermore National LaboratoryORGANIZATION

0.94+

LLNLORGANIZATION

0.93+

NDAORGANIZATION

0.93+

Robin Goldstone, Lawrence Livermore National Laboratory | Red Hat Summit 2019


 

>> live from Boston, Massachusetts. It's the queue covering your red. Have some twenty nineteen brought to you by bread. Welcome back a few, but our way Our red have some twenty nineteen >> center along with Sue Mittleman. I'm John Walls were now joined by Robin Goldstone, who's HBC solution architect at the Lawrence Livermore National Laboratory. Hello, Robin >> Harrier. Good to see you. I >> saw you on the Keystone States this morning. Fascinating presentation, I thought. First off for the viewers at home who might not be too familiar with the laboratory If you could please just give it that thirty thousand foot level of just what kind of national security work you're involved with. >> Sure. So yes, indeed. We are a national security lab. And you know, first and foremost, our mission is assuring the safety, security reliability of our nuclear weapons stockpile. And there's a lot to that mission. But we also have broader national security mission. We work on counterterrorism and nonproliferation, a lot of of cyber security kinds of things. And but even just general science. We're doing things with precision medicine and and just all all sorts >> of interesting technology. Fascinating >> Es eso, Robin, You know so much and i t you know, the buzzword. The vast months years has been scaled on. We talk about what public loud people are doing. It's labs like yours have been challenged. Challenge with scale in many other ways, especially performance is something that you know, usually at the forefront of where things are you talked about in the keynote this morning. Sierra is the latest generation supercomputer number two, you know, supercomputer. So you know, I don't know how many people understand the petaflop one hundred twenty five flops and the like, but tell us a little bit about, you know, kind of the why and the what of that, >> right? So So Sierra's a supercomputer. And what's unique about these systems is that we're solving. There's lots of systems that network together. Maybe you're bigger number of servers than us, but we're doing scientific simulation, and that kind of computing requires a level of parallelism and very tightly coupled. So all the servers are running a piece of the problem. They all have to sort of operate together. If any one of them is running slow, it makes the whole thing goes slow. So it's really this tightly couple nature of super computers that make things really challenging. You know, we talked about performance. If if one servers just running slow for some reason, you know everything else is going to be affected by that. So we really do care about performance. And we really do care about just every little piece of the hardware you know, performing as it should. So So I >> think in national security, nuclear stockpiles. Um I mean, there is nothing more important, obviously, than the safety and security of the American people were at the center of that. Right? You're open source, right? You know, how does that work? How does that? Because as much trust and faith and confidence we have in the open source community. This is an extremely important responsibility that's being consigned more less to this open source community. >> Sure. You know, at first, people do have that feeling that we should be running some secret sauce. I mean, our applications themselves or secret. But when it comes to the system software and all the software around the applications, I mean, open source makes perfect sense. I mean, we started out running really closed source solutions in some cases, the perp. The hardware itself was really proprietary. And, of course, the vendors who made the hardware proprietary. They wanted their software to be proprietary. But I think most people can resonate when you buy a piece of software and the vendor tells you it's it's great. It's going to do everything you needed to do and trust us, right? Okay, But at our scale, it often doesn't work the way it's It's supposed to work. They've never tested it. Our skill. And when it breaks, now they have to fix. They're the only ones that can fix it. And in some cases we found it wasn't in the vendors decided. You know what? No one else has one quite like yours. And you know, it's a lot of work to make it work for you. So we're just not going to fix and you can't wait, right? And so open source is just the opposite of that, right? I mean, we have all that visibility in that software. If it doesn't work for our needs, we can make it work for our needs, and then we can give it back to the community. Because even though people are doing things that the scale that we are today, Ah, lot of the things that we're doing really do trickle down and can be used by a lot of other people. >> But it's something really important because, as you said, you used to be and I was like, OK, the Cray supercomputer is what we know, You know, let's use proprietary interfaces and I need the highest speed and therefore it's not the general purpose stuff. You moved X eighty six. Lennox is something that's been in the shower computers. Why? But it's a finely tuned version there. Let's get you know, the duct tape and baling wire. And don't breathe on it once you get it running. You're running well today and you talk a little bit about the journey with Roland. You know, now on the Super Computers, >> right? So again, there's always been this sort of proprietary, really high end supercomputing. But about in the late nineteen nineties, early two thousand, that's when we started building these these commodity clusters. You know, at the time, I think Beta Wolf was the terminology for that. But, you know, basically looking at how we could take these basic off the shelf servers and make them work for our applications and trying to take advantage of a CZ much commodity technologies we can, because we didn't want to re invent anything. We want to use as much as possible. And so we've really written that curve. And initially it was just red hat. Lennox. There was no relative time, but then when we started getting into the newer architectures going from Mexico six. Taxi, six, sixty for and Itanium, you know the support just wasn't there in basic red hat and again, even though it's open source and we could do everything ourselves, we don't want to do everything ourselves. I mean, having an organization having this Enterprise edition of Red Hat having a company stand behind it. The software is still open. Source. We can look at the source code. We can modify it if we want, But you know what at the end of the day, were happy to hand over some of our challenge is to Red Hat and and let them do what they do best. They have great, you know, reach into the into the colonel community. They can get things done that we can't necessarily get done. So it's a great relationship. >> Yes. So that that last mile getting it on Sierra there. Is that the first time on one kind of the big showcase your computer? >> Sure. And part of the reason for that is because those big computers themselves are basically now mostly commodity. I mean, again, you talked about a Cray, Some really exotic architecture. I mean, Sierra is a collection of Lennox servers. Now, in this case, they're running the power architecture instead of X eighty six. So Red hat did a lot of work with IBM to make sure that that power was was fully supported in the rail stack. But so, you know, again that the service themselves somewhat commodity were running and video GP use those air widely used everywhere. Obviously big deal for machine learning and stuff that the main the biggest proprietary component we're still dealing was is thie interconnect. So, you know, I mentioned these clusters have to be really tightly coupled. They that performance has to be really superior and most importantly, the latent see right, they have to be super low late and see an ethernet just doesn't cut it >> So you run Infinite Band today. I'm assuming we're >> running infinite band on melon oxen finna ban on Sierra on some of our commodity clusters. We run melon ox on other ones. We run intel. Omni Path was just another flavor of of infinite band. You know, if we could use it, if we could use Ethernet, we would, because again, we would get all the benefit in the leverage of what everybody else is doing, but just just hasn't hasn't quite been able to meet our needs in that >> area now, uh, find recalled the history lesson. We got a bit from me this morning. The laboratory has been around since the early fifties, born of the Cold War. And so obviously open source was, you know? Yeah, right, you know, went well. What about your evolution to open source? I mean, ahs. This has taken hold. Now, there had to be a tipping point at some point that converted and made the laboratory believers. But if you can, can you go back to that process? And was it of was it a big moment for you big time? Or was it just a kind of a steady migration? tour. >> Well, it's interesting if you go way back. We actually wrote the operating systems for those early Cray computers. We wrote those operating systems in house because there really was no operating system that will work for us. So we've been software developers for a long time. We've been system software developers, but at that time it was all proprietary in closed source. So we know how to do that stuff. The reason I think really what happened was when these commodity clusters came along when we showed that we could build a, you know, a cluster that could perform well for our applications on that commodity hardware. We started with Red Hat, but we had to add some things on top. We had to add the software that made a bunch of individual servers function as a cluster. So all the system management stuff the resource manager of the thing that lets a schedule jobs, batch jobs. We wrote that software, the parallel file system. Those things did not exist in the open source, and we helped to write those things, and those things took on lives of their own. So luster. It's a parallel file system that we helped develop slow, Erm, if anyone outside of HBC probably hasn't heard of it, but it's a resource manager that again is very widely popular. So the lab really saw that. You know, we got a lot of visibility by contributing this stuff to the community. And I think everybody has embracing. And we develop open source software at all different layers. This >> software, Robin, you know, I'm curious how you look at Public Cloud. So, you know, when I look at the public odd, they do a lot with government agencies. They got cloud. You know, I've talked to companies that said I could have built a super computer. Here's how long and do. But I could spend it up in minutes. And you know what I need? Is that a possibility for something of yours? I understand. Maybe not the super high performance, But where does it fit in? >> Sure, Yeah. I mean, certainly for a company that has no experience or no infrastructure. I mean, we have invested a huge amount in our data center, and we have a ton of power and cooling and floor space. We have already made that investment, you know, trying to outsource that to the cloud doesn't make sense. There are definitely things. Cloud is great. We are using Gove Cloud for things like prototyping, or someone wants a server, that some architecture, that we don't have the ability to just spin it up. You know, if we had to go and buy it, it would take six months because you know, we are the government. But be able to just spin that stuff up. It's really great for what we do. We use it for open source for building test. We use it to conferences when we want to run a tutorial and spin up a bunch of instances of, you know, Lennox and and run a tutorial. But the biggest thing is at the end of the day are our most important work. Clothes are on a classified environment, and we don't have the ability to run those workloads in the cloud. And so to do it on the open side and not be ableto leverage it on the close side, it really takes away some of the value of because we really want to make the two environments look a similar is possible leverage our staff and and everything like that. So that's where Cloud just doesn't quite fit >> in for us. You were talking about, you know, the speed of, Of of Sierra. And then also mentioning El Capitan, which is thie the next generation. You're next, You know, super unbelievably fast computer to an extent of ten X that off current speed is within the next four to five years. >> Right? That's the goal. I >> mean, what those Some numbers that is there because you put a pretty impressive array up there, >> right? So Series about one hundred twenty five PETA flops and are the big Holy Grail for high performance computing is excess scale and exit flop of performance. And so, you know, El Capitan is targeted to be, you know, one point two, maybe one point five exit flops or even Mohr again. That's peak performance. It doesn't necessarily translate into what our applications, um, I can get out of the platform. But the reason you keep sometimes I think, isn't it enough isn't one hundred twenty five five's enough, But it's never enough because any time we get another platform, people figure out how to do things with it that they've never done before. Either they're solving problems faster than they could. And so now they're able to explore a solution space much faster. Or they want to look at, you know, these air simulations of three dimensional space, and they want to be able to look at it in a more fine grain level. So again, every computer we get, we can either push a workload through ten times faster. Or we can look at a simulation. You know, that's ten times more resolved than the one that >> we could do before. So do this for made and for folks at home and take the work that you do and translate that toe. Why that exponential increase in speed will make you better. What you do in terms of decision making and processing of information, >> right? So, yeah, so the thing is, these these nuclear weapons systems are very complicated. There's multi physics. There's lots of different interactions going on, and to really understand them at the lowest level. One of the reasons that's so important now is we're maintaining a stockpile that is well beyond the life span that it was designed for. You know, these nuclear weapons, some of them were built in the fifties, the sixties and seventies. They weren't designed to last this long, right? And so now they're sort of out of their design regime, and we really have to understand their behaviour and their properties as they age. So it opens up a whole nother area, you know, that we have to be able to floor and and just some of that physics has never been explored before. So, you know, the problems get more challenging the farther we get away from the design basis of these weapons, but also were really starting to do new things like eh, I am machine learning things that weren't part of our workflow before. We're starting to incorporate machine learning in with simulation again to help explore a very large problem space and be ableto find interesting areas within a simulation to focus in on. And so that's a really exciting area. And that is also an area where, you know, GPS and >> stuff just exploded. You know, the performance levels that people are seeing on these machines? Well, we thank you for your work. It is critically important, azaz, we all realize and wonderfully fascinating at the same time. So thanks for the insights here on for your time. We appreciate that. >> All right, Thanks for >> thanking Robin Goldstone. Joining us back with more here on the Cube. You're watching our coverage live from Boston of Red Hat Summit twenty nineteen.

Published Date : May 9 2019

SUMMARY :

Have some twenty nineteen brought to you by bread. center along with Sue Mittleman. Good to see you. saw you on the Keystone States this morning. And you know, of interesting technology. five flops and the like, but tell us a little bit about, you know, kind of the why and the what And we really do care about just every little piece of the hardware you know, in the open source community. And you know, it's a lot of work to make it work for you. Let's get you know, We can modify it if we want, But you know what at the end of the day, were happy to hand over Is that the first time on one kind of the But so, you know, again that the service themselves So you run Infinite Band today. You know, if we could use it, if we could use Ethernet, And so obviously open source was, you know? came along when we showed that we could build a, you know, a cluster that So, you know, when I look at the public odd, they do a lot with government agencies. You know, if we had to go and buy it, it would take six months because you know, we are the government. You were talking about, you know, the speed of, Of of Sierra. That's the goal. And so, you know, El Capitan is targeted to be, you know, one point two, So do this for made and for folks at home and take the work that you do And that is also an area where, you know, GPS and Well, we thank you for your work. of Red Hat Summit twenty nineteen.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Sue MittlemanPERSON

0.99+

IBMORGANIZATION

0.99+

Robin GoldstonePERSON

0.99+

RobinPERSON

0.99+

John WallsPERSON

0.99+

ten timesQUANTITY

0.99+

Cold WarEVENT

0.99+

six monthsQUANTITY

0.99+

Boston, MassachusettsLOCATION

0.99+

HBCORGANIZATION

0.99+

OneQUANTITY

0.99+

LennoxORGANIZATION

0.99+

El CapitanTITLE

0.99+

thirty thousand footQUANTITY

0.98+

two environmentsQUANTITY

0.98+

one pointQUANTITY

0.98+

oneQUANTITY

0.98+

late nineteen ninetiesDATE

0.98+

MexicoLOCATION

0.98+

one hundredQUANTITY

0.98+

HarrierPERSON

0.98+

five yearsQUANTITY

0.98+

todayDATE

0.97+

fourQUANTITY

0.97+

first timeQUANTITY

0.97+

CrayORGANIZATION

0.97+

Red HatTITLE

0.97+

BostonLOCATION

0.96+

early fiftiesDATE

0.96+

red hatTITLE

0.96+

twenty nineteenQUANTITY

0.96+

SierraLOCATION

0.96+

firstQUANTITY

0.95+

this morningDATE

0.93+

tenQUANTITY

0.93+

sixQUANTITY

0.92+

one hundred twenty five flopsQUANTITY

0.9+

sixtiesDATE

0.89+

one serversQUANTITY

0.88+

ItaniumORGANIZATION

0.87+

intelORGANIZATION

0.86+

Of of SierraORGANIZATION

0.86+

FirstQUANTITY

0.83+

fiveQUANTITY

0.82+

SierraORGANIZATION

0.8+

Red HatORGANIZATION

0.8+

Red Hat Summit 2019EVENT

0.79+

RolandORGANIZATION

0.79+

Lawrence Livermore National LaboratoryORGANIZATION

0.79+

Red Hat Summit twentyEVENT

0.79+

twoQUANTITY

0.78+

Keystone StatesLOCATION

0.78+

seventiesDATE

0.78+

RedORGANIZATION

0.76+

twenty five fiveQUANTITY

0.73+

early two thousandDATE

0.71+

Lawrence LivermoreLOCATION

0.71+

SierraCOMMERCIAL_ITEM

0.69+

ErmPERSON

0.66+

MohrPERSON

0.65+

supercomputerQUANTITY

0.64+

one hundred twenty fiveQUANTITY

0.62+

PathOTHER

0.59+

BandOTHER

0.58+

National LaboratoryORGANIZATION

0.55+

bandOTHER

0.55+

Gove CloudTITLE

0.54+

nineteenQUANTITY

0.53+

fiftiesDATE

0.52+

numberQUANTITY

0.52+

Beta WolfOTHER

0.52+

dimensionalQUANTITY

0.49+

sixtyORGANIZATION

0.47+

sixCOMMERCIAL_ITEM

0.45+

AmericanPERSON

0.43+

SierraTITLE

0.42+

Keith White, HPE | HPE Discover 2022


 

>> Announcer: theCube presents HPE Discover 2022, brought to you by HPE. >> Hey, everyone. Welcome back to Las Vegas. This is Lisa Martin with Dave Vellante live at HPE Discover '22. Dave, it's great to be here. This is the first Discover in three years and we're here with about 7,000 of our closest friends. >> Yeah. You know, I tweeted out this, I think I've been to 14 Discovers between the U.S. and Europe, and I've never seen a Discover with so much energy. People are not only psyched to get back together, that's for sure, but I think HPE's got a little spring in its step and it's feeling more confident than maybe some of the past Discovers that I've been to. >> I think so, too. I think there's definitely a spring in the step and we're going to be unpacking some of that spring next with one of our alumni who joins us, Keith White's here, the executive vice president and general manager of GreenLake Cloud Services. Welcome back. >> Great. You all thanks for having me. It's fantastic that you're here and you're right, the energy is crazy at this show. It's been a lot of pent up demand, but I think what you heard from Antonio today is our strategy's changing dramatically and it's really embracing our customers and our partners. So it's great. >> Embracing the customers and the partners, the ecosystem expansion is so critical, especially the last couple of years with the acceleration of digital transformation. So much challenge in every industry, but lots of momentum on the GreenLake side, I was looking at the Q2 numbers, triple digit growth in orders, 65,000 customers over 70 services, eight new services announced just this morning. Talk to us about the momentum of GreenLake. >> The momentum's been fantastic. I mean, I'll tell you, the fact that customers are really now reaccelerating their digital transformation, you probably heard a lot, but there was a delay as we went through the pandemic. So now it's reaccelerating, but everyone's going to a hybrid, multi-cloud environment. Data is the new currency. And obviously, everyone's trying to push out to the Edge and GreenLake is that edge to cloud platform. So we're just seeing tons of momentum, not just from the customers, but partners, we've enabled the platform so partners can plug into it and offer their solutions to our customers as well. So it's exciting and it's been fun to see the momentum from an order standpoint, but one of the big numbers that you may not be aware of is we have over a 96% retention rate. So once a customer's on GreenLake, they stay on it because they're seeing the value, which has been fantastic. >> The value is absolutely critically important. We saw three great big name customers. The Home Depot was on stage this morning, Oak Ridge National Laboratory was as well, Evil Geniuses. So the momentum in the enterprise is clearly present. >> Yeah. It is. And we're hearing it from a lot of customers. And I think you guys talk a lot about, hey, there's the cloud, data and Edge, these big mega trends that are happening out there. And you look at a company like Barclays, they're actually reinventing their entire private cloud infrastructure, running over a hundred thousand workloads on HPE GreenLake. Or you look at a company like Zenseact, who's basically they do autonomous driving software. So they're doing massive parallel computing capabilities. They're pulling in hundreds of petabytes of data to then make driving safer and so you're seeing it on the data front. And then on the Edge, you look at anyone like a Patrick Terminal, for example. They run a whole terminal shipyard. They're getting data in from exporters, importers, regulators, the works and they have to real-time, analyze that data and say, where should this thing go? Especially with today's supply chain challenges, they have to be so efficient, that it's just fantastic. >> It was interesting to hear Fidelma, Keith, this morning on stage. It was the first time I'd really seen real clarity on the platform itself and that it's obviously her job is, okay, here's the platform, now, you guys got to go build on top of it. Both inside of HPE, but also externally, so your ecosystem partners. So, you mentioned the financial services companies like Barclays. We see those companies moving into the digital world by offering some of their services in building their own clouds. >> Keith: That's right. >> What's your vision for GreenLake in terms of being that platform, to assist them in doing that and the data component there? >> I think that was one of the most exciting things about not just showcasing the platform, but also the announcement of our private cloud enterprise, Cloud Service. Because in essence, what you're doing is you're creating that framework for what most companies are doing, which is they're becoming cloud service providers for their internal business units. And they're having to do showback type scenarios, chargeback type scenarios, deliver cloud services and solutions inside the organization so that open platform, you're spot on. For our ecosystem, it's fantastic, but for our customers, they get to leverage it as well for their own internal IT work that's happening. >> So you talk about hybrid cloud, you talk about private cloud, what's your vision? You know, we use this term Supercloud. This in a layer that goes across clouds. What's your thought about that? Because you have an advantage at the Edge with Aruba. Everybody talks about the Edge, but they talk about it more in the context of near Edge. >> That's right. >> We talked to Verizon and they're going far Edge, you guys are participating in that, as well as some of your partners in Red Hat and others. What's your vision for that? What I call Supercloud, is that part of the strategy? Is that more longer term or you think that's pipe dream by Dave? >> No, I think it's really thoughtful, Dave, 'cause it has to be part of the strategy. What I hear, so for example, Ford's a great example. They run Azure, AWS, and then they made a big deal with Google cloud for their internal cars and they run HPE GreenLake. So they're saying, hey, we got four clouds. How do we sort of disaggregate the usage of that? And Chris Lund, who is the VP of information technology at Liberty Mutual Insurance, he talked about it today, where he said, hey, I can deliver these services to my business unit. And they don't know, am I running on the public cloud? Am I running on our HPE GreenLake cloud? Like it doesn't matter to the end user, we've simplified that so much. So I think your Supercloud idea is super thoughtful, not to use the super term too much, that I'm super excited about because it's really clear of what our customers are trying to accomplish, which it's not about the cloud, it's about the solution and the business outcome that gets to work. >> Well, and I think it is different. I mean, it's not like the last 10 years where it was like, hey, I got my stuff to work on the different clouds and I'm replicating as much as I can, the cloud experience on-prem. I think you guys are there now and then to us, the next layer is that ecosystem enablement. So how do you see the ecosystem evolving and what role does Green Lake play there? >> Yeah. This has been really exciting. We had Tarkan Maner who runs Nutanix and Karl Strohmeyer from Equinix on stage with us as well. And what's happening with the ecosystem is, I used to say, one plus one has to equal three for our customers. So when you bring these together, it has to be that scenario, but we are joking that one plus one plus one equals five now because everything has a partner component to it. It's not about the platform, it's not about the specific cloud service, it's actually about the solution that gets delivered. And that's done with an ISV, it's done with a Colo, it's done even with the Hyperscalers. We have Azure Stack HCI as a fully integrated solution. It happens with managed service providers, delivering managed services out to their folks as well. So that platform being fully partner enabled and that ecosystem being able to take advantage of that, and so we have to jointly go to market to our customers for their business needs, their business outcomes. >> Some of the expansion of the ecosystem. we just had Red Hat on in the last hour talking about- >> We're so excited to partner with them. >> Right, what's going on there with OpenShift and Ansible and Rel, but talk about the customer influence in terms of the expansion of the ecosystem. We know we've got to meet customers where they are, they're driving it, but we know that HPE has a big presence in the enterprise and some pretty big customer names. How are they from a demand perspective? >> Well, this is where I think the uniqueness of GreenLake has really changed HPE's approach with our customers. Like in all fairness, we used to be a vendor that provided hardware components for, and we talked a lot about hardware costs and blah, blah, blah. Now, we're actually a partner with those customers. What's the business outcome you're requiring? What's the SLA that we offer you for what you're trying to accomplish? And to do that, we have to have it done with partners. And so even on the storage front, Qumulo or Cohesity. On the backup and recovery disaster recovery, yes, we have our own products, but we also partner with great companies like Veeam because it's customer choice, it's an open platform. And the Red Hat announcement is just fantastic. Because, hey, from a container platform standpoint, OpenShift provides 5,000 plus customers, 90% of the fortune 500 that they engage with, with that opportunity to take GreenLake with OpenShift and implement that container capabilities on-prem. So it's fantastic. >> We were talking after the keynote, Keith Townsend came on, myself and Lisa. And he was like, okay, what about startups? 'Cause that's kind of a hallmark of cloud. And we felt like, okay, startups are not the ideal customer profile necessarily for HPE. Although we saw Evil Geniuses up on stage, but I threw out and I'd love to get your thoughts on this that within companies, incumbents, you have entrepreneurs, they're trying to build their own clouds or Superclouds as I use the term, is that really the target for the developer audience? We've talked a lot about OpenShift with their other platforms, who says as a partner- >> We just announced another extension with Rancher and- >> Yeah. I saw that. And you have to have optionality for developers. Is that the way we should think about the target audience from a developer standpoint? >> I think it will be as we go forward. And so what Fidelma presented on stage was the new developer platform, because we have come to realize, we have to engage with the developers. They're the ones building the apps. They're the ones that are delivering the solutions for the most part. So yeah, I think at the enterprise space, we have a really strong capability. I think when you get into the sort of mid-market SMB standpoint, what we're doing is we're going directly to the managed service and cloud service providers and directly to our Disty and VARS to have them build solutions on top of GreenLake, powered by GreenLake, to then deliver to their customers because that's what the customer wants. I think on the developer side of the house, we have to speak their language, we have to provide their capabilities because they're going to start articulating apps that are going to use both the public cloud and our on-prem capabilities with GreenLake. And so that's got to work very well. And so you've heard us talk about API based and all of that sort of scenario. So it's an exciting time for us, again, moving HPE strategy into something very different than where we were before. >> Well, Keith, that speaks to ecosystem. So I don't know if you were at Microsoft, when the sweaty Steve Ballmer was working with the developers, developers. That's about ecosystem, ecosystem, ecosystem. I don't expect we're going to see Antonio replicating that. But that really is the sort of what you just described is the ecosystem developing on top of GreenLake. That's critical. >> Yeah. And this is one of the things I learned. So, being at Microsoft for as long as I was and leading the Azure business from a commercial standpoint, it was all about the partner and I mean, in all fairness, almost every solution that gets delivered has some sort of partner component to it. Might be an ISV app, might be a managed service, might be in a Colo, might be with our hybrid cloud, with our Hyperscalers, but everything has a partner component to it. And so one of the things I learned with Azure is, you have to sell through and with your ecosystem and go to that customer with a joint solution. And that's where it becomes so impactful and so powerful for what our customers are trying to accomplish. >> When we think about the data gravity and the value of data that put massive potential that it has, even Antonio talked about it this morning, being data rich but insights poor for a long time. >> Yeah. >> Every company in today's day and age has to be a data company to be competitive, there's no more option for that. How does GreenLake empower companies? GreenLake and its ecosystem empower companies to really live being data companies so that they can meet their customers where they are. >> I think it's a really great point because like we said, data's the new currency. Data's the new gold that's out there and people have to get their arms around their data estate. So then they can make these business decisions, these business insights and garner that. And Dave, you mentioned earlier, the Edge is bringing a ton of new data in, and my Zenseact example is a good one. But with GreenLake, you now have a platform that can do data and data management and really sort of establish and secure the data for you. There's no data latency, there's no data egress charges. And which is what we typically run into with the public cloud. But we also support a wide range of databases, open source, as well as the commercial ones, the sequels and those types of scenarios. But what really comes to life is when you have to do analytics on that and you're doing AI and machine learning. And this is one of the benefits I think that people don't realize with HPE is, the investments we've made with Cray, for example, we have and you saw on stage today, the largest supercomputer in the world. That depth that we have as a company, that then comes down into AI and analytics for what we can do with high performance compute, data simulations, data modeling, analytics, like that is something that we, as a company, have really deep, deep capabilities on. So it's exciting to see what we can bring to customers all for that spectrum of data. >> I was excited to see Frontier, they actually achieve, we hosted an event, co-produced event with HPE during the pandemic, Exascale day. >> Yeah. >> But we weren't quite at Exascale, we were like right on the cusp. So to see it actually break through was awesome. So HPC is clearly a differentiator for Hewlett Packard Enterprise. And you talk about the egress. What are some of the other differentiators? Why should people choose GreenLake? >> Well, I think the biggest thing is, that it's truly is a edge to cloud platform. And so you talk about Aruba and our capabilities with a network attached and network as a service capabilities, like that's fairly unique. You don't see that with the other companies. You mentioned earlier to me that compute capabilities that we've had as a company and the storage capabilities. But what's interesting now is that we're sort of taking all of that expertise and we're actually starting to deliver these cloud services that you saw on stage, private cloud, AI and machine learning, high performance computing, VDI, SAP. And now we're actually getting into these industry solutions. So we talked last year about electronic medical records, this year, we've talked about 5g. Now, we're talking about customer loyalty applications. So we're really trying to move from these sort of baseline capabilities and yes, containers and VMs and bare metal, all that stuff is important, but what's really important is the services that you run on top of that, 'cause that's the outcomes that our customers are looking at. >> Should we expect you to be accelerating? I mean, look at what you did with Azure. You look at what AWS does in terms of the feature acceleration. Should we expect HPE to replicate? Maybe not to that scale, but in a similar cadence, we're starting to see that. Should we expect that actually to go faster? >> I think you couched it really well because it's not as much about the quantity, but the quality and the uses. And so what we've been trying to do is say, hey, what is our swim lane? What is our sweet spot? Where do we have a superpower? And where are the areas that we have that superpower and how can we bring those solutions to our customers? 'Cause I think, sometimes, you get over your skis a bit, trying to do too much, or people get caught up in the big numbers, versus the, hey, what's the real meat behind it. What's the tangible outcome that we can deliver to customers? And we see just a massive TAM. I want to say my last analysis was around $42 billion in the next three years, TAM and the Azure service on-prem space. And so we think that there's nothing but upside with the core set of workloads, the core set of solutions and the cloud services that we bring. So yeah, we'll continue to innovate, absolutely, amen, but we're not in a, hey we got to get to 250 this and 300 that, we want to keep it as focused as we can. >> Well, the vast majority of the revenue in the public cloud is still compute. I mean, not withstanding, Microsoft obviously does a lot in SaaS, but I'm talking about the infrastructure and service. Still, well, I would say over 50%. And so there's a lot of the services that don't make any revenue and there's that long tail, if I hear your strategy, you're not necessarily going after that. You're focusing on the quality of those high value services and let the ecosystem sort of bring in the rest. >> This is where I think the, I mean, I love that you guys are asking me about the ecosystem because this is where their sweet spot is. They're the experts on hyper-converged or databases, a service or VDI, or even with SAP, like they're the experts on that piece of it. So we're enabling that together to our customers. And so I don't want to give you the impression that we're not going to innovate. Amen. We absolutely are, but we want to keep it within that, that again, our swim lane, where we can really add true value based on our expertise and our capabilities so that we can confidently go to customers and say, hey, this is a solution that's going to deliver this business value or this capability for you. >> The partners might be more comfortable with that than, we only have one eye sleep with one eye open in the public cloud, like, okay, what are they going to, which value of mine are they grab next? >> You're spot on. And again, this is where I think, the power of what an Edge to cloud platform like HPE GreenLake can do for our customers, because it is that sort of, I mentioned it, one plus one equals three kind of scenario for our customers so. >> So we can leave your customers, last question, Keith. I know we're only on day one of the main summit, the partner growth summit was yesterday. What's the feedback been from the customers and the ecosystem in terms of validating the direction that HPE is going? >> Well, I think the fantastic thing has been to hear from our customers. So I mentioned in my keynote recently, we had Liberty Mutual and we had Texas Children's Hospital, and they're implementing HPE GreenLake in a variety of different ways, from a private cloud standpoint to a data center consolidation. They're seeing sustainability goals happen on top of that. They're seeing us take on management for them so they can take their limited resources and go focus them on innovation and value added scenarios. So the flexibility and cost that we're providing, and it's just fantastic to hear this come to life in a real customer scenario because what Texas Children is trying to do is improve patient care for women and children like who can argue with that. >> Nobody. >> So, yeah. It's great. >> Awesome. Keith, thank you so much for joining Dave and me on the program, talking about all of the momentum with HPE Greenlake. >> Always. >> You can't walk in here without feeling the momentum. We appreciate your insights and your time. >> Always. Thank you you for the time. Yeah. Great to see you as well. >> Likewise. >> Thanks. >> For Keith White and Dave Vellante, I'm Lisa Martin. You're watching theCube live, day one coverage from the show floor at HPE Discover '22. We'll be right back with our next guest. (gentle music)

Published Date : Jun 28 2022

SUMMARY :

brought to you by HPE. This is the first Discover in three years I think I've been to 14 Discovers a spring in the step and the energy is crazy at this show. and the partners, and GreenLake is that So the momentum in the And I think you guys talk a lot about, on the platform itself and and solutions inside the organization at the Edge with Aruba. that part of the strategy? and the business outcome I mean, it's not like the last and so we have to jointly go Some of the expansion of the ecosystem. to partner with them. in terms of the expansion What's the SLA that we offer you that really the target Is that the way we should and all of that sort of scenario. But that really is the sort and leading the Azure business gravity and the value of data so that they can meet their and secure the data for you. with HPE during the What are some of the and the storage capabilities. in terms of the feature acceleration. and the cloud services that we bring. and let the ecosystem I love that you guys are the power of what an and the ecosystem in terms So the flexibility and It's great. about all of the momentum We appreciate your insights and your time. Great to see you as well. from the show floor at HPE Discover '22.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
KeithPERSON

0.99+

DavePERSON

0.99+

Lisa MartinPERSON

0.99+

Steve BallmerPERSON

0.99+

Chris LundPERSON

0.99+

VerizonORGANIZATION

0.99+

BarclaysORGANIZATION

0.99+

Keith WhitePERSON

0.99+

Keith TownsendPERSON

0.99+

FordORGANIZATION

0.99+

GreenLakeORGANIZATION

0.99+

Dave VellantePERSON

0.99+

MicrosoftORGANIZATION

0.99+

Dave VellantePERSON

0.99+

AWSORGANIZATION

0.99+

Karl StrohmeyerPERSON

0.99+

ZenseactORGANIZATION

0.99+

Liberty Mutual InsuranceORGANIZATION

0.99+

Las VegasLOCATION

0.99+

last yearDATE

0.99+

90%QUANTITY

0.99+

GreenLake Cloud ServicesORGANIZATION

0.99+

HPEORGANIZATION

0.99+

Tarkan ManerPERSON

0.99+

65,000 customersQUANTITY

0.99+

fiveQUANTITY

0.99+

threeQUANTITY

0.99+

LisaPERSON

0.99+

this yearDATE

0.99+

Evil GeniusesTITLE

0.99+

VeeamORGANIZATION

0.99+

Texas Children's HospitalORGANIZATION

0.99+

NutanixORGANIZATION

0.99+

firstQUANTITY

0.99+

Liberty MutualORGANIZATION

0.99+

around $42 billionQUANTITY

0.99+

EuropeLOCATION

0.99+

ArubaORGANIZATION

0.99+

eight new servicesQUANTITY

0.99+

todayDATE

0.99+

Texas ChildrenORGANIZATION

0.99+

yesterdayDATE

0.99+

Home DepotORGANIZATION

0.98+

oneQUANTITY

0.98+

Hewlett Packard EnterpriseORGANIZATION

0.98+

EquinixORGANIZATION

0.98+

FidelmaPERSON

0.98+

BothQUANTITY

0.98+

SupercloudORGANIZATION

0.98+

TAMORGANIZATION

0.98+

U.S.LOCATION

0.97+

bothQUANTITY

0.97+

over 50%QUANTITY

0.97+

5,000 plus customersQUANTITY

0.97+

AntonioPERSON

0.97+

hundreds of petabytesQUANTITY

0.97+

14 DiscoversQUANTITY

0.97+

EdgeORGANIZATION

0.97+

DistyORGANIZATION

0.97+

Red HatORGANIZATION

0.96+

RancherORGANIZATION

0.96+

John Schultz, HPE & Kay Firth-Butterfield, WEF | HPE Discover 2022


 

>> Announcer: "theCUBE" presents HPE Discover 2022, brought to you by HPE. >> Greetings from Las Vegas, everyone. Lisa Martin, here with Dave Vellante. We are live at HPE Discover 2022 with about 8,000 folks here at The Sands Expo Convention Center. First HPE Discover in three years, everyone jammed in that keynote room, it was standing in only. Dave and I have a couple of exciting guests we're proud to introduce you to. Please, welcome back to "theCUBE," John Schultz, the EVP and general counsel of HPE. Great to have you back here. And Kay Firth-Butterfield, the head of AI and machine learning at the World Economic Forum. Kay, thank you so much for joining us. >> Thank you. It's an absolute pleasure. >> Isn't it great to be back in person? >> Fantastic. >> John, we were saying that. >> Fantastic. >> Last time you were on "theCUBE", it was Cube Virtual. Now, here we are back. A lot of news this morning, a lot's going on. The Edge to Cloud Conferences is the theme this year. In today's Edge to Cloud world, so much data being generated at the edge, it's just going to keep proliferating. AI plays a key role in helping to synthesize that, analyze large volumes of data. Can you start by talking about the differences of the two? The synergies, what you see? >> Yeah. Absolutely. And again, it is great to be back with the two of you, and great to be with Kay, who is a leading light in the world of AI, and particularly, AI responsibility. And so, we're going to talk a little bit about that. But really, this synergistic effect between data and AI, is as tight as they come. Really, data is just the raw materials by which we drive actionable insight. And at the end of the day, it's really about insights, and that speed to insight to make the difference. AI is really what is powering our ability to take vast amounts of data. Amounts of data that we'd never conceived of, being able to process before and bring it together into actionable insights. And it's simplest form, right? AI is simply making computers do what humans used to do, but the power of computing, what you heard about frontier on the main stage today, allows us to use technology to solve problems so complex that it would take humans millions of years to do it. So, this relationship between data and AI, it's incredibly tight. You need the right raw materials. You need the right engine, that is the AI, and then you will generate insights that could really change the world. >> So, Kay, there's a data point from the World Economic Forum which really caught my attention. It says the 15.7 billion of GDP growth is going to be a result of AI by 2030, 15.7 billion added. That includes the dilutive effects where we're replacing humans with machines. What is driving this in this incremental growth? >> Well, I think obviously, it's the access to the huge amounts of data that John pointed out. But one of the things that we have to remember about, AI is that actually, AI is pretty dumb unless you give it nice, clean, organized data. And so, it's not just all data, but it's data that has been through a process that enables the AI to gain insights from it. And so, what is it? It's the compute power, the ever increasing compute power. So, in the past, we would never have thought that we could use some of the new things that we're seeing in machine learning, so even deep learning. It's only been about for a small length of time, but it's really with the compute power, with the amount of data, being able to put AI on steroids, for luck of a better analogy. And I think it's also that we are now in business, and society, being able to see some of the benefits that can be generated from AI. Listening to Oakridge talk about the medical science advances that we can create for human beings, that's extraordinary. But we're also seeing that across business. >> That's why I was going to add. As impressive as those economic figures are in terms of what value it could add from a pure financial perspective? It's really the problems that could be solved. If you think about some of the things that happened in the pandemic, and what virtual experience allowed with a phone or with a tablet to check in with a doctor who was going to curate your COVID test, right? When they invented the iPhone, nobody thought that was going to be the use. AI has that same promise, but really on a macro global scale, some of the biggest problems we're trying to solve. So, huge opportunity, but as we're going to talk about a little later, huge risk for it to be misused if it's not guided and aimed in the right direction. >> Absolutely. >> That's okay. Maybe talk about that? >> Well, I was just going to come back about some of the benefits. California has been over the last 10 years trying to reduce emissions. One wildfire, absolutely wiped out all that good work over 10 years. But with AI, we've been developing an application that allows us to say, "Tomorrow, at this location, you will have a wildfire. So, please send your services to that location." That's the power of artificial intelligence to really help with things like climate change. >> Absolutely. >> Is that a probability model that's running somewhere? >> Yeah. Absolutely >> So, I wanted to ask you, but a lot of AI today, is modeling that's done, and the edge, you mentioned the iPhone, with all this power and new processors. AI inferencing at the edge in real time making real time decisions. So, one example is predicting, the other is there's actually something going on in this place. What do you see there? >> Yeah, so, I mean, yes we are using a predictive tool to ingest the data on weather, and all these other factors in order to say, "Please put your services here tomorrow at this time." But maybe you want to talk about the next edge. >> Yeah. Yeah. Well, and I think it's not just grabbing the data to do some predictive modeling. It's now creating that end-to-end value chain where the actions are being taken in real time based on the information that's being processed, especially out at the edge. So, you're ending up, not just with predictive modeling, but it's actually transferring into actual action on the ground that's happening... You know, we like to say automagically. So, to the point where you can be making real time changes based on information that continues to make you smarter and smarter. So, it's not just a group of people taking the inputs out of a model and figuring out, okay now what am I going to do with it? The system end-to-end, allows it to happen in a way that drives a time to value that is beyond anything we've seen in the pas- >> In every industry? >> In every industry. >> Absolutely, and that's something we learned during the pandemic, one of the many things. Access to real time data to actually glean those insights that can be acted on, is no longer a nice to have. >> No. >> For companies in any industry they've got to have that now, they've got to use it as their competitive advantage. Where do you see when you're talking with customers, John? Where are they in that capability and leveraging AI on steroids, as I said? >> Yeah. I think it varies. I mean, certainly I think as you look in the medical field, et cetera, I mean, I think they've been very comfortable, and that continues to up. The use cases are so numerous there, that in some ways we've only scratched the surface, I think. But there's a high degree of acceptance, and people see the promise. Manufacturing's another area where automation and relying on some form of what used to be kind of analog intelligence, people are very comfortable with. I would say candidly, I would say the public sector and government is the furthest behind. It may be used for intelligence purposes, and things like that, but in terms of advancing overall, the common good, I think we're trailing behind there. So, that's why things like the partnership with Oak Ridge National Laboratory, and some of the other things we're seeing. That's why organizations like the World Economic Forum are so important, because we've got to make sure that this isn't just a private sector piece, It's not just about commercialization, and finding that next cost savings. It really should be about, how do you solve the world's biggest problems and do in a way that's smarter than we've ever been able to do it before? >> It's interesting, you say public sectors is behind because in some respects, they're really advanced, but they're not sharing that because it's secretive. >> Yeah. >> Right? >> That's very fair. >> Yeah. So, Kay, the other interesting stat, was that by 2023 this is like next year, 6.8 trillion will be spent on digital transformation. So, there's this intersection of data. I mean, to me, digital is data. But a lot of it was sort of, we always talk about the acceleration 'cause of the pandemic. If you weren't a digital business you were out of business, and people sort of rushed, I call it the force-march to digital. And now, are people stepping back and saying, "Okay, what can we actually do?" And maybe being more planful? Maybe you could talk about the sort of that roadmap? >> Sure. I think that that's true. And whilst I agree with John, we also see a lot of small... A lot of companies that are really only at proof of value for AI at the moment. So, we need to ensure that everybody, we take everybody, not just the governments, but everybody with us. And one of the things I'm often asked, is if you're a small or medium-sized enterprise, how can you begin to use AI at scale? And I think that's one of the exciting things about building a platform. >> That's right. >> And enabling people to use that. I think that there is also, the fact that we need to take everybody with us on this adventure because AI is so important. And it's not just important in the way it's currently being used. But if we think about these new frontier technologies like Metaverse, for example. What's the Metaverse except an application of AI? But if we don't take everybody on the journey now, then when we are using applications in the Metaverse, or building applications in the Metaverse what happens at that point? >> Think about if only certain groups of people or certain companies had access to wifi, or had access to cellular, or had access to a phone, right? The advantage and the inequality would be manifest, right? We have to think of AI and super computing in the same way, because they are going to be these raw ingredients that are going to drive the future. And if they are not, if there isn't some level of AI equality, I think the potential negative consequences of that, are incredibly high, especially in the developing world. >> Talk about it from a responsibility perspective? Getting everybody on board is challenging from a cultural standpoint, but organizations have to do it as you both articulated. But then every time we talk about AI, we've got to talk about it's used responsibly. Kay, what are your thoughts there? What are you seeing out in the field? >> Yeah, absolutely. And I started working in this in about 2014 when there were maybe a handful of us. What's exciting for me, is that now you hear it on people's lips, much more. But we still got a long way to go. We still got that understanding to happen in companies that although you might, for example, be a drug discovery company, you are probably using AI not just in drug discovery but in a number of backroom operations such as human resources, for example. We know the use of AI and human resources is very problematic. And is about to be legislated against, or at least be set up as a high risk problem use of AI by the E.U. So, across the E.U, we know what happened with GDPR that it became something that lots and lots of countries used, and we expect the AI Act to also become used in that way. So, what you need, is you need not only for companies to understand that they are gradually becoming AI companies, but also that as part of that transformation, it's taking your workers with you. It's helping them understand that AI won't actually take their jobs, it will merely help them with reskilling or working better in what they do. And they think it's also in actually helping the board to understand. We know lots of boards that don't have any clue about AI. And then, the whole of the C-suite and the trickle all down, and understanding that at the end, you've got tools, you've got data, and you've got people, and they all need to be working together to create that functional, responsible AI layer. >> When we think about it, really, when we think about responsible AI, really think about at least three pillars, right? The first off, is that privacy aspect. It's really that data ingestion part, which is respecting the privacy of the individuals, and making sure that you're collecting only the data you should be collecting to feed into your AI mechanism, right? The second, is that inclusivity and equality aspect. We've got to make sure that the actions that are coming out, the insights were generate, driving, really are inclusive. And that goes back to the right data sets. It goes back to the integrity in the algorithm. And then, you need to make sure that your AI is both human and humane. We have to make sure we don't take that human factor out and lose that connection to what really creates our shared humanity. Some of that's transparency, et cetera. I think all of those sound great. We've had some really interesting discussions about in practice, how challenging that's going to be, given the sophistication of this technology. >> When you say transparency, you're talking about the machine made a decision. I have to see how, understand how the machine made a decision. >> Algorithmic transparency. Go ahead. >> Algorithmic transparency. And the United States is actually at the moment considering something which is called the Algorithmic Accountability Act. And so, there is a movement to particularly where somebody's livelihood is affected. Say, for example, whether you get a job, and it was the algorithm that did the pre-selection in the human resources area. So, did you get a job? No, you didn't get that job. Why didn't you get that job? Why did the algorithm- >> A mortgage would be another? >> A mortgage would be another thing. And John was talking about the data, and the way that the algorithms are created. And I think, one great example, is lots of algorithms are currently created by young men under 20. They are not necessarily representative of your target audience for that algorithm. And unless you create some diversity around that group of developers, you're going to create a product that's less than optimal. So, responsible AI, isn't just about being responsible and having a social conscience, and doing things, but in a human-centered way, it's also about your bottom line as well. >> It took us a long time to recognize the kind of the shared interest we have in climate change. And the fact that the things that are happening one part of the world, can't be divorced from the impact across the the globe. When you think about AI, and the ability to create algorithms, and engage in insights, that could happen in one part of the world, and then be transferred out, not withstanding the fact, that most other countries have said, "We wouldn't do it this way, or we would require accountability. You can see the risk." It's what we call the race to the bottom. If you think about some of the things that have happened over the time in the industrial world. Often, businesses flock to those places with the least amount of safeguards that allow them to go the fastest, regardless of the collateral damage. I think we feel that same risk exists today with AI. >> So, much more we could talk about, guys, unfortunately, we are out of time. But it's so amazing to hear where we are with AI, where companies need to be. And it's the tip of the iceberg. You're very exciting. >> Yes. >> Kay and John, thank you so much for joining Dave and me. >> Thank you. >> Thank you. >> Thank you. >> It's a pleasure. >> We want to thank you for watching this segment. Lisa Martin, with Dave Vellante for our guests. We are live at HPE Discover '22. We'll be back with our next guest in just a minute. (bright upbeat music)

Published Date : Jun 28 2022

SUMMARY :

brought to you by HPE. And Kay Firth-Butterfield, the head of AI It's an absolute pleasure. is the theme this year. and that speed to insight It says the 15.7 billion of GDP growth that enables the AI to that happened in the pandemic, That's okay. about some of the benefits. and the edge, you mentioned the iPhone, talk about the next edge. So, to the point where you can be making one of the many things. they've got to use it as and that continues to up. that because it's secretive. I call it the force-march to digital. And one of the things I'm often asked, the fact that we need to The advantage and the inequality but organizations have to do So, across the E.U, we know And that goes back to the right data sets. I have to see how, Algorithmic transparency. that did the pre-selection and the way that the and the ability to create algorithms, And it's the tip of the iceberg. Kay and John, thank you so We want to thank you

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JohnPERSON

0.99+

KayPERSON

0.99+

Dave VellantePERSON

0.99+

Lisa MartinPERSON

0.99+

Dave VellantePERSON

0.99+

DavePERSON

0.99+

John SchultzPERSON

0.99+

Algorithmic Accountability ActTITLE

0.99+

HPEORGANIZATION

0.99+

Kay Firth-ButterfieldPERSON

0.99+

Las VegasLOCATION

0.99+

15.7 billionQUANTITY

0.99+

twoQUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

6.8 trillionQUANTITY

0.99+

next yearDATE

0.99+

AI ActTITLE

0.99+

Oak Ridge National LaboratoryORGANIZATION

0.99+

World Economic ForumORGANIZATION

0.99+

2023DATE

0.99+

pandemicEVENT

0.99+

TomorrowDATE

0.99+

2030DATE

0.99+

secondQUANTITY

0.99+

tomorrowDATE

0.99+

firstQUANTITY

0.99+

oneQUANTITY

0.99+

one partQUANTITY

0.98+

bothQUANTITY

0.98+

Kay FirthPERSON

0.98+

three yearsQUANTITY

0.98+

this yearDATE

0.98+

one exampleQUANTITY

0.98+

todayDATE

0.97+

over 10 yearsQUANTITY

0.97+

millions of yearsQUANTITY

0.97+

GDPRTITLE

0.96+

COVIDOTHER

0.96+

World Economic ForumORGANIZATION

0.96+

theCUBEORGANIZATION

0.95+

this morningDATE

0.94+

HPE Discover '22EVENT

0.93+

under 20QUANTITY

0.93+

about 8,000 folksQUANTITY

0.93+

One wildfireQUANTITY

0.93+

MetaverseTITLE

0.93+

FirstQUANTITY

0.9+

ButterfieldPERSON

0.9+

United StatesLOCATION

0.9+

HPE Discover 2022EVENT

0.89+

WEFORGANIZATION

0.86+

The Sands Expo Convention CenterLOCATION

0.85+

E.ULOCATION

0.8+

last 10 yearsDATE

0.78+

Cube VirtualORGANIZATION

0.74+

three pillarsQUANTITY

0.72+

2014DATE

0.72+

aboutDATE

0.71+

E.U.ORGANIZATION

0.69+

Edge to Cloud ConferencesEVENT

0.69+

HPE DiscoverEVENT

0.68+

CaliforniaLOCATION

0.65+

EVPPERSON

0.58+

Antonio and Lisa Interview Final


 

>>Welcome lisa and thank you for being here with us today >>Antonio It's wonderful to be here with you as always. And congratulations on your launch. Very, very exciting for you. >>Well, thank you lisa and uh, we love this partnership and especially our friendship, which has been very special for me for many, many years that we have worked together, but I wanted to have a conversation with you today and obviously digital transformation is a key topic. So we know the next wave for digital transformation is here being driven by massive amounts of data and increasingly distributed world and a new set of data intensive workloads. So how do you see a lot of optimization playing a role in addressing these new requirements? >>Yeah, absolutely Antonio. And I think, you know, if you look at the depth of our partnership over the last four or five years, it's really about bringing the best to our customers. And the truth is we're in this compute mega cycle right now. So it's amazing. Um you know, when I know when you talk to customers, when we talk to customers, they all need to do more and frankly, computers becoming quite specialized. So whether, you know, you're talking about large enterprises, um, or you're talking about research institutions trying to get to the next phase of compute so that workload optimization that we're able to do with our processors, your system design and then working closely with our software partners is really the next wave of this, this compute cycle. >>So thanks lisa you talk about mega cycle. So, I want to make sure we take a moment to celebrate The launch of our new generation 10 plus compute products with the latest announcement. Hp now has the broadest a nd server portfolio in the industry spanning from the edge to exa scale. How important is this partnership and the portfolio for our customers? >>Well, um Antonio I'm so excited, first of all, congratulations on your 19 world records with Milan and gen 10 plus. It really is building on sort of our, this is our third generation of partnership with Epic. And you know, you were with me right at the very beginning actually, if you recall you joined us in Austin for our first launch of Epic, you know, four years ago and I think what we've created now is just an incredible portfolio that really does go across. You know, all of the verticals that are required. We've always talked about, how do we customize and make things easier for our customers to use together? And so very excited about your portfolio, very excited about our partnership and more importantly, what we can do for our joint customers. >>It's amazing to see 19 world records. I think I'm really proud of the work our joint team do every generation, raising the bar. And that's where, you know, we, we think we have a shared goal of ensuring our customers get the solution, the services they need any way they want it. And one way we are addressing that need is by offering what we call as a service delivered to HP Green Lake. So let me ask a question, What feedback are you hearing from your customers with respect to choice, meaning consuming as a service? This new solutions? >>Yeah, great point. I think, first of all, you know, HP Green Lake is very, very impressive. So, congratulations to really having that solution. And I think we're hearing the same thing from customers and you know, the truth is, um, the computer infrastructure is getting more complex and everyone wants to be able to deploy, sort of the right compute at the right price point um you know, in in terms of also accelerating um time to deployment with the right security with the right quality. And I think these as a service offerings are going to become more and more important um as we go forward um in the compute capabilities and you know, Green Lake is a leadership product offering and we're very very pleased and honored to be part of it. >>Okay. Yeah. We feel uh lisa we are ahead of the competition and um you know, you think about some of our competitors is not coming with their own offerings, but I think the ability to drive joint innovation is what really differentiates us and that's why we value the partnership and what we have been doing together on given the customer's choice. Finally, you know, I know you and I above incredibly excited about the joint work with you and with the U. S. Department of Energy, the Oak Ridge National Laboratory we think about large data sets and you know and the complexity of the analytics we're running but we both are going to deliver the world first exa scale system. Which is remarkable to me. So what this milestone means to you and what type of impact do you think it will >>make? Yes Antonio I think our work with Oak Ridge National Labs and HP is just really pushing the envelope on what can be done with computing. And if you think about the science that we're going to be able to enable with the first extra scale machine, I would say there's a tremendous amount of innovation that has already gone in to the machine and we're so excited about delivering it together with HP. And you know we also think that the supercomputing technology that we're developing at this broad scale will end up being very, very important for enterprise computer as well. And so it's really an opportunity to kind of take that bleeding edge and really deploy it over the next few years. So super excited about it. I think you and I have a lot to do over the next few months here, but it's an example of the great partnership and and how much we're able to do when we put our teams together, um, to really create that innovation. >>I couldn't agree more. I mean, this is an incredible milestone for for us, for our industry and honestly for the country in many ways. And we have many, many people working 24 by seven to deliver against this mission. And it's going to change the future of compute no question about it. Um, and then honestly put it to work where we needed the most to advance life science to find cures, to improve the way people live and work, lisa, thank you again for joining us today and thank you more most importantly for the incredible partnership and, and the friendship. I really enjoy working with you and your team and together, I think we can change this industry once again. So thanks for your time today. >>Thank you so much Antonio and congratulations again to you and the entire HPI team for just a fantastic portfolio launch. >>Thank you.

Published Date : Apr 23 2021

SUMMARY :

Antonio It's wonderful to be here with you as always. So how do you see a lot of optimization playing a role in addressing So whether, you know, you're talking about large enterprises, um, or you're talking about research So thanks lisa you talk about mega cycle. And you know, you were with me right at the very beginning actually, if you recall you joined us in Austin So let me ask a question, What feedback are you hearing from your customers with respect to choice, And I think we're hearing the same thing from customers and you know, the truth is, um, So what this milestone means to you and what type of impact do you think it will And if you think about the science that we're going to be able to enable with the first extra I really enjoy working with you and your team and together, Thank you so much Antonio and congratulations again to you and the entire HPI team for just a fantastic

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AntonioPERSON

0.99+

lisaPERSON

0.99+

AustinLOCATION

0.99+

EpicORGANIZATION

0.99+

Oak Ridge National LabsORGANIZATION

0.99+

HPORGANIZATION

0.99+

LisaPERSON

0.99+

U. S. Department of EnergyORGANIZATION

0.99+

todayDATE

0.99+

third generationQUANTITY

0.99+

four years agoDATE

0.99+

Green LakeORGANIZATION

0.99+

oneQUANTITY

0.98+

24QUANTITY

0.98+

bothQUANTITY

0.96+

Oak Ridge National LaboratoryORGANIZATION

0.96+

HP Green LakeORGANIZATION

0.96+

sevenQUANTITY

0.95+

first launchQUANTITY

0.95+

19 world recordsQUANTITY

0.95+

firstQUANTITY

0.94+

first extra scaleQUANTITY

0.84+

10 plus computeQUANTITY

0.81+

first exa scale systemQUANTITY

0.81+

five yearsQUANTITY

0.77+

waveEVENT

0.75+

HPIORGANIZATION

0.72+

few yearsDATE

0.69+

MilanORGANIZATION

0.69+

gen 10 plusQUANTITY

0.68+

monthsDATE

0.59+

lastDATE

0.54+

fourQUANTITY

0.47+

HPE Accelerating Next | HPE Accelerating Next 2021


 

momentum is gathering [Music] business is evolving more and more quickly moving through one transformation to the next because change never stops it only accelerates this is a world that demands a new kind of compute deployed from edge to core to cloud compute that can outpace the rapidly changing needs of businesses large and small unlocking new insights turning data into outcomes empowering new experiences compute that can scale up or scale down with minimum investment and effort guided by years of expertise protected by 360-degree security served up as a service to let it control own and manage massive workloads that weren't there yesterday and might not be there tomorrow this is the compute power that will drive progress giving your business what you need to be ready for what's next this is the compute power of hpe delivering your foundation for digital transformation welcome to accelerating next thank you so much for joining us today we have a great program we're going to talk tech with experts we'll be diving into the changing economics of our industry and how to think about the next phase of your digital transformation now very importantly we're also going to talk about how to optimize workloads from edge to exascale with full security and automation all coming to you as a service and with me to kick things off is neil mcdonald who's the gm of compute at hpe neil always a pleasure great to have you on it's great to see you dave now of course when we spoke a year ago you know we had hoped by this time we'd be face to face but you know here we are again you know this pandemic it's obviously affected businesses and people in in so many ways that we could never have imagined but in the reality is in reality tech companies have literally saved the day let's start off how is hpe contributing to helping your customers navigate through things that are so rapidly shifting in the marketplace well dave it's nice to be speaking to you again and i look forward to being able to do this in person some point the pandemic has really accelerated the need for transformation in businesses of all sizes more than three-quarters of cios report that the crisis has forced them to accelerate their strategic agendas organizations that were already transforming or having to transform faster and organizations that weren't on that journey yet are having to rapidly develop and execute a plan to adapt to this new reality our customers are on this journey and they need a partner for not just the compute technology but also the expertise and economics that they need for that digital transformation and for us this is all about unmatched optimization for workloads from the edge to the enterprise to exascale with 360 degree security and the intelligent automation all available in that as a service experience well you know as you well know it's a challenge to manage through any transformation let alone having to set up remote workers overnight securing them resetting budget priorities what are some of the barriers that you see customers are working hard to overcome simply per the organizations that we talk with are challenged in three areas they need the financial capacity to actually execute a transformation they need the access to the resource and the expertise needed to successfully deliver on a transformation and they have to find the way to match their investments with the revenues for the new services that they're putting in place to service their customers in this environment you know we have a data partner called etr enterprise technology research and the spending data that we see from them is it's quite dramatic i mean last year we saw a contraction of roughly five percent of in terms of i.t spending budgets etc and this year we're seeing a pretty significant rebound maybe a six to seven percent growth range is the prediction the challenge we see is organizations have to they've got to iterate on that i call it the forced march to digital transformation and yet they also have to balance their investments for example at the corporate headquarters which have kind of been neglected is there any help in sight for the customers that are trying to reduce their spend and also take advantage of their investment capacity i think you're right many businesses are understandably reluctant to loosen the purse strings right now given all of the uncertainty and often a digital transformation is viewed as a massive upfront investment that will pay off in the long term and that can be a real challenge in an environment like this but it doesn't need to be we work through hpe financial services to help our customers create the investment capacity to accelerate the transformation often by leveraging assets they already have and helping them monetize them in order to free up the capacity to accelerate what's next for their infrastructure and for their business so can we drill into that i wonder if we could add some specifics i mean how do you ensure a successful outcome what are you really paying attention to as those sort of markers for success well when you think about the journey that an organization is going through it's tough to be able to run the business and transform at the same time and one of the constraints is having the people with enough bandwidth and enough expertise to be able to do both so we're addressing that in two ways for our customers one is by helping them confidently deploy new solutions which we have engineered leveraging decades of expertise and experience in engineering to deliver those workload optimized portfolios that take the risk and the complexity out of assembling some of these solutions and give them a pre-packaged validated supported solution intact that simplifies that work for them but in other cases we can enhance our customers bandwidth by bringing them hpe point next experts with all of the capabilities we have to help them plan deliver and support these i.t projects and transformations organizations can get on a faster track of modernization getting greater insight and control as they do it we're a trusted partner to get the most for a business that's on this journey in making these critical compute investments to underpin the transformations and whether that's planning to optimizing to safe retirement at the end of life we can bring that expertise to bayer to help amplify what our customers already have in-house and help them accelerate and succeed in executing these transformations thank you for that neil so let's talk about some of the other changes that customers are seeing and the cloud has obviously forced customers and their suppliers to really rethink how technology is packaged how it's consumed how it's priced i mean there's no doubt in that to take green lake it's obviously a leading example of a pay as pay-as-you-scale infrastructure model and it could be applied on-prem or hybrid can you maybe give us a sense as to where you are today with green lake well it's really exciting you know from our first pay-as-you-go offering back in 2006 15 years ago to the introduction of green lake hpe has really been paving the way on consumption-based services through innovation and partnership to help meet the exact needs of our customers hpe green lake provides an experience that's the best of both worlds a simple pay-per-use technology model with the risk management of data that's under our customers direct control and it lets customers shift to everything as a service in order to free up capital and avoid that upfront expense that we talked about they can do this anywhere at any scale or any size and really hpe green lake is the cloud that comes to you like that so we've touched a little bit on how customers can maybe overcome some of the barriers to transformation what about the nature of transformations themselves i mean historically there was a lot of lip service paid to digital and and there's a lot of complacency frankly but you know that covered wrecking ball meme that so well describes that if you're not a digital business essentially you're going to be out of business so neil as things have evolved how is hpe addressed the new requirements well the new requirements are really about what customers are trying to achieve and four very common themes that we see are enabling the productivity of a remote workforce that was never really part of the plan for many organizations being able to develop and deliver new apps and services in order to service customers in a different way or drive new revenue streams being able to get insights from data so that in these tough times they can optimize their business more thoroughly and then finally think about the efficiency of an agile hybrid private cloud infrastructure especially one that now has to integrate the edge and we're really thrilled to be helping our customers accelerate all of these and more with hpe compute i want to double click on that remote workforce productivity i mean again the surveys that we see 46 percent of the cios say that productivity improved with the whole work from home remote work trend and on average those improvements were in the four percent range which is absolutely enormous i mean when you think about that how does hpe specifically you know help here what do you guys do well every organization in the world has had to adapt to a different style of working and with more remote workers than they had before and for many organizations that's going to become the new normal even post pandemic many it shops are not well equipped for the infrastructure to provide that experience because if all your workers are remote the resiliency of that infrastructure the latencies of that infrastructure the reliability of are all incredibly important so we provide comprehensive solutions expertise and as a service options that support that remote work through virtual desktop infrastructure or vdi so that our customers can support that new normal of virtual engagements online everything across industries wherever they are and that's just one example of many of the workload optimized solutions that we're providing for our customers is about taking out the guesswork and the uncertainty in delivering on these changes that they have to deploy as part of their transformation and we can deliver that range of workload optimized solutions across all of these different use cases because of our broad range of innovation in compute platforms that span from the ruggedized edge to the data center all the way up to exascale and hpc i mean that's key if you're trying to affect the digital transformation and you don't have to fine-tune you know be basically build your own optimized solutions if i can buy that rather than having to build it and rely on your r d you know that's key what else is hpe doing you know to deliver things new apps new services you know your microservices containers the whole developer trend what's going on there well that's really key because organizations are all seeking to evolve their mix of business and bring new services and new capabilities new ways to reach their customers new way to reach their employees new ways to interact in their ecosystem all digitally and that means app development and many organizations of course are embracing container technology to do that today so with the hpe container platform our customers can realize that agility and efficiency that comes with containerization and use it to provide insights to their data more and more that data of course is being machine generated or generated at the edge or the near edge and it can be a real challenge to manage that data holistically and not have silos and islands an hpe esmerald data fabric speeds the agility and access to data with a unified platform that can span across the data centers multiple clouds and even the edge and that enables data analytics that can create insights powering a data-driven production-oriented cloud-enabled analytics and ai available anytime anywhere in any scale and it's really exciting to see the kind of impact that that can have in helping businesses optimize their operations in these challenging times you got to go where the data is and the data is distributed it's decentralized so i i i like the esmerel of vision and execution there so that all sounds good but with digital transformation you get you're going to see more compute in in hybrid's deployments you mentioned edge so the surface area it's like the universe it's it's ever-expanding you mentioned you know remote work and work from home before so i'm curious where are you investing your resources from a cyber security perspective what can we count on from hpe there well you can count on continued leadership from hpe as the world's most secure industry standard server portfolio we provide an enhanced and holistic 360 degree view to security that begins in the manufacturing supply chain and concludes with a safeguarded end-of-life decommissioning and of course we've long set the bar for security with our work on silicon root of trust and we're extending that to the application tier but in addition to the security customers that are building this modern hybrid are private cloud including the integration of the edge need other elements too they need an intelligent software-defined control plane so that they can automate their compute fleets from all the way at the edge to the core and while scale and automation enable efficiency all private cloud infrastructures are competing with web scale economics and that's why we're democratizing web scale technologies like pinsando to bring web scale economics and web scale architecture to the private cloud our partners are so important in helping us serve our customers needs yeah i mean hp has really upped its ecosystem game since the the middle of last decade when when you guys reorganized it you became like even more partner friendly so maybe give us a preview of what's coming next in that regard from today's event well dave we're really excited to have hp's ceo antonio neri speaking with pat gelsinger from intel and later lisa sue from amd and later i'll have the chance to catch up with john chambers the founder and ceo of jc2 ventures to discuss the state of the market today yeah i'm jealous you guys had some good interviews coming up neil thanks so much for joining us today on the virtual cube you've really shared a lot of great insight how hpe is partnering with customers it's it's always great to catch up with you hopefully we can do so face to face you know sooner rather than later well i look forward to that and uh you know no doubt our world has changed and we're here to help our customers and partners with the technology the expertise and the economics they need for these digital transformations and we're going to bring them unmatched workload optimization from the edge to exascale with that 360 degree security with the intelligent automation and we're going to deliver it all as an as a service experience we're really excited to be helping our customers accelerate what's next for their businesses and it's been really great talking with you today about that dave thanks for having me you're very welcome it's been super neal and i actually you know i had the opportunity to speak with some of your customers about their digital transformation and the role of that hpe plays there so let's dive right in we're here on the cube covering hpe accelerating next and with me is rule siestermans who is the head of it at the netherlands cancer institute also known as nki welcome rule thank you very much great to be here hey what can you tell us about the netherlands cancer institute maybe you could talk about your core principles and and also if you could weave in your specific areas of expertise yeah maybe first introduction to the netherlands institute um we are one of the top 10 comprehensive cancers in the world and what we do is we combine a hospital for treating patients with cancer and a recent institute under one roof so discoveries we do we find within the research we can easily bring them back to the clinic and vis-a-versa so we have about 750 researchers and about 3 000 other employees doctors nurses and and my role is to uh to facilitate them at their best with it got it so i mean everybody talks about digital digital transformation to us it all comes down to data so i'm curious how you collect and take advantage of medical data specifically to support nki's goals maybe some of the challenges that your organization faces with the amount of data the speed of data coming in just you know the the complexities of data how do you handle that yeah it's uh it's it's it's challenge and uh yeah what we we have we have a really a large amount of data so we produce uh terabytes a day and we we have stored now more than one petabyte on data at this moment and yeah it's uh the challenge is to to reuse the data optimal for research and to share it with other institutions so that needs to have a flexible infrastructure for that so a fast really fast network uh big data storage environment but the real challenge is not not so much the i.t bus is more the quality of the data so we have a lot of medical systems all producing those data and how do we combine them and and yeah get the data fair so findable accessible interoperable and reusable uh for research uh purposes so i think that's the main challenge the quality of the data yeah very common themes that we hear from from other customers i wonder if you could paint a picture of your environment and maybe you can share where hpe solutions fit in what what value they bring to your organization's mission yeah i think it brings a lot of flexibility so what we did with hpe is that we we developed a software-defined data center and then a virtual workplace for our researchers and doctors and that's based on the hpe infrastructure and what we wanted to build is something that expect the needs of doctors and nurses but also the researchers and the two kind of different blood groups blood groups and with different needs so uh but we wanted to create one infrastructure because we wanted to make the connection between the hospital and the research that's that's more important so um hpe helped helped us not only with the the infrastructure itself but also designing the whole architecture of it and for example what we did is we we bought a lot of hardware and and and the hardware is really uh doing his his job between nine till five uh dennis everything is working within everyone is working within the institution but all the other time in evening and and nights hours and also the redundant environment we have for the for our healthcare uh that doesn't do nothing of much more or less uh in in those uh dark hours so what we created together with nvidia and hpe and vmware is that we we call it video by day compute by night so we reuse those those servers and those gpu capacity for computational research jobs within the research that's you mentioned flexibility for this genius and and so we're talking you said you know a lot of hard ways they're probably proliant i think synergy aruba networking is in there how are you using this environment actually the question really is when you think about nki's digital transformation i mean is this sort of the fundamental platform that you're using is it a maybe you could describe that yeah it's it's the fundamental platform to to to work on and and and what we see is that we have we have now everything in place for it but the real challenge is is the next steps we are in so we have a a software defined data center we are cloud ready so the next steps is to to make the connection to the cloud to to give more automation to our researchers so they don't have to wait a couple of weeks for it to do it but they can do it themselves with a couple of clicks so i think the basic is we are really flexible and we have a lot of opportunities for automation for example but the next step is uh to create that business value uh really for for our uh employees that's a great story and a very important mission really fascinating stuff thanks for sharing this with our audience today really appreciate your time thank you very much okay this is dave vellante with thecube stay right there for more great content you're watching accelerating next from hpe i'm really glad to have you with us today john i know you stepped out of vacation so thanks very much for joining us neil it's great to be joining you from hawaii and i love the partnership with hpe and the way you're reinventing an industry well you've always excelled john at catching market transitions and there are so many transitions and paradigm shifts happening in the market and tech specifically right now as you see companies rush to accelerate their transformation what do you see as the keys to success well i i think you're seeing actually an acceleration following the covet challenges that all of us faced and i wasn't sure that would happen it's probably at three times the paces before there was a discussion point about how quickly the companies need to go digital uh that's no longer a discussion point almost all companies are moving with tremendous feed on digital and it's the ability as the cloud moves to the edge with compute and security uh at the edge and how you deliver these services to where the majority of applications uh reside are going to determine i think the future of the next generation company leadership and it's the area that neil we're working together on in many many ways so i think it's about innovation it's about the cloud moving to the edge and an architectural play with silicon to speed up that innovation yes we certainly see our customers of all sizes trying to accelerate what's next and get that digital transformation moving even faster as a result of the environment that we're all living in and we're finding that workload focus is really key uh customers in all kinds of different scales are having to adapt and support the remote workforces with vdi and as you say john they're having to deal with the deployment of workloads at the edge with so much data getting generated at the edge and being acted upon at the edge the analytics and the infrastructure to manage that as these processes get digitized and automated is is so important for so many workflows we really believe that the choice of infrastructure partner that underpins those transformations really matters a partner that can help create the financial capacity that can help optimize your environments and enable our customers to focus on supporting their business are all super key to success and you mentioned that in the last year there's been a lot of rapid course correction for all of us a demand for velocity and the ability to deploy resources at scale is more and more needed maybe more than ever what are you hearing customers looking for as they're rolling out their digital transformation efforts well i think they're being realistic that they're going to have to move a lot faster than before and they're also realistic on core versus context they're they're their core capability is not the technology of themselves it's how to deploy it and they're we're looking for partners that can help bring them there together but that can also innovate and very often the leaders who might have been a leader in a prior generation may not be on this next move hence the opportunity for hpe and startups like vinsano to work together as the cloud moves the edge and perhaps really balance or even challenge some of the big big incumbents in this category as well as partners uniquely with our joint customers on how do we achieve their business goals tell me a little bit more about how you move from this being a technology positioning for hpe to literally helping your customers achieve their outcomes they want and and how are you changing hpe in that way well i think when you consider these transformations the infrastructure that you choose to underpin it is incredibly critical our customers need a software-defined management plan that enables them to automate so much of their infrastructure they need to be able to take faster action where the data is and to do all of this in a cloud-like experience where they can deliver their infrastructure as code anywhere from exascale through the enterprise data center to the edge and really critically they have to be able to do this securely which becomes an ever increasing challenge and doing it at the right economics relative to their alternatives and part of the right economics of course includes adopting the best practices from web scale architectures and bringing them to the heart of the enterprise and in our partnership with pensando we're working to enable these new ideas of web scale architecture and fleet management for the enterprise at scale you know what is fun is hpe has an unusual talent from the very beginning in silicon valley of working together with others and creating a win-win innovation approach if you watch what your team has been able to do and i want to say this for everybody listening you work with startups better than any other company i've seen in terms of how you do win win together and pinsando is just the example of that uh this startup which by the way is the ninth time i have done with this team a new generation of products and we're designing that together with hpe in terms of as the cloud moves to the edge how do we get the leverage out of that and produce the results for your customers on this to give the audience appeal for it you're talking with pensano alone in terms of the efficiency versus an amazon amazon web services of an order of magnitude i'm not talking 100 greater i'm talking 10x greater and things from throughput number of connections you do the jitter capability etc and it talks how two companies uniquely who believe in innovation and trust each other and have very similar cultures can work uniquely together on it how do you bring that to life with an hpe how do you get your company to really say let's harvest the advantages of your ecosystem in your advantages of startups well as you say more and more companies are faced with these challenges of hitting the right economics for the infrastructure and we see many enterprises of various sizes trying to come to terms with infrastructures that look a lot more like a service provider that require that software-defined management plane and the automation to deploy at scale and with the work we're doing with pinsando the benefits that we bring in terms of the observability and the telemetry and the encryption and the distributed network functions but also a security architecture that enables that efficiency on the individual nodes is just so key to building a competitive architecture moving forwards for an on-prem private cloud or internal service provider operation and we're really excited about the work we've done to bring that technology across our portfolio and bring that to our customers so that they can achieve those kind of economics and capabilities and go focus on their own transformations rather than building and running the infrastructure themselves artisanally and having to deal with integrating all of that great technology themselves makes tremendous sense you know neil you and i work on a board together et cetera i've watched your summarization skills and i always like to ask the question after you do a quick summary like this what are the three or four takeaways we would like for the audience to get out of our conversation well that's a great question thanks john we believe that customers need a trusted partner to work through these digital transformations that are facing them and confront the challenge of the time that the covet crisis has taken away as you said up front every organization is having to transform and transform more quickly and more digitally and working with a trusted partner with the expertise that only comes from decades of experience is a key enabler for that a partner with the ability to create the financial capacity to transform the workload expertise to get more from the infrastructure and optimize the environment so that you can focus on your own business a partner that can deliver the systems and the security and the automation that makes it easily deployable and manageable anywhere you need them at any scale whether the edge the enterprise data center or all the way up to exascale in high performance computing and can do that all as a service as we can at hpe through hpe green lake enabling our customers most critical workloads it's critical that all of that is underpinned by an ai powered digitally enabled service experience so that our customers can get on with their transformation and running their business instead of dealing with their infrastructure and really only hpe can provide this combination of capabilities and we're excited and committed to helping our customers accelerate what's next for their businesses neil it's fun i i love being your partner and your wingman our values and cultures are so similar thanks for letting me be a part of this discussion today thanks for being with us john it was great having you here oh it's friends for life okay now we're going to dig into the world of video which accounts for most of the data that we store and requires a lot of intense processing capabilities to stream here with me is jim brickmeyer who's the chief marketing and product officer at vlasics jim good to see you good to see you as well so tell us a little bit more about velocity what's your role in this tv streaming world and maybe maybe talk about your ideal customer sure sure so um we're leading provider of carrier great video solutions video streaming solutions and advertising uh technology to service providers around the globe so we primarily sell software-based solutions to uh cable telco wireless providers and broadcasters that are interested in launching their own um video streaming services to consumers yeah so this is this big time you know we're not talking about mom and pop you know a little video outfit but but maybe you can help us understand that and just the sheer scale of of the tv streaming that you're doing maybe relate it to you know the overall internet usage how much traffic are we talking about here yeah sure so uh yeah so our our customers tend to be some of the largest um network service providers around the globe uh and if you look at the uh the video traffic um with respect to the total amount of traffic that that goes through the internet video traffic accounts for about 90 of the total amount of data that uh that traverses the internet so video is uh is a pretty big component of um of how people when they look at internet technologies they look at video streaming technologies uh you know this is where we we focus our energy is in carrying that traffic as efficiently as possible and trying to make sure that from a consumer standpoint we're all consumers of video and uh make sure that the consumer experience is a high quality experience that you don't experience any glitches and that that ultimately if people are paying for that content that they're getting the value that they pay for their for their money uh in their entertainment experience i think people sometimes take it for granted it's like it's like we we all forget about dial up right those days are long gone but the early days of video was so jittery and restarting and and the thing too is that you know when you think about the pandemic and the boom in streaming that that hit you know we all sort of experienced that but the service levels were pretty good i mean how much how much did the pandemic affect traffic what kind of increases did you see and how did that that impact your business yeah sure so uh you know obviously while it was uh tragic to have a pandemic and have people locked down what we found was that when people returned to their homes what they did was they turned on their their television they watched on on their mobile devices and we saw a substantial increase in the amount of video streaming traffic um over service provider networks so what we saw was on the order of 30 to 50 percent increase in the amount of data that was traversing those networks so from a uh you know from an operator's standpoint a lot more traffic a lot more challenging to to go ahead and carry that traffic a lot of work also on our behalf and trying to help operators prepare because we could actually see geographically as the lockdowns happened [Music] certain areas locked down first and we saw that increase so we were able to help operators as as all the lockdowns happened around the world we could help them prepare for that increase in traffic i mean i was joking about dial-up performance again in the early days of the internet if your website got fifty percent more traffic you know suddenly you were you your site was coming down so so that says to me jim that architecturally you guys were prepared for that type of scale so maybe you could paint a picture tell us a little bit about the solutions you're using and how you differentiate yourself in your market to handle that type of scale sure yeah so we so we uh we really are focused on what we call carrier grade solutions which are designed for that massive amount of scale um so we really look at it you know at a very granular level when you look um at the software and and performance capabilities of the software what we're trying to do is get as many streams as possible out of each individual piece of hardware infrastructure so that we can um we can optimize first of all maximize the uh the efficiency of that device make sure that the costs are very low but one of the other challenges is as you get to millions and millions of streams and that's what we're delivering on a daily basis is millions and millions of video streams that you have to be able to scale those platforms out um in an effective in a cost effective way and to make sure that it's highly resilient as well so we don't we don't ever want a consumer to have a circumstance where a network glitch or a server issue or something along those lines causes some sort of uh glitch in their video and so there's a lot of work that we do in the software to make sure that it's a very very seamless uh stream and that we're always delivering at the very highest uh possible bit rate for consumers so that if you've got that giant 4k tv that we're able to present a very high resolution picture uh to those devices and what's the infrastructure look like underneath you you're using hpe solutions where do they fit in yeah that's right yeah so we uh we've had a long-standing partnership with hpe um and we work very closely with them to try to identify the specific types of hardware that are ideal for the the type of applications that we run so we run video streaming applications and video advertising applications targeted kinds of video advertising technologies and when you look at some of these applications they have different types of requirements in some cases it's uh throughput where we're taking a lot of data in and streaming a lot of data out in other cases it's storage where we have to have very high density high performance storage systems in other cases it's i gotta have really high capacity storage but the performance does not need to be quite as uh as high from an io perspective and so we work very closely with hpe on trying to find exactly the right box for the right application and then beyond that also talking with our customers to understand there are different maintenance considerations associated with different types of hardware so we tend to focus on as much as possible if we're going to place servers deep at the edge of the network we will make everything um maintenance free or as maintenance free as we can make it by putting very high performance solid state storage into those servers so that uh we we don't have to physically send people to those sites to uh to do any kind of maintenance so it's a it's a very cooperative relationship that we have with hpe to try to define those boxes great thank you for that so last question um maybe what the future looks like i love watching on my mobile device headphones in no distractions i'm getting better recommendations how do you see the future of tv streaming yeah so i i think the future of tv streaming is going to be a lot more personal right so uh this is what you're starting to see through all of the services that are out there is that most of the video service providers whether they're online providers or they're your traditional kinds of paid tv operators is that they're really focused on the consumer and trying to figure out what is of value to you personally in the past it used to be that services were one size fits all and um and so everybody watched the same program right at the same time and now that's uh that's we have this technology that allows us to deliver different types of content to people on different screens at different times and to advertise to those individuals and to cater to their individual preferences and so using that information that we have about how people watch and and what people's interests are we can create a much more engaging and compelling uh entertainment experience on all of those screens and um and ultimately provide more value to consumers awesome story jim thanks so much for keeping us helping us just keep entertained during the pandemic i really appreciate your time sure thanks all right keep it right there everybody you're watching hpes accelerating next first of all pat congratulations on your new role as intel ceo how are you approaching your new role and what are your top priorities over your first few months thanks antonio for having me it's great to be here with you all today to celebrate the launch of your gen 10 plus portfolio and the long history that our two companies share in deep collaboration to deliver amazing technology to our customers together you know what an exciting time it is to be in this industry technology has never been more important for humanity than it is today everything is becoming digital and driven by what i call the four key superpowers the cloud connectivity artificial intelligence and the intelligent edge they are super powers because each expands the impact of the others and together they are reshaping every aspect of our lives and work in this landscape of rapid digital disruption intel's technology and leadership products are more critical than ever and we are laser focused on bringing to bear the depth and breadth of software silicon and platforms packaging and process with at scale manufacturing to help you and our customers capitalize on these opportunities and fuel their next generation innovations i am incredibly excited about continuing the next chapter of a long partnership between our two companies the acceleration of the edge has been significant over the past year with this next wave of digital transformation we expect growth in the distributed edge and age build out what are you seeing on this front like you said antonio the growth of edge computing and build out is the next key transition in the market telecommunications service providers want to harness the potential of 5g to deliver new services across multiple locations in real time as we start building solutions that will be prevalent in a 5g digital environment we will need a scalable flexible and programmable network some use cases are the massive scale iot solutions more robust consumer devices and solutions ar vr remote health care autonomous robotics and manufacturing environments and ubiquitous smart city solutions intel and hp are partnering to meet this new wave head on for 5g build out and the rise of the distributed enterprise this build out will enable even more growth as businesses can explore how to deliver new experiences and unlock new insights from the new data creation beyond the four walls of traditional data centers and public cloud providers network operators need to significantly increase capacity and throughput without dramatically growing their capital footprint their ability to achieve this is built upon a virtualization foundation an area of intel expertise for example we've collaborated with verizon for many years and they are leading the industry and virtualizing their entire network from the core the edge a massive redesign effort this requires advancements in silicon and power management they expect intel to deliver the new capabilities in our roadmap so ecosystem partners can continue to provide innovative and efficient products with this optimization for hybrid we can jointly provide a strong foundation to take on the growth of data-centric workloads for data analytics and ai to build and deploy models faster to accelerate insights that will deliver additional transformation for organizations of all types the network transformation journey isn't easy we are continuing to unleash the capabilities of 5g and the power of the intelligent edge yeah the combination of the 5g built out and the massive new growth of data at the edge are the key drivers for the age of insight these new market drivers offer incredible new opportunities for our customers i am excited about recent launch of our new gen 10 plus portfolio with intel together we are laser focused on delivering joint innovation for customers that stretches from the edge to x scale how do you see new solutions that this helping our customers solve the toughest challenges today i talked earlier about the superpowers that are driving the rapid acceleration of digital transformation first the proliferation of the hybrid cloud is delivering new levels of efficiency and scale and the growth of the cloud is democratizing high-performance computing opening new frontiers of knowledge and discovery next we see ai and machine learning increasingly infused into every application from the edge to the network to the cloud to create dramatically better insights and the rapid adoption of 5g as i talked about already is fueling new use cases that demand lower latencies and higher bandwidth this in turn is pushing computing to the edge closer to where the data is created and consumed the confluence of these trends is leading to the biggest and fastest build out of computing in human history to keep pace with this rapid digital transformation we recognize that infrastructure has to be built with the flexibility to support a broad set of workloads and that's why over the last several years intel has built an unmatched portfolio to deliver every component of intelligent silicon our customers need to move store and process data from the cpus to fpgas from memory to ssds from ethernet to switch silicon to silicon photonics and software our 3rd gen intel xeon scalable processors and our data centric portfolio deliver new core performance and higher bandwidth providing our customers the capabilities they need to power these critical workloads and we love seeing all the unique ways customers like hpe leverage our technology and solution offerings to create opportunities and solve their most pressing challenges from cloud gaming to blood flow to brain scans to financial market security the opportunities are endless with flexible performance i am proud of the amazing innovation we are bringing to support our customers especially as they respond to new data-centric workloads like ai and analytics that are critical to digital transformation these new requirements create a need for compute that's warlord optimized for performance security ease of use and the economics of business now more than ever compute matters it is the foundation for this next wave of digital transformation by pairing our compute with our software and capabilities from hp green lake we can support our customers as they modernize their apps and data quickly they seamlessly and securely scale them anywhere at any size from edge to x scale but thank you for joining us for accelerating next today i know our audience appreciated hearing your perspective on the market and how we're partnering together to support their digital transformation journey i am incredibly excited about what lies ahead for hp and intel thank you thank you antonio great to be with you today we just compressed about a decade of online commerce progress into about 13 or 14 months so now we're going to look at how one retailer navigated through the pandemic and what the future of their business looks like and with me is alan jensen who's the chief information officer and senior vice president of the sawing group hello alan how are you fine thank you good to see you hey look you know when i look at the 100 year history plus of your company i mean it's marked by transformations and some of them are quite dramatic so you're denmark's largest retailer i wonder if you could share a little bit more about the company its history and and how it continues to improve the customer experience well at the same time keeping costs under control so vital in your business yeah yeah the company founded uh approximately 100 years ago with a department store in in oahu's in in denmark and i think in the 60s we founded the first supermarket in in denmark with the self-service and combined textile and food in in the same store and in beginning 70s we founded the first hyper market in in denmark and then the this calendar came from germany early in in 1980 and we started a discount chain and so we are actually building department store in hyber market info in in supermarket and in in the discount sector and today we are more than 1 500 stores in in three different countries in in denmark poland and germany and especially for the danish market we have a approximately 38 markets here and and is the the leader we have over the last 10 years developed further into online first in non-food and now uh in in food with home delivery with click and collect and we have done some magnetism acquisition in in the convenience with mailbox solutions to our customers and we have today also some restaurant burger chain and and we are running the starbuck in denmark so i can you can see a full plate of different opportunities for our customer in especially denmark it's an awesome story and of course the founder's name is still on the masthead what a great legacy now of course the pandemic is is it's forced many changes quite dramatic including the the behaviors of retail customers maybe you could talk a little bit about how your digital transformation at the sawing group prepared you for this shift in in consumption patterns and any other challenges that that you faced yeah i think uh luckily as for some of the you can say the core it solution in in 19 we just roll out using our computers via direct access so you can work from anywhere whether you are traveling from home and so on we introduced a new agile scrum delivery model and and we just finalized the rolling out teams in in in january february 20 and that was some very strong thing for suddenly moving all our employees from from office to to home and and more or less overnight we succeed uh continuing our work and and for it we have not missed any deadline or task for the business in in 2020 so i think that was pretty awesome to to see and for the business of course the pandemic changed a lot as the change in customer behavior more or less overnight with plus 50 80 on the online solution forced us to do some different priorities so we were looking at the food home delivery uh and and originally expected to start rolling out in in 2022 uh but took a fast decision in april last year to to launch immediately and and we have been developing that uh over the last eight months and has been live for the last three months now in the market so so you can say the pandemic really front loaded some of our strategic actions for for two to three years uh yeah that was very exciting what's that uh saying luck is the byproduct of great planning and preparation so let's talk about when you're in a company with some strong financial situation that you can move immediately with investment when you take such decision then then it's really thrilling yeah right awesome um two-part question talk about how you leverage data to support the solid groups mission and you know drive value for customers and maybe you could talk about some of the challenges you face with just the amount of data the speed of data et cetera yeah i said data is everything when you are in retail as a retailer's detail as you need to monitor your operation down to each store eats department and and if you can say we have challenge that that is that data is just growing rapidly as a year by year it's growing more and more because you are able to be more detailed you're able to capture more data and for a company like ours we need to be updated every morning as a our fully updated sales for all unit department single sku selling in in the stores is updated 3 o'clock in the night and send out to all top management and and our managers all over the company it's actually 8 000 reports going out before six o'clock every day in the morning we have introduced a loyalty program and and you are capturing a lot of data on on customer behavior what is their preferred offers what is their preferred time in the week for buying different things and all these data is now used to to personalize our offers to our cost of value customers so we can be exactly hitting the best time and and convert it to sales data is also now used for what we call intelligent price reductions as a so instead of just reducing prices with 50 if it's uh close to running out of date now the system automatically calculate whether a store has just enough to to finish with full price before end of day or actually have much too much and and need to maybe reduce by 80 before as being able to sell so so these automated [Music] solutions built on data is bringing efficiency into our operation wow you make it sound easy these are non-trivial items so congratulations on that i wonder if we could close hpe was kind enough to introduce us tell us a little bit about the infrastructure the solutions you're using how they differentiate you in the market and i'm interested in you know why hpe what distinguishes them why the choice there yeah as a when when you look out a lot is looking at moving data to the cloud but we we still believe that uh due to performance due to the availability uh more or less on demand we we still don't see the cloud uh strong enough for for for selling group uh capturing all our data we have been quite successfully having one data truth across the whole con company and and having one just one single bi solution and having that huge amount of data i think we have uh one of the 10 largest sub business warehouses in global and but on the other hand we also want to be agile and want to to scale when needed so getting close to a cloud solution we saw it be a green lake as a solution getting close to the cloud but still being on-prem and could deliver uh what we need to to have a fast performance on on data but still in a high quality and and still very secure for us to run great thank you for that and thank alan thanks so much for your for your time really appreciate your your insights and your congratulations on the progress and best of luck in the future thank you all right keep it right there we have tons more content coming you're watching accelerating next from hpe [Music] welcome lisa and thank you for being here with us today antonio it's wonderful to be here with you as always and congratulations on your launch very very exciting for you well thank you lisa and we love this partnership and especially our friendship which has been very special for me for the many many years that we have worked together but i wanted to have a conversation with you today and obviously digital transformation is a key topic so we know the next wave of digital transformation is here being driven by massive amounts of data an increasingly distributed world and a new set of data intensive workloads so how do you see world optimization playing a role in addressing these new requirements yeah no absolutely antonio and i think you know if you look at the depth of our partnership over the last you know four or five years it's really about bringing the best to our customers and you know the truth is we're in this compute mega cycle right now so it's amazing you know when i know when you talk to customers when we talk to customers they all need to do more and and frankly compute is becoming quite specialized so whether you're talking about large enterprises or you're talking about research institutions trying to get to the next phase of uh compute so that workload optimization that we're able to do with our processors your system design and then you know working closely with our software partners is really the next wave of this this compute cycle so thanks lisa you talk about mega cycle so i want to make sure we take a moment to celebrate the launch of our new generation 10 plus compute products with the latest announcement hp now has the broadest amd server portfolio in the industry spanning from the edge to exascale how important is this partnership and the portfolio for our customers well um antonio i'm so excited first of all congratulations on your 19 world records uh with uh milan and gen 10 plus it really is building on you know sort of our you know this is our third generation of partnership with epic and you know you are with me right at the very beginning actually uh if you recall you joined us in austin for our first launch of epic you know four years ago and i think what we've created now is just an incredible portfolio that really does go across um you know all of the uh you know the verticals that are required we've always talked about how do we customize and make things easier for our customers to use together and so i'm very excited about your portfolio very excited about our partnership and more importantly what we can do for our joint customers it's amazing to see 19 world records i think i'm really proud of the work our joint team do every generation raising the bar and that's where you know we we think we have a shared goal of ensuring that customers get the solution the services they need any way they want it and one way we are addressing that need is by offering what we call as a service delivered to hp green lake so let me ask a question what feedback are you hearing from your customers with respect to choice meaning consuming as a service these new solutions yeah now great point i think first of all you know hpe green lake is very very impressive so you know congratulations um to uh to really having that solution and i think we're hearing the same thing from customers and you know the truth is the compute infrastructure is getting more complex and everyone wants to be able to deploy sort of the right compute at the right price point um you know in in terms of also accelerating time to deployment with the right security with the right quality and i think these as a service offerings are going to become more and more important um as we go forward in the compute uh you know capabilities and you know green lake is a leadership product offering and we're very very you know pleased and and honored to be part of it yeah we feel uh lisa we are ahead of the competition and um you know you think about some of our competitors now coming with their own offerings but i think the ability to drive joint innovation is what really differentiate us and that's why we we value the partnership and what we have been doing together on giving the customers choice finally you know i know you and i are both incredibly excited about the joint work we're doing with the us department of energy the oak ridge national laboratory we think about large data sets and you know and the complexity of the analytics we're running but we both are going to deliver the world's first exascale system which is remarkable to me so what this milestone means to you and what type of impact do you think it will make yes antonio i think our work with oak ridge national labs and hpe is just really pushing the envelope on what can be done with computing and if you think about the science that we're going to be able to enable with the first exascale machine i would say there's a tremendous amount of innovation that has already gone in to the machine and we're so excited about delivering it together with hpe and you know we also think uh that the super computing technology that we're developing you know at this broad scale will end up being very very important for um you know enterprise compute as well and so it's really an opportunity to kind of take that bleeding edge and really deploy it over the next few years so super excited about it i think you know you and i have a lot to do over the uh the next few months here but it's an example of the great partnership and and how much we're able to do when we put our teams together um to really create that innovation i couldn't agree more i mean this is uh an incredible milestone for for us for our industry and honestly for the country in many ways and we have many many people working 24x7 to deliver against this mission and it's going to change the future of compute no question about it and then honestly put it to work where we need it the most to advance life science to find cures to improve the way people live and work but lisa thank you again for joining us today and thank you more most importantly for the incredible partnership and and the friendship i really enjoy working with you and your team and together i think we can change this industry once again so thanks for your time today thank you so much antonio and congratulations again to you and the entire hpe team for just a fantastic portfolio launch thank you okay well some pretty big hitters in those keynotes right actually i have to say those are some of my favorite cube alums and i'll add these are some of the execs that are stepping up to change not only our industry but also society and that's pretty cool and of course it's always good to hear from the practitioners the customer discussions have been great so far today now the accelerating next event continues as we move to a round table discussion with krista satrathwaite who's the vice president and gm of hpe core compute and krista is going to share more details on how hpe plans to help customers move ahead with adopting modern workloads as part of their digital transformations krista will be joined by hpe subject matter experts chris idler who's the vp and gm of the element and mark nickerson director of solutions product management as they share customer stories and advice on how to turn strategy into action and realize results within your business thank you for joining us for accelerate next event i hope you're enjoying it so far i know you've heard about the industry challenges the i.t trends hpe strategy from leaders in the industry and so today what we want to do is focus on going deep on workload solutions so in the most important workload solutions the ones we always get asked about and so today we want to share with you some best practices some examples of how we've helped other customers and how we can help you all right with that i'd like to start our panel now and introduce chris idler who's the vice president and general manager of the element chris has extensive uh solution expertise he's led hpe solution engineering programs in the past welcome chris and mark nickerson who is the director of product management and his team is responsible for solution offerings making sure we have the right solutions for our customers welcome guys thanks for joining me thanks for having us krista yeah so i'd like to start off with one of the big ones the ones that we get asked about all the time what we've been all been experienced in the last year remote work remote education and all the challenges that go along with that so let's talk a little bit about the challenges that customers have had in transitioning to this remote work and remote education environment uh so i i really think that there's a couple of things that have stood out for me when we're talking with customers about vdi first obviously there was a an unexpected and unprecedented level of interest in that area about a year ago and we all know the reasons why but what it really uncovered was how little planning had gone into this space around a couple of key dynamics one is scale it's one thing to say i'm going to enable vdi for a part of my workforce in a pre-pandemic environment where the office was still the the central hub of activity for work uh it's a completely different scale when you think about okay i'm going to have 50 60 80 maybe 100 of my workforce now distributed around the globe um whether that's in an educational environment where now you're trying to accommodate staff and students in virtual learning uh whether that's uh in the area of things like uh formula one racing where we had uh the desire to still have events going on but the need for a lot more social distancing not as many people able to be trackside but still needing to have that real-time experience this really manifested in a lot of ways and scale was something that i think a lot of customers hadn't put as much thought into initially the other area is around planning for experience a lot of times the vdi experience was planned out with very specific workloads or very specific applications in mind and when you take it to a more broad-based environment if we're going to support multiple functions multiple lines of business there hasn't been as much planning or investigation that's gone into the application side and so thinking about how graphically intense some applications are one customer that comes to mind would be tyler isd who did a fairly large roll out pre-pandemic and as part of their big modernization effort what they uncovered was even just changes in standard windows applications had become so much more graphically intense with windows 10 with the latest updates with programs like adobe that they were really needing to have an accelerated experience for a much larger percentage of their install base than than they had counted on so in addition to planning for scale you also need to have that visibility into what are the actual applications that are going to be used by these remote users how graphically intense those might be what's the login experience going to be as well as the operating experience and so really planning through that experience side as well as the scale and the number of users uh is is kind of really two of the biggest most important things that i've seen yeah mark i'll i'll just jump in real quick i think you you covered that pretty comprehensively there and and it was well done the couple of observations i've made one is just that um vdi suddenly become like mission critical for sales it's the front line you know for schools it's the classroom you know that this isn't a cost cutting measure or a optimization nit measure anymore this is about running the business in a way it's a digital transformation one aspect of about a thousand aspects of what does it mean to completely change how your business does and i think what that translates to is that there's no margin for error right you really need to deploy this in a way that that performs that understands what you're trying to use it for that gives that end user the experience that they expect on their screen or on their handheld device or wherever they might be whether it's a racetrack classroom or on the other end of a conference call or a boardroom right so what we do in in the engineering side of things when it comes to vdi or really understand what's a tech worker what's a knowledge worker what's a power worker what's a gp really going to look like what's time of day look like you know who's using it in the morning who's using it in the evening when do you power up when do you power down does the system behave does it just have the it works function and what our clients can can get from hpe is um you know a worldwide set of experiences that we can apply to making sure that the solution delivers on its promises so we're seeing the same thing you are krista you know we see it all the time on vdi and on the way businesses are changing the way they do business yeah and it's funny because when i talk to customers you know one of the things i heard that was a good tip is to roll it out to small groups first so you could really get a good sense of what the experience is before you roll it out to a lot of other people and then the expertise it's not like every other workload that people have done before so if you're new at it make sure you're getting the right advice expertise so that you're doing it the right way okay one of the other things we've been talking a lot about today is digital transformation and moving to the edge so now i'd like to shift gears and talk a little bit about how we've helped customers make that shift and this time i'll start with chris all right hey thanks okay so you know it's funny when it comes to edge because um the edge is different for for every customer in every client and every single client that i've ever spoken to of hp's has an edge somewhere you know whether just like we were talking about the classroom might be the edge but but i think the industry when we're talking about edge is talking about you know the internet of things if you remember that term from not to not too long ago you know and and the fact that everything's getting connected and how do we turn that into um into telemetry and and i think mark's going to be able to talk through a couple of examples of clients that we have in things like racing and automotive but what we're learning about edge is it's not just how do you make the edge work it's how do you integrate the edge into what you're already doing and nobody's just the edge right and and so if it's if it's um ai mldl there's that's one way you want to use the edge if it's a customer experience point of service it's another you know there's yet another way to use the edge so it turns out that having a broad set of expertise like hpe does to be able to understand the different workloads that you're trying to tie together including the ones that are running at the at the edge often it involves really making sure you understand the data pipeline you know what information is at the edge how does it flow to the data center how does it flow and then which data center uh which private cloud which public cloud are you using i think those are the areas where where we really sort of shine is that we we understand the interconnectedness of these things and so for example red bull and i know you're going to talk about that in a minute mark um uh the racing company you know for them the the edge is the racetrack and and you know milliseconds or partial seconds winning and losing races but then there's also an edge of um workers that are doing the design for for the cars and how do they get quick access so um we have a broad variety of infrastructure form factors and compute form factors to help with the edge and this is another real advantage we have is that we we know how to put the right piece of equipment with the right software we also have great containerized software with our esmeral container platform so we're really becoming um a perfect platform for hosting edge-centric workloads and applications and data processing yeah it's uh all the way down to things like our superdome flex in the background if you have some really really really big data that needs to be processed and of course our workhorse proliance that can be configured to support almost every um combination of workload you have so i know you started with edge krista but but and we're and we nail the edge with those different form factors but let's make sure you know if you're listening to this this show right now um make sure you you don't isolate the edge and make sure they integrate it with um with the rest of your operation mark you know what did i miss yeah to that point chris i mean and this kind of actually ties the two things together that we've been talking about here but the edge uh has become more critical as we have seen more work moving to the edge as where we do work changes and evolves and the edge has also become that much more closer because it has to be that much more connected um to your point uh talking about where that edge exists that edge can be a lot of different places but the one commonality really is that the edge is is an area where work still needs to get accomplished it can't just be a collection point and then everything gets shipped back to a data center or back to some some other area for the work it's where the work actually needs to get done whether that's edge work in a use case like vdi or whether that's edge work in the case of doing real-time analytics you mentioned red bull racing i'll i'll bring that up i mean you talk about uh an area where time is of the essence everything about that sport comes down to time you're talking about wins and losses that are measured as you said in milliseconds and that applies not just to how performance is happening on the track but how you're able to adapt and modify the needs of the car uh adapt to the evolving conditions on the track itself and so when you talk about putting together a solution for an edge like that you're right it can't just be here's a product that's going to allow us to collect data ship it back someplace else and and wait for it to be processed in a couple of days you have to have the ability to analyze that in real time when we pull together a solution involving our compute products our storage products our networking products when we're able to deliver that full package solution at the edge what you see are results like a 50 decrease in processing time to make real-time analytic decisions about configurations for the car and adapting to to real-time uh test and track conditions yeah really great point there um and i really love the example of edge and racing because i mean that is where it all every millisecond counts um and so important to process that at the edge now switching gears just a little bit let's talk a little bit about some examples of how we've helped customers when it comes to business agility and optimizing their workload for maximum outcome for business agility let's talk about some things that we've done to help customers with that mark yeah give it a shot so when we when we think about business agility what you're really talking about is the ability to to implement on the fly to be able to scale up to scale down the ability to adapt to real time changing situations and i think the last year has been has been an excellent example of exactly how so many businesses have been forced to do that i think one of the areas that that i think we've probably seen the most ability to help with customers in that agility area is around the space of private and hybrid clouds if you take a look at the need that customers have to to be able to migrate workloads and migrate data between public cloud environments app development environments that may be hosted on-site or maybe in the cloud the ability to move out of development and into production and having the agility to then scale those application rollouts up having the ability to have some of that some of that private cloud flexibility in addition to a public cloud environment is something that is becoming increasingly crucial for a lot of our customers all right well i we could keep going on and on but i'll stop it there uh thank you so much uh chris and mark this has been a great discussion thanks for sharing how we helped other customers and some tips and advice for approaching these workloads i thank you all for joining us and remind you to look at the on-demand sessions if you want to double click a little bit more into what we've been covering all day today you can learn a lot more in those sessions and i thank you for your time thanks for tuning in today many thanks to krista chris and mark we really appreciate you joining today to share how hpe is partnering to facilitate new workload adoption of course with your customers on their path to digital transformation now to round out our accelerating next event today we have a series of on-demand sessions available so you can explore more details around every step of that digital transformation from building a solid infrastructure strategy identifying the right compute and software to rounding out your solutions with management and financial support so please navigate to the agenda at the top of the page to take a look at what's available i just want to close by saying that despite the rush to digital during the pandemic most businesses they haven't completed their digital transformations far from it 2020 was more like a forced march than a planful strategy but now you have some time you've adjusted to this new abnormal and we hope the resources that you find at accelerating next will help you on your journey best of luck to you and be well [Music] [Applause] [Music] [Applause] [Music] [Applause] [Music] [Applause] [Music] [Applause] [Music] [Applause] [Music] [Music] [Applause] [Music] [Applause] [Music] [Applause] so [Music] [Applause] [Music] you

Published Date : Apr 19 2021

SUMMARY :

and the thing too is that you know when

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
jim brickmeyerPERSON

0.99+

lisaPERSON

0.99+

antonioPERSON

0.99+

2020DATE

0.99+

millionsQUANTITY

0.99+

sixQUANTITY

0.99+

2006DATE

0.99+

two companiesQUANTITY

0.99+

alan jensenPERSON

0.99+

2022DATE

0.99+

46 percentQUANTITY

0.99+

denmarkLOCATION

0.99+

threeQUANTITY

0.99+

twoQUANTITY

0.99+

3 o'clockDATE

0.99+

windows 10TITLE

0.99+

10xQUANTITY

0.99+

mark nickersonPERSON

0.99+

germanyLOCATION

0.99+

30QUANTITY

0.99+

hawaiiLOCATION

0.99+

tomorrowDATE

0.99+

fifty percentQUANTITY

0.99+

50QUANTITY

0.99+

360-degreeQUANTITY

0.99+

100QUANTITY

0.99+

360 degreeQUANTITY

0.99+

chrisPERSON

0.99+

100 yearQUANTITY

0.99+

80QUANTITY

0.99+

austinLOCATION

0.99+

360 degreeQUANTITY

0.99+

8 000 reportsQUANTITY

0.99+

april last yearDATE

0.99+

kristaPERSON

0.99+

todayDATE

0.99+

yesterdayDATE

0.99+

krista satrathwaitePERSON

0.99+

january february 20DATE

0.99+

netherlands cancer instituteORGANIZATION

0.99+

last yearDATE

0.99+

fourQUANTITY

0.99+

five yearsQUANTITY

0.99+

amazonORGANIZATION

0.99+

john chambersPERSON

0.99+

windowsTITLE

0.99+

two-partQUANTITY

0.99+

ninth timeQUANTITY

0.99+

more than 1 500 storesQUANTITY

0.99+

verizonORGANIZATION

0.99+

three yearsQUANTITY

0.99+

johnPERSON

0.99+

neil mcdonaldPERSON

0.99+

a year agoDATE

0.99+

pat gelsingerPERSON

0.99+

netherlands instituteORGANIZATION

0.99+

markPERSON

0.99+

vinsanoORGANIZATION

0.99+

lisa suePERSON

0.99+

four years agoDATE

0.98+

pandemicEVENT

0.98+

two kindQUANTITY

0.98+

HPEORGANIZATION

0.98+

about 750 researchersQUANTITY

0.98+

two thingsQUANTITY

0.98+

50 percentQUANTITY

0.98+

14 monthsQUANTITY

0.98+

adobeTITLE

0.98+

Jamie Thomas, IBM | IBM Think 2020


 

Narrator: From theCUBE studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM. >> We're back. You're watching theCUBE and our coverage of IBM Think 2020, the digital IBM thinking. We're here with Jamie Thomas, who's the general manager of strategy and development for IBM Systems. Jamie, great to see you. >> It's great to see you as always. >> You have been knee deep in qubits, the last couple years. And we're going to talk quantum. We've talked quantum a lot in the past, but it's a really interesting field. We spoke to you last year at IBM Think about this topic. And a year in this industry is a long time, but so give us the update what's new in quantum land? >> Well, Dave first of all, I'd like to say that in this environment we find ourselves in, I think we can all appreciate why innovation of this nature is perhaps more important going forward, right? If we look at some of the opportunities to solve some of the unsolvable problems, or solve problems much more quickly, in the case of pharmaceutical research. But for us in IBM, it's been a really busy year. First of all, we worked to advance the technology, which is first and foremost in terms of this journey to quantum. We just brought online our 53 qubit computer, which also has a quantum volume of 32, which we can talk about. And we've continued to advance the software stack that's attached to the technology because you have to have both the software and the hardware thing, right rate and pace. We've advanced our new network, which you and I have spoken about, which are those individuals across the commercial enterprises, academic and startups, who are working with us to co-create around quantum to help us understand the use cases that really can be solved in the future with quantum. And we've also continued to advance our community, which is serving as well in this new digital world that we're finding ourselves in, in terms of reaching out to developers. Now, we have over 300,000 unique downloads of the programming model that represents the developers that we're touching out there every day with quantum. These developers have, in the last year, have run over 140 billion quantum circuits. So, our machines in the cloud are quite active, and the cloud model, of course, is serving us well. The data's, in addition, to all the other things that I mentioned. >> So Jamie, what metrics are you trying to optimize on? You mentioned 53 qubits I saw that actually came online, I think, last fall. So you're nearly six months in now, which is awesome. But what are you measuring? Are you measuring stability or coherence or error rates? Number of qubits? What are the things that you're trying to optimize on to measure progress? >> Well, that's a good question. So we have this metric that we've defined over the last year or two called quantum volume. And quantum volume 32, which is the capacity of our current machine really is a representation of many of the things that you mentioned. It represents the power of the quantum machine, if you will. It includes a definition of our ability to provide error correction, to maintain states, to really accomplish workloads with the computer. So there's a number of factors that go into quantum volume, which we think are important. Now, qubits and the number of qubits is just one such metric. It really depends on the coherence and the effect of error correction, to really get the value out of the machine, and that's a very important metric. >> Yeah, we love to boil things down to a single metric. It's more complicated than that >> Yeah, yeah. >> specifically with quantum. So, talk a little bit more about what clients are doing and I'm particularly interested in the ecosystem that you're forming around quantum. >> Well, as I said, the ecosystem is both the network, which are those that are really intently working with us to co-create because we found, through our long history in IBM, that co-creation is really important. And also these researchers and developers realize that some of our developers today are really researchers, but as you as you go forward you get many different types of developers that are part of this mix. But in terms of our ecosystem, we're really fundamentally focused on key problems around chemistry, material science, financial services. And over the last year, there's over 200 papers that have been written out there from our network that really embody their work with us on this journey. So we're looking at things like quadratic speed up of things like Monte Carlo simulation, which is used in the financial services arena today to quantify risk. There's papers out there around topics like trade settlements, which in the world today trade settlements is a very complex domain with very interconnected complex rules and trillions of dollars in the purview of trade settlement. So, it's just an example. Options pricing, so you see examples around options pricing from corporations like JPMC in the area of financial services. And likewise in chemistry, there's a lot of research out there focused on batteries. As you can imagine, getting everything to electric powered batteries is an important topic. But today, the way we manufacture batteries can in fact create air pollution, in terms of the process, as well as we want batteries to have more retention in life to be more effective in energy conservation. So, how do we create batteries and still protect our environment, as we all would like to do? And so we've had a lot of research around things like the next generation of electric batteries, which is a key topic. But if you can think, you know Dave, there's so many topics here around chemistry, also pharmaceuticals that could be advanced with a quantum computer. Obviously, if you look at the COVID-19 news, our supercomputer that we installed at Oak Ridge National Laboratory for instance, is being used to analyze 8000 different compounds for specifically around COVID-19 and the possibilities of using those compounds to solve COVID-19, or influence it in a positive manner. You can think of the quantum computer when it comes online as an accelerator to a supercomputer like that, helping speed up this kind of research even faster than what we're able to do with something like the Summit supercomputer. Oak Ridge is one of our prominent clients with the quantum technology, and they certainly see it that way, right, as an accelerator to the capacity they already have. So a great example that I think is very germane in the time that we find ourselves in. >> How 'about startups in this ecosystem? Are you able to-- I mean there must be startups popping up all over the place for this opportunity. Are you working with any startups or incubating any startups? Can you talk about that? >> Oh yep. Absolutely. There's about a third of our network are in VC startups and there's a long list of them out there. They're focused on many different aspects of quantum computing. Many of 'em are focused on what I would call loosely, the programming model, looking at improving algorithms across different industries, making it easier for those that are, perhaps more skilled in domains, whether that is chemistry or financial services or mathematics, to use the power of the quantum computer. Many of those startups are leveraging our Qiskit, our quantum information science open programming model that we put out there so it's open. Many of the startups are using that programming model and then adding their own secret sauce, if you will, to understand how they can help bring on users in different ways. So it depends on their domain. You see some startups that are focused on the hardware as well, of course, looking at different hardware technologies that can be used to solve quantum. I would say I feel like more of them are focused on the software programming model. >> Well Jamie, it was interesting hear you talk about what some of the clients are doing. I mean obviously in pharmaceuticals, and battery manufacturers do a lot of advanced R and D, but you mentioned financial services, you know JPMC. It's almost like they're now doing advanced R and D trying to figure out how they can apply quantum to their business down the road. >> Absolutely, and we have a number of financial institutions that we've announced as part of the network. JPMC is just one of our premiere references who have written papers about it. But I would tell you that in the world of Monte Carlo simulation, options pricing, risk management, a small change can make a big difference in dollars. So we're talking about operations that in many cases they could achieve, but not achieve in the right amount of time. The ability to use quantum as an accelerator for these kind of operations is very important. And I can tell you, even in the last few weeks, we've had a number of briefings with financial companies for five hours on this topic. Looking at what could they do and learning from the work that's already done out there. I think this kind of advanced research is going to be very important. We also had new members that we announced at the beginning of the year at the CES show. Delta Airlines joined. First Transportation Company, Amgen joined, a pharmaceutical, an example of pharmaceuticals, as well as a number of other research organizations. Georgia Tech, University of New Mexico, Anthem Insurance, just an example of the industries that are looking to take advantage of this kind of technology as it matures. >> Well, and it strikes me too, that as you start to bring machine intelligence into the equation, it's a game changer. I mean, I've been saying that it's not Moore's Law driving the industry anymore, it's this combination of data, AI, and cloud for scale, but now-- Of course there are alternative processors going on, we're seeing that, but now as you bring in quantum that actually adds to that innovation cocktail, doesn't it? >> Yes, and as you recall when you and I spoke last year about this, there are certain domains today where you really cannot get as much effective gain out of classical computing. And clearly, chemistry is one of those domains because today, with classical computers, we're really unable to model even something as simple as a caffeine molecule, which we're all so very familiar with. I have my caffeine here with me today. (laughs) But you know, clearly, to the degree we can actually apply molecular modeling and the advantages that quantum brings to those fields, we'll be able to understand so much more about materials that affect all of us around the world, about energy, how to explore energy, and create energy without creating the carbon footprint and the bad outcomes associated with energy creation, and how to obviously deal with pharmaceutical creation much more effectively. There's a real promise in a lot of these different areas. >> I wonder if you could talk a little bit about some of the landscape and I'm really interested in what IBM brings to the table that's sort of different. You're seeing a lot of companies enter this space, some big and many small, what's the unique aspect that IBM brings to the table? You've mentioned co-creating before. Are you co-creating, coopertating with some of the other big guys? Maybe you could address that. >> Well, obviously this is a very hot topic, both within the technology industry and across government entities. I think that some of the key values we bring to the table is we are the only vendor right now that has a fleet of systems available in the cloud, and we've been out there for several years, enabling clients to take advantage of our capacity. We have both free access and premium access, which is what the network is paying for because they get access to the highest fidelity machines. Clearly, we understand intently, classical computing and the ability to leverage classical with quantum for advantage across many of these different industries, which I think is unique. We understand the cloud experience that we're bringing to play here with quantum since day one, and most importantly, I think we have strong relationships. We have, in many cases, we're still running the world. I see it every day coming through my clients' port vantage point. We understand financial services. We understand healthcare. We understand many of these important domains, and we're used to solving tough problems. So, we'll bring that experience with our clients and those industries to the table here and help them on this journey. >> You mentioned your experience in sort of traditional computing, basically if I understand it correctly, you're still using traditional silicon microprocessors to read and write the data that's coming out of quantum. I don't know if they're sitting physically side by side, but you've got this big cryogenic unit, cables coming in. That's the sort of standard for some time. It reminds me, can it go back to ENIAC? And now, which is really excites me because you look at the potential to miniaturize this over the next several decades, but is that right, you're sort of side by side with traditional computing approaches? >> Right, effectively what we do with quantum today does not happen without classical computers. The front end, you're coming in on classical computers. You're storing your data on classical computers, so that is the model that we're in today, and that will continue to happen. In terms of the quantum processor itself, it is a silicon based processor, but it's a superconducting technology, in our case, that runs inside that cryogenics unit at a very cold temperature. It is powered by next-generation electronics that we in IBM have innovated around and created our own electronic stack that actually sends microwave pulses into the processor that resides in the cryogenics unit. So when you think about the components of the system, you have to be innovating around the processor, the cryogenics unit, the custom electronic stack, and the software all at the same time. And yes, we're doing that in terms of being surrounded by this classical backplane that allows our Q network, as well as the developers around the world to actually communicate with these systems. >> The other thing that I really like about this conversation is it's not just R and D for the sake of R and D, you've actually, you're working with partners to, like you said, co-create, customers, financial services, airlines, manufacturing, et cetera. I wonder if you could maybe kind of address some of the things that you see happening in the sort of near to midterm, specifically as it relates to where people start. If I'm interested in this, what do I do? Do I need new skills? Do I need-- It's in the cloud, right? >> Yeah. >> So I can spit it up there, but where do people get started? >> Well they can certainly come to the Quantum Experience, which is our cloud experience and start to try out the system. So, we have both easy ways to get started with visual composition of circuits, as well as using the programming model that I mentioned, the Qiskit programming model. We've provided extensive YouTube videos out there already. So, developers who are interested in starting to learn about quantum can go out there and subscribe to our YouTube channel. We've got over 40 assets already recorded out there, and we continue to do those. We did one last week on quantum circuits for those that are more interested in that particular domain, but I think that's a part of this journey is making sure that we have all the assets out there digitally available for those around the world that want to interact with us. We have tremendous amount of education. We're also providing education to our business partners. One of our key network members, who I'll be speaking with later, I think today, is from Accenture. Accenture's an example of an organization that's helping their clients understand this quantum journey, and of course they're providing their own assets, if you will, but once again, taking advantage of the education that we're providing to them as a business partner. >> People talk about quantum being a decade away, but I think that's the wrong way to think about it, and I'd love your thoughts on this. It feels like, almost like the return coming out of COVID-19, it's going to come in waves, and there's parts that are going to be commercialized thoroughly and it's not binary. It's not like all of a sudden one day we're going to wake, "Hey, quantum is here!" It's really going to come in layers. Your thoughts? >> Yeah, I definitely agree with that. It's very important, that thought process because if you want to be competitive in your industry, you should think about getting started now. And that's why you see so many financial services, industrial firms, and others joining to really start experimentation around some of these domain areas to understand jointly how we evolve these algorithms to solve these problems. I think that the production level characteristics will curate the rate and pace of the industry. The industry, as we know, can drive things together faster. So together, we can make this a reality faster, and certainly none of us want to say it's going to be a decade, right. I mean, we're getting advantage today, in terms of the experimentation and the understanding of these problems, and we have to expedite that, I think, in the next few years. And certainly, with this arms race that we see, that's going to continue. One of the things I didn't mention is that IBM is also working with certain countries and we have significant agreements now with the countries of Germany and Japan to put quantum computers in an IBM facility in those countries. It's in collaboration with Fraunhofer Institute or miR Scientific Organization in Germany and with the University of Tokyo in Japan. So you can see that it's not only being pushed by industry, but it's also being pushed from the vantage of countries and bringing this research and technology to their countries. >> All right, Jamie, we're going to have to leave it there. Thanks so much for coming on theCUBE and give us the update. It's always great to see you. Hopefully, next time I see you, it'll be face to face. >> That's right, I hope so too. It's great to see you guys, thank you. Bye. >> All right, you're welcome. Keep it right there everybody. This is Dave Vellante for theCUBE. Be back right after this short break. (gentle music)

Published Date : May 5 2020

SUMMARY :

brought to you by IBM. the digital IBM thinking. We spoke to you last year at in the future with quantum. What are the things that you're trying of many of the things that you mentioned. things down to a single metric. interested in the ecosystem in the time that we find ourselves in. all over the place for this opportunity. Many of the startups are to their business down the road. just an example of the that actually adds to that and the bad outcomes associated of the other big guys? and the ability to leverage That's the sort of standard for some time. so that is the model that we're in today, in the sort of near to midterm, and subscribe to our YouTube channel. that are going to be One of the things I didn't It's always great to see you. It's great to see you guys, thank you. Be back right after this short break.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

Dave VellantePERSON

0.99+

Jamie ThomasPERSON

0.99+

JamiePERSON

0.99+

Fraunhofer InstituteORGANIZATION

0.99+

GermanyLOCATION

0.99+

University of New MexicoORGANIZATION

0.99+

AccentureORGANIZATION

0.99+

Georgia TechORGANIZATION

0.99+

JPMCORGANIZATION

0.99+

First Transportation CompanyORGANIZATION

0.99+

five hoursQUANTITY

0.99+

DavePERSON

0.99+

JapanLOCATION

0.99+

AmgenORGANIZATION

0.99+

Delta AirlinesORGANIZATION

0.99+

BostonLOCATION

0.99+

Palo AltoLOCATION

0.99+

Anthem InsuranceORGANIZATION

0.99+

Monte CarloTITLE

0.99+

last yearDATE

0.99+

miR Scientific OrganizationORGANIZATION

0.99+

University of TokyoORGANIZATION

0.99+

53 qubitsQUANTITY

0.99+

Oak RidgeORGANIZATION

0.99+

last fallDATE

0.99+

YouTubeORGANIZATION

0.99+

oneQUANTITY

0.99+

COVID-19OTHER

0.99+

8000 different compoundsQUANTITY

0.99+

ENIACORGANIZATION

0.99+

over 200 papersQUANTITY

0.99+

trillions of dollarsQUANTITY

0.99+

53 qubitQUANTITY

0.99+

bothQUANTITY

0.98+

CESEVENT

0.98+

OneQUANTITY

0.98+

todayDATE

0.98+

single metricQUANTITY

0.97+

32QUANTITY

0.97+

firstQUANTITY

0.96+

FirstQUANTITY

0.96+

IBM ThinkORGANIZATION

0.95+

over 40 assetsQUANTITY

0.94+

twoQUANTITY

0.94+

IBM SystemsORGANIZATION

0.93+

over 140 billion quantum circuitsQUANTITY

0.93+

a yearQUANTITY

0.93+

last couple yearsDATE

0.92+

over 300,000 unique downloadsQUANTITY

0.92+

Oak Ridge National LaboratoryORGANIZATION

0.89+

one such metricQUANTITY

0.87+

nearly six monthsQUANTITY

0.87+

Keynote | Red Hat Summit 2019 | DAY 2 Morning


 

>> Ladies and gentlemen, please welcome Red Hat President Products and Technologies. Paul Cormier. Boring. >> Welcome back to Boston. Welcome back. And welcome back after a great night last night of our opening with with Jim and talking to certainly saw ten Jenny and and especially our customers. It was so great last night to hear our customers in how they set their their goals and how they met their goals. All possible because certainly with a little help from red hat, but all possible because of because of open source. And, you know, sometimes we have to all due that has set goals. And I'm going to talk this morning about what we as a company and with community, have set for our goals along the way. And sometimes you have to do that. You know, audacious goals. It can really change the perception of what's even possible. And, you know, if I look back, I can't think of anything, at least in my lifetime, that's more important. Or such a big golden John F. Kennedy setting the gold to the American people to go to the moon. I believe it or not, I was really, really only three years old when he said that, honestly. But as I grew up, I remember the passion around the whole country and the energy to make that goal a reality. So let's sort of talk about in compare and contrast, a little bit of where we are technically at that time, you know, tto win and to beat and winning the space race and even get into the space race. There was some really big technical challenges along the way. I mean, believe it or not. Not that long ago. But even But back then, math Malik mathematical calculations were being shifted from from brilliant people who we trusted, and you could look in the eye to A to a computer that was programmed with the results that were mostly printed out. This this is a time where the potential of computers was just really coming on the scene and, at the time, the space race at the time of space race it. It revolved around an IBM seventy ninety, which was one of the first transistor based computers. It could perform mathematical calculations faster than even the most brilliant mathematicians. But just like today, this also came with many, many challenges And while we had the goal of in the beginning of the technique and the technology to accomplish it, we needed people so dedicated to that goal that they would risk everything. And while it may seem commonplace to us today to trust, put our trust in machines, that wasn't the case. Back in nineteen sixty nine, the seven individuals that made up the Mercury Space crew were putting their their lives in the hands of those first computers. But on Sunday, July twentieth, nineteen sixty nine, these things all came together. The goal, the technology in the team and a human being walked on the moon. You know, if this was possible fifty years ago, just think about what Khun B. Accomplished today, where technology is part of our everyday lives. And with technology advances at an ever increasing rate, it's hard to comprehend the potential that sitting right at our fingertips every single day, everything you know about computing is continuing to change. Today, let's look a bit it back. A computing In nineteen sixty nine, the IBM seventy ninety could process one hundred thousand floating point operations per second, today's Xbox one that sitting in most of your living rooms probably can process six trillion flops. That's sixty million times more powerful than the original seventy ninety that helped put a human being on the moon. And at the same time that computing was, that was drastically changed. That this computing has drastically changed. So have the boundaries of where that computing sits and where it's been where it lives. At the time of the Apollo launch, the computing power was often a single machine. Then it moved to a single data center, and over time that grew to multiple data centers. Then with cloud, it extended all the way out to data centers that you didn't even own or have control of. But but computing now reaches far beyond any data center. This is also referred to as the edge. You hear a lot about that. The Apollo's, the Apollo's version of the Edge was the guidance system, a two megahertz computer that weighed seventy pounds embedded in the capsule. Today, today the edge is right here on my wrist. This apple watch weighs just a couple of ounces, and it's ten ten thousand times more powerful than that seventy ninety back in nineteen sixty nine But even more impactful than computing advances, combined with the pervasive availability of it, are the changes and who in what controls those that similar to social changes that have happened along the way. Shifting from mathematicians to computers, we're now facing the same type of changes with regards to operational control of our computing power. In its first forms. Operational control was your team, your team within your control? In some cases, a single person managed everything. But as complexity grows, our team's expanded, just like in the just like in the computing boundaries, system integrators and public cloud providers have become an extension of our team. But at the end of the day, it's still people that are still making all the decisions going forward with the progress of things like a I and software defined everything. It's quite likely that machines will be managing machines, and in many cases that's already happening today. But while the technology at our finger tips today is so impressive, the pace of changing complexity of the problems we aspire to solve our equally hard to comprehend and they are all intertwined with one another learning from each other, growing together faster and faster. We are tackling problems today on a global scale with unsinkable complexity beyond anyone beyond what any one single company or even one single country Khun solve alone. This is why open source is so important. This is why open source is so needed today in software. This is why open sources so needed today, even in the world, to solve other types of complex problems. And this is why open source has become the dominant development model which is driving the technology direction. Today is to bring two brother to bring together the best innovation from every corner of the planet. Toe fundamentally change how we solve problems. This approach and access the innovation is what has enabled open source To tackle The challenge is big challenges, like creating the hybrid cloud like building a truly open hybrid cloud. But even today it's really difficult to bridge the gap of the innovation. It's available in all in all of our fingertips by open source development, while providing the production level capabilities that are needed to really dip, ploy this in the enterprise and solve RIA world business problems. Red Hat has been committed to open source from the very, very beginning and bringing it to solve enterprise class problems for the last seventeen plus years. But when we built that model to bring open source to the enterprise, we absolutely knew we couldn't do it halfway tow harness the innovation. We had to fully embrace the model. We made a decision very early on. Give everything back and we live by that every single day. We didn't do crazy crazy things like you hear so many do out there. All this is open corps or everything below. The line is open and everything above the line is closed. We didn't do that, and we gave everything back Everything we learned in the process of becoming an enterprise class technology company. We gave it all of that back to the community to make better and better software. This is how it works. And we've seen the results of that. We've all seen the results of that and it could only have been possible within open source development model we've been building on the foundation of open source is most successful Project Lennox in the architecture of the future hybrid and bringing them to the Enterprise. This is what made Red Hat, the company that we are today and red hats journey. But we also had the set goals, and and many of them seemed insert insurmountable at the time, the first of which was making Lennox the Enterprise standard. And while this is so accepted today, let's take a look at what it took to get there. Our first launch into the Enterprise was rail two dot one. Yes, I know we two dot one, but we knew we couldn't release a one dato product. We knew that and and we didn't. But >> we didn't want to >> allow any reason why anyone of any customer anyone shouldn't should look past rail to solve their problems as an option. Back then, we had to fight every single flavor of Unix in every single account. But we were lucky to have a few initial partners and Big Eyes v partners that supported Rehl out of the gate. But while we had the determination, we knew we also had gaps in order to deliver on our on our priorities. In the early days of rail, I remember going to ask one of our engineers for a past rehl build because we were having a customer issue on it on an older release. And then I watched in horror as he rifled through his desk through a mess of CDs and magically came up and said, I found it here It is told me not to worry that the build this was he thinks this was the bill. This was the right one, and at that point I knew that despite the promise of Lennox, we had a lot of work ahead of us. The not only convinced the world that Lennox was secure, stable, an enterprise ready, but also to make that a reality. But we did. And today this is our reality. It's all of our reality. From the Enterprise Data Center standard to the fastest computers on the planet, Red Hat Enterprise, Lennox has continually risen to the challenge and has become the core foundation that many mission critical customers run and bet their business on. And an even bigger today Lennox is the foundation of which practically every single technology initiative is built upon. Lennox is not only standard toe build on today, it's the standard for innovation that builds around it. That's the innovation that's driving the future as well. We started our story with rail two dot one, and here we are today, seventeen years later, announcing rally as we did as we did last night. It's specifically designed for applications to run across the open hybrid. Clyde Cloud. Railed has become the best operating simp system for on premise all the way out to the cloud, providing that common operating model and workload foundation on which to build hybrid applications. Let's take it. Let's take a look at how far we've come and see this in action. >> Please welcome Red Hat Global director of developer experience, burst Sutter with Josh Boyer, Timothy Kramer, Lars Carl, it's Key and Brent Midwood. All right, we have some amazing things to show you. In just a few short moments, we actually have a lot of things to show you. And actually, Tim and Brandt will be with us momentarily. They're working out a few things in the back because we have a lot of this is gonna be a live demonstration, some incredible capabilities. Now you're going to see clear innovation inside the operating system where we worked incredibly hard to make it vast cities. You're free to manage many, many machines. I want you thinking about that as we go to this process. Now, also, keep in mind that this is the basis our core platform for everything we do here. Red hat. So it is an honor for me to be able to show it to you live on stage today. And so I recognize the many of you in the audience right now. Her hand's on systems administrators, systems, architect, citizens, engineers. And we know that you're under ever growing pressure to deliver needed infrastructure. Resource is ever faster, and that is a key element to what you're thinking about every day. Well, this has been a core theme, and our design decisions find red Odd Enterprise Lennox eight and intelligent operating system, which is making it fundamentally easier for you manage machines that scale. So hold what you're about to see next. Feels like a new superpower and and that redhead azure force multiplier. So first, let me introduce you to a large. He's totally my limits guru. >> I wouldn't call myself a girl, but I I guess you could say that I want to bring Lennox and light meant to more people. >> Okay, Well, let's let's dive in. And we're not about the clinic's eight. >> Sure. Let me go. And Morgan, >> wait a >> second. There's windows. >> Yeah, way Build the weft Consul into Really? That means that for the first time, you can log in from any device including your phone or this standard windows laptop. So you just go ahead and and to my Saturday lance credentials here. >> Okay, so now >> you're putting >> your limits password and over the web. >> Yeah, that might sound a bit scary at first, but of course, we're using the latest security tech by T. L s on dh csp on. Because that's the standard Lennox off site. You can use everything that you used to like a stage keys, OTP, tokens and stuff like this. >> Okay, so now I see the council right here. I love the dashboard overview of the system, but what else can you tell us about this council? >> Right? Like right here. You see the load of the system, some some of its properties. But you can also dive into logs everything that you're used to from the command line, right? Or lookit, services. This's all the services I've running, can start and stuff them and enable >> OK, I love that feature right there. So what about if I have to add a whole new application to this environment? >> Good that you're bringing that up. We build a new future into hell called application streams. Which the way for you to install different versions of your half stack that are supported I'LL show you with Youngmin a command line. But since Windows doesn't have a proper terminal, I'll just do it in the terminal that we built into the Web console Since the browser, I can even make this a bit bigger. Go to, for example, to see the application streams that we have for Poskus. Ijust do module list and I see you know we have ten and nine dot six Both supported tennis a default on defy enable ninety six Now the next time that I installed prescribes it will pull all their lady towards from them at six. >> Ok, so this is very cool. I see two verses of post Chris right here What tennis to default. That is fantastic and the application streams making that happen. But I'm really kind of curious, right? I loved using know js and Java. So what about multiple versions of those? >> Yeah, that's exactly the idea way. Want to keep up with the fast moving ecosystems off programming language? Isn't it a business? >> Okay, now, But I have another key question. I know some people were thinking it right now. What about Python? >> Yeah. In fact, in a minimum and still like this, python gives you command. Not fact. Just have to type it correctly. You can't just install which everyone you want two or three or whichever your application needs. >> Okay, Well, that is I've been burned on that one before. Okay, so no actual. Have a confession for all you guys. Right here. You guys keep this amongst yourselves. Don't let Paul No, I'm actually not a linnet systems administrator. I'm an application developer, an application architect, And I recently had to go figure out how to extend the file system. This is for real. And I'm going to the rat knowledge base and looking up things like, you know, PV create VD, extend resized to f s. And I have to admit, that's hard, >> right? I've opened the storage space for you right here, where you see an overview of your storage. And the council has made for people like you as well not only for people that I knew that when you two lunatics, right? It's if you're running, you're running some of the commands only, you know, some of the time you don't remember them. So, for example, I haven't felt twosome here. That's a little bit too small. Let me just throw it. It's like, you know, dragging this lighter. It calls all the command in the background for you. >> Oh, that is incredible. Is that simple? Just drag and drop. That is fantastic. Well, so I actually, you know, we'll have another question for you. It looks like now this linen systems administration is no longer a dark heart involving arcane commands typed into a black terminal. Like using when those funky ergonomic keyboards you know I'm talking about right? Do >> you know a lot of people, including me and people in the audience like that dark out right? And this is not taking any of that away. It's on additional tool to bring limits to more people. >> Okay, well, that is absolute fantastic. Thank you so much for that Large. And I really love him installing everything is so much easier, including a post gra seeker and, of course, the python that we saw right there. So now I want to change gears for a second because I actually have another situation that I'm always dealing with. And that is every time I want to build a new Lenox system, not only I don't want to have to install those commands again and again, it feels like I'm doing it over and over. So, Josh, how would I create a golden image? One VM image that can use and we have everything pre baked in? >> Yeah, absolutely. But >> we get that question all the time. So really includes image builder technology. Image builder technology is actually all of our hybrid cloud operating system image tools that we use to build our own images and rolled up in a nice, easy to integrate new system. So if I come here in the web console and I go to our image builder tab, it brings us to blueprints, right? Blueprints or what we used to actually control it goes into our golden image. Uh, and I heard you and Lars talking about post present python. So I went and started typing here. So it brings us to this page, but you could go to the selected components, and you can see here I've created a blueprint that has all the python and post press packages in it. Ah, and the interesting thing about this is it build on our existing kickstart technology. But you can use it to deploy that whatever cloud you want. And it's saved so that you don't actually have to know all the various incantations from Amazon toe azure to Google, whatever it's all baked in on. When you do this, you can actually see the dependencies that get brought in as well. Okay. Should we create one life? Yes, please. All right, cool. So if we go back to the blueprints page and we click create blueprint Let's, uh let's make a developer brute blueprint here. So we click great, and you can see here on the left hand side. I've got all of my content served up by Red Hat satellite. We have a lot of great stuff, and really, But we can go ahead and search. So we'LL look for post grows and you know, it's a developer image at the client for some local testing. Um, well, come in here and at the python bits. Probably the development package. We need a compiler if we're going to actually build anything. So look for GCC here and hey, what's your favorite editor? >> A Max, Of course, >> Max. All right. Hey, Lars, about you. I'm more of a person. You Maxim v I All right, Well, if you want to prevent a holy war in your system, you can actually use satellite to filter that out. But we're going to go ahead and Adam Ball, sweetie, I'm a fight on stage. So wait, just point and click. Let the graphical one. And then when we're all done, we just commit our changes, and our image is ready to build. >> Okay, So this VM image we just created right now from that blueprint this is now I can actually go out there and easily deploys of deploy this across multiple cloud providers. And as well as this on stage are where we have right now. >> Yeah, absolutely. We can to play on Amazon as your google any any infrastructure you're looking for so you can really hit your Clyburn hybrid cloud operating system images. >> Okay. All right, listen, we >> just go on, click, create image. Uh, we can select our different types here. I'm gonna go ahead and create a local VM because it's available image, and maybe they want to pass it around or whatever, and I just need a few moments for it to build. >> Okay? So while that's taking a few moments, I know there's another key question in the minds of the audience right now, and you're probably thinking I love what I see. What Right eye right hand Priceline say. But >> what does it >> take to upgrade from seven to eight? So large can you show us and walk us through an upgrade? >> Sure, this's my little Thomas Block that I set up. It's powered by what Chris and secrets over, but it's still running on seven six. So let's upgrade that jump over to my house fee on satellite on. You see all my relate machines here, including the one I showed you what Consul on before. And there is that one with my sun block and there's a couple others. Let me select those as well. This one on that one. Just go up here. Schedule remote job. And she was really great. And hit Submit. I made it so that it makes the booms national before. So if anything was wrong Kans throwback! >> Okay, okay, so now it's progressing. Here, >> it's progressing. Looks like it's running. Doing >> live upgrade on stage. Uh, >> seems like one is failing. What's going on here? Okay, we checked the tree of great Chuck. Oh, yeah, that's the one I was playing around with Butter fest backstage. What? Detective that and you know, it doesn't run the Afghan cause we don't support operating that. >> Okay, so what I'm hearing now? So the good news is, we were protected from possible failed upgrade there, So it sounds like these upgrades are perfectly safe. Aiken, basically, you know, schedule this during a maintenance window and still get some sleep. >> Totally. That's the idea. >> Okay, fantastic. All right. So it looks like upgrades are easy and perfectly safe. And I really love what you showed us there. It's good point. Click operation right from satellite. Ok, so Well, you know, we were checking out upgrades. I want to know Josh. How those v ems coming along. >> They went really well. So you were away for so long. I got a little bored and I took some liberties. >> What do you mean? >> Well, the image Bill And, you know, I decided I'm going to go ahead and deploy here to this Intel machine on stage Esso. I have that up and running in the web. Counsel. I built another one on the arm box, which is actually pretty fast, and that's up and running on this. Our machine on that went so well that I decided to spend up some an Amazon. So I've got a few instances here running an Amazon with the web console accessible there as well. On even more of our pre bill image is up and running an azure with the web console there. So the really cool thing about this bird is that all of these images were built with image builder in a single location, controlling all the content that you want in your golden images deployed across the hybrid cloud. >> Wow, that is fantastic. And you might think that so we actually have more to show you. So thank you so much for that large. And Josh, that is fantastic. Looks like provisioning bread. Enterprise Clinic Systems ate a redhead. Enterprise Enterprise. Rhetta Enterprise Lennox. Eight Systems is Asian ever before, but >> we have >> more to talk to you about. And there's one thing that many of the operations professionals in this room right now no, that provisioning of'em is easy, but it's really day two day three, it's down the road that those viens required day to day maintenance. As a matter of fact, several you folks right now in this audience to have to manage hundreds, if not thousands, of virtual machines I recently spoke to. Gentleman has to manage thirteen hundred servers. So how do you manage those machines? A great scale. So great that they have now joined us is that it looks like they worked things out. So now I'm curious, Tim. How will we manage hundreds, if not thousands, of computers? >> Welbourne, one human managing hundreds or even thousands of'em says, No problem, because we have Ansel automation. And by leveraging Ansel's integration into satellite, not only can we spin up those V em's really quickly, like Josh was just doing, but we can also make ongoing maintenance of them really simple. Come on up here. I'm going to show you here a satellite inventory and his red hat is publishing patches. Weaken with that danceable integration easily apply those patches across our entire fleet of machines. Okay, >> that is fantastic. So he's all the machines can get updated in one fell swoop. >> He sure can. And there's one thing that I want to bring your attention to today because it's brand new. And that's cloud that red hat dot com And here, a cloud that redhead dot com You can view and manage your entire inventory no matter where it sits. Of Redhead Enterprise Lennox like on Prem on stage. Private Cloud or Public Cloud. It's true Hybrid cloud management. >> OK, but one thing. One thing. I know that in the minds of the audience right now. And if you have to manage a large number servers this it comes up again and again. What happens when you have those critical vulnerabilities that next zero day CV could be tomorrow? >> Exactly. I've actually been waiting for a while patiently for you >> to get to the really good stuff. So >> there's one more thing that I wanted to let folks know about. Red Hat Enterprise. The >> next eight and some features that we have there. Oh, >> yeah? What is that? >> So, actually, one of the key design principles of relate is working with our customers over the last twenty years to integrate all the knowledge that we've gained and turn that into insights that we can use to keep our red hat Enterprise Lennox servers running securely, inefficiently. And so what we actually have here is a few things that we could take a look at show folks what that is. >> OK, so we basically have this new feature. We're going to show people right now. And so one thing I want to make sure it's absolutely included within the redhead enterprise in that state. >> Yes. Oh, that's Ah, that's an announcement that we're making this week is that this is a brand new feature that's integrated with Red Hat Enterprise clinics, and it's available to everybody that has a red hat enterprise like subscription. So >> I believe everyone in this room right now has a rail subscriptions, so it's available to all of them. >> Absolutely, absolutely. So let's take a quick look and try this out. So we actually have. Here is a list of about six hundred rules. They're configuration security and performance rules. And this is this list is growing every single day, so customers can actually opt in to the rules that are most that are most applicable to their enterprises. So what we're actually doing here is combining the experience and knowledge that we have with the data that our customers opt into sending us. So customers have opted in and are sending us more data every single night. Then they actually have in total over the last twenty years via any other mechanism. >> Now there's I see now there's some critical findings. That's what I was talking about. But it comes to CVS and things that nature. >> Yeah, I'm betting that those air probably some of the rail seven boxes that we haven't actually upgraded quite yet. So we get back to that. What? I'd really like to show everybody here because everybody has access to this is how easy it is to opt in and enable this feature for real. Okay, let's do that real quick, so I gotta hop back over to satellite here. This is the satellite that we saw before, and I'll grab one of the hosts and we can use the new Web console feature that's part of Railly, and via single sign on I could jump right from satellite over to the Web console. So it's really, really easy. And I'LL grab a terminal here and registering with insights is really, really easy. Is one command troops, and what's happening right now is the box is going to gather some data. It's going to send it up to the cloud, and within just a minute or two, we're gonna have some results that we can look at back on the Web interface. >> I love it so it's just a single command and you're ready to register this box right now. That is super easy. Well, that's fantastic, >> Brent. We started this whole series of demonstrations by telling the audience that Red Hat Enterprise Lennox eight was the easiest, most economical and smartest operating system on the planet, period. And well, I think it's cute how you can go ahead and captain on a single machine. I'm going to show you one more thing. This is Answerable Tower. You can use as a bell tower to managing govern your answerable playbook, usage across your entire organization and with this. What I could do is on every single VM that was spun up here today. Opt in and register insights with a single click of a button. >> Okay, I want to see that right now. I know everyone's waiting for it as well, But hey, you're VM is ready. Josh. Lars? >> Yeah. My clock is running a little late now. Yeah, insights is a really cool feature >> of rail. And I've got it in all my images already. All >> right, I'm doing it all right. And so as this playbook runs across the inventory, I can see the machines registering on cloud that redhead dot com ready to be managed. >> OK, so all those onstage PM's as well as the hybrid cloud VM should be popping in IRC Post Chris equals Well, fantastic. >> That's awesome. Thanks to him. Nothing better than a Red Hat Summit speaker in the first live demo going off script deal. Uh, let's go back and take a look at some of those critical issues affecting a few of our systems here. So you can see this is a particular deanna's mask issue. It's going to affect a couple of machines. We saw that in the overview, and I can actually go and get some more details about what this particular issue is. So if you take a look at the right side of the screen there, there's actually a critical likelihood an impact that's associated with this particular issue. And what that really translates to is that there's a high level of risk to our organization from this particular issue. But also there's a low risk of change. And so what that means is that it's really, really safe for us to go ahead and use answerable to mediate this so I can grab the machines will select those two and we're mediate with answerable. I can create a new playbook. It's our maintenance window, but we'LL do something along the lines of like stuff Tim broke and that'LL be our cause. We name it whatever we want. So we'Ll create that playbook and take a look at it, and it's actually going to give us some details about the machines. You know what, what type of reboots Efendi you're going to be needed and what we need here. So we'LL go ahead and execute the playbook and what you're going to see is the outputs goingto happen in real time. So this is happening from the cloud were affecting machines. No matter where they are, they could be on Prem. They could be in a hybrid cloud, a public cloud or in a private cloud. And these things are gonna be remediated very, very easily with answerable. So it's really, really awesome. Everybody here with a red hat. Enterprise licks Lennox subscription has access to this now, so I >> kind of want >> everybody to go try this like, we really need to get this thing going and try it out right now. But >> don't know, sent about the room just yet. You get stay here >> for okay, Mr. Excitability, I think after this keynote, come back to the red hat booth and there's an optimization section. You can come talk to our insights engineers. And even though it's really easy to get going on your own, they can help you out. Answer any questions you might have. So >> this is really the start of a new era with an intelligent operating system and beauty with intelligence you just saw right now what insights that troubles you. Fantastic. So we're enabling systems administrators to manage more red in private clinics, a greater scale than ever before. I know there's a lot more we could show you, but we're totally out of time at this point, and we kind of, you know, when a little bit sideways here moments. But we need to get off the stage. But there's one thing I want you guys to think about it. All right? Do come check out the in the booth. Like Tim just said also in our debs, Get hands on red and a prize winning state as well. But really, I want you to think about this one human and a multitude of servers. And if you remember that one thing asked you upfront. Do you feel like you get a new superpower and redhead? Is your force multiplier? All right, well, thank you so much. Josh and Lars, Tim and Brent. Thank you. And let's get Paul back on stage. >> I went brilliant. No, it's just as always, >> amazing. I mean, as you can tell from last night were really, really proud of relate in that coming out here at the summit. And what a great way to showcase it. Thanks so much to you. Birth. Thanks, Brent. Tim, Lars and Josh. Just thanks again. So you've just seen this team demonstrate how impactful rail Khun b on your data center. So hopefully hopefully many of you. If not all of you have experienced that as well. But it was super computers. We hear about that all the time, as I just told you a few minutes ago, Lennox isn't just the foundation for enterprise and cloud computing. It's also the foundation for the fastest super computers in the world. In our next guest is here to tell us a lot more about that. >> Please welcome Lawrence Livermore National Laboratory. HPC solution Architect Robin Goldstone. >> Thank you so much, Robin. >> So welcome. Welcome to the summit. Welcome to Boston. And thank thank you so much for coming for joining us. Can you tell us a bit about the goals of Lawrence Livermore National Lab and how high high performance computing really works at this level? >> Sure. So Lawrence Livermore National >> Lab was established during the Cold War to address urgent national security needs by advancing the state of nuclear weapons, science and technology and high performance computing has always been one of our core capabilities. In fact, our very first supercomputer, ah Univac one was ordered by Edward Teller before our lab even opened back in nineteen fifty two. Our mission has evolved since then to cover a broad range of national security challenges. But first and foremost, our job is to ensure the safety, security and reliability of the nation's nuclear weapons stockpile. Oh, since the US no longer performs underground nuclear testing, our ability to certify the stockpile depends heavily on science based science space methods. We rely on H P C to simulate the behavior of complex weapons systems to ensure that they can function as expected, well beyond their intended life spans. That's actually great. >> So are you really are still running on that on that Univac? >> No, Actually, we we've moved on since then. So Sierra is Lawrence Livermore. Its latest and greatest supercomputer is currently the Seconds spastic supercomputer in the world and for the geeks in the audience, I think there's a few of them out there. We put up some of the specs of Syrah on the screen behind me, a couple of things worth highlighting our Sierra's peak performance and its power utilisation. So one hundred twenty five Pata flops of performance is equivalent to about twenty thousand of those Xbox one excess that you mentioned earlier and eleven point six megawatts of power required Operate Sierra is enough to power around eleven thousand homes. Syria is a very large and complex system, but underneath it all, it starts out as a collection of servers running Lin IX and more specifically, rail. >> So did Lawrence. Did Lawrence Livermore National Lab National Lab used Yisrael before >> Sierra? Oh, yeah, most definitely. So we've been running rail for a very long time on what I'll call our mid range HPC systems. So these clusters, built from commodity components, are sort of the bread and butter of our computer center. And running rail on these systems provides us with a continuity of operations and a common user environment across multiple generations of hardware. Also between Lawrence Livermore in our sister labs, Los Alamos and Sandia. Alongside these commodity clusters, though, we've always had one sort of world class supercomputer like Sierra. Historically, these systems have been built for a sort of exotic proprietary hardware running entirely closed source operating systems. Anytime something broke, which was often the Vander would be on the hook to fix it. And you know, >> that sounds >> like a good model, except that what we found overtime is most the issues that we have on these systems were either due to the extreme scale or the complexity of our workloads. Vendors seldom had a system anywhere near the size of ours, and we couldn't give them our classified codes. So their ability to reproduce our problem was was pretty limited. In some cases, they've even sent an engineer on site to try to reproduce our problems. But even then, sometimes we wouldn't get a fix for months or else they would just tell us they weren't going to fix the problem because we were the only ones having it. >> So for many of us, for many of us, the challenges is one of driving reasons for open source, you know, for even open source existing. How has how did Sierra change? Things are on open source for >> you. Sure. So when we developed our technical requirements for Sierra, we had an explicit requirement that we want to run an open source operating system and a strong preference for rail. At the time, IBM was working with red hat toe add support Terrell for their new little Indian power architecture. So it was really just natural for them to bid a red. A rail bay system for Sierra running Raylan Cyril allows us to leverage the model that's worked so well for us for all this time on our commodity clusters any packages that we build for X eighty six, we can now build those packages for power as well as our market texture using our internal build infrastructure. And while we have a formal support relationship with IBM, we can also tap our in house colonel developers to help debug complex problems are sys. Admin is Khun now work on any of our systems, including Sierra, without having toe pull out their cheat sheet of obscure proprietary commands. Our users get a consistent software environment across all our systems. And if the security vulnerability comes out, we don't have to chase around getting fixes from Multan slo es fenders. >> You know, you've been able, you've been able to extend your foundation from all the way from X eighty six all all the way to the extract excess Excuse scale supercomputing. We talk about giving customers all we talked about it all the time. A standard operational foundation to build upon. This isn't This isn't exactly what we've envisioned. So So what's next for you >> guys? Right. So what's next? So Sierra's just now going into production. But even so, we're already working on the contract for our next supercomputer called El Capitan. That's scheduled to be delivered the Lawrence Livermore in the twenty twenty two twenty timeframe. El Capitan is expected to be about ten times the performance of Sierra. I can't share any more details about that system right now, but we are hoping that we're going to be able to continue to build on a solid foundation. That relish provided us for well over a decade. >> Well, thank you so much for your support of realm over the years, Robin. And And thank you so much for coming and tell us about it today. And we can't wait to hear more about El Capitan. Thank you. Thank you very much. So now you know why we're so proud of realm. And while you saw confetti cannons and T shirt cannons last night, um, so you know, as as burned the team talked about the demo rail is the force multiplier for servers. We've made Lennox one of the most powerful platforms in the history of platforms. But just as Lennox has become a viable platform with access for everyone, and rail has become viable, more viable every day in the enterprise open source projects began to flourish around the operating system. And we needed to bring those projects to our enterprise customers in the form of products with the same trust models as we did with Ralph seeing the incredible progress of software development occurring around Lennox. Let's let's lead us to the next goal that we said tow, tow ourselves. That goal was to make hybrid cloud the default enterprise for the architecture. How many? How many of you out here in the audience or are Cesar are? HC sees how many out there a lot. A lot. You are the people that our building the next generation of computing the hybrid cloud, you know, again with like just like our goals around Lennox. This goals might seem a little daunting in the beginning, but as a community we've proved it time and time again. We are unstoppable. Let's talk a bit about what got us to the point we're at right right now and in the work that, as always, we still have in front of us. We've been on a decade long mission on this. Believe it or not, this mission was to build the capabilities needed around the Lenox operating system to really build and make the hybrid cloud. When we saw well, first taking hold in the enterprise, we knew that was just taking the first step. Because for a platform to really succeed, you need applications running on it. And to get those applications on your platform, you have to enable developers with the tools and run times for them to build, to build upon. Over the years, we've closed a few, if not a lot of those gaps, starting with the acquisition of J. Boss many years ago, all the way to the new Cuban Eddie's native code ready workspaces we launched just a few months back. We realized very early on that building a developer friendly platform was critical to the success of Lennox and open source in the enterprise. Shortly after this, the public cloud stormed onto the scene while our first focus as a company was done on premise in customer data centers, the public cloud was really beginning to take hold. Rehl very quickly became the standard across public clouds, just as it was in the enterprise, giving customers that common operating platform to build their applications upon ensuring that those applications could move between locations without ever having to change their code or operating model. With this new model of the data center spread across so many multiple environments, management had to be completely re sought and re architected. And given the fact that environments spanned multiple locations, management, real solid management became even more important. Customers deploying in hybrid architectures had to understand where their applications were running in how they were running, regardless of which infrastructure provider they they were running on. We invested over the years with management right alongside the platform, from satellite in the early days to cloud forms to cloud forms, insights and now answerable. We focused on having management to support the platform wherever it lives. Next came data, which is very tightly linked toe applications. Enterprise class applications tend to create tons of data and to have a common operating platform foyer applications. You need a storage solutions. That's Justus, flexible as that platform able to run on premise. Just a CZ. Well, as in the cloud, even across multiple clouds. This let us tow acquisitions like bluster, SEF perma bitch in Nubia, complimenting our Pratt platform with red hat storage for us, even though this sounds very condensed, this was a decade's worth of investment, all in preparation for building the hybrid cloud. Expanding the portfolio to cover the areas that a customer would depend on to deploy riel hybrid cloud architectures, finding any finding an amplifying the right open source project and technologies, or filling the gaps with some of these acquisitions. When that necessarily wasn't available by twenty fourteen, our foundation had expanded, but one big challenge remained workload portability. Virtual machine formats were fragmented across the various deployments and higher level framework such as Java e still very much depended on a significant amount of operating system configuration and then containers happened containers, despite having a very long being in existence for a very long time. As a technology exploded on the scene in twenty fourteen, Cooper Netease followed shortly after in twenty fifteen, allowing containers to span multiple locations and in one fell swoop containers became the killer technology to really enable the hybrid cloud. And here we are. Hybrid is really the on ly practical reality in way for customers and a red hat. We've been investing in all aspects of this over the last eight plus years to make our customers and partners successful in this model. We've worked with you both our customers and our partners building critical realm in open shift deployments. We've been constantly learning about what has caused problems and what has worked well in many cases. And while we've and while we've amassed a pretty big amount of expertise to solve most any challenge in in any area that stack, it takes more than just our own learning's to build the next generation platform. Today we're also introducing open shit for which is the culmination of those learnings. This is the next generation of the application platform. This is truly a platform that has been built with our customers and not simply just with our customers in mind. This is something that could only be possible in an open source development model and just like relish the force multiplier for servers. Open shift is the force multiplier for data centers across the hybrid cloud, allowing customers to build thousands of containers and operate them its scale. And we've also announced open shift, and we've also announced azure open shift. Last night. Satya on this stage talked about that in depth. This is all about extending our goals of a common operating platform enabling applications across the hybrid cloud, regardless of whether you run it yourself or just consume it as a service. And with this flagship release, we are also introducing operators, which is the central, which is the central feature here. We talked about this work last year with the operator framework, and today we're not going to just show you today. We're not going to just show you open shift for we're going to show you operators running at scale operators that will do updates and patches for you, letting you focus more of your time and running your infrastructure and running running your business. We want to make all this easier and intuitive. So let's have a quick look at how we're doing. Just that >> painting. I know all of you have heard we're talking to pretend to new >> customers about the travel out. So new plan. Just open it up as a service been launched by this summer. Look, I know this is a big quest for not very big team. I'm open to any and all ideas. >> Please welcome back to the stage. Red Hat Global director of developer Experience burst Sutter with Jessica Forrester and Daniel McPherson. All right, we're ready to do some more now. Now. Earlier we showed you read Enterprise Clinic St running on lots of different hardware like this hardware you see right now And we're also running across multiple cloud providers. But now we're going to move to another world of Lennox Containers. This is where you see open shift four on how you can manage large clusters of applications from eggs limits containers across the hybrid cloud. We're going to see this is where suffer operators fundamentally empower human operators and especially make ups and Deb work efficiently, more efficiently and effectively there together than ever before. Rights. We have to focus on the stage right now. They're represent ops in death, and we're gonna go see how they reeled in application together. Okay, so let me introduce you to Dan. Dan is totally representing all our ops folks in the audience here today, and he's telling my ops, comfort person Let's go to call him Mr Ops. So Dan, >> thanks for with open before, we had a much easier time setting up in maintaining our clusters. In large part, that's because open shit for has extended management of the clusters down to the infrastructure, the diversity kinds of parent. When you take >> a look at the open ship console, >> you can now see the machines that make up the cluster where machine represents the infrastructure. Underneath that Cooper, Eddie's node open shit for now handles provisioning Andy provisioning of those machines. From there, you could dig into it open ship node and see how it's configured and monitor how it's behaving. So >> I'm curious, >> though it does this work on bare metal infrastructure as well as virtualized infrastructure. >> Yeah, that's right. Burn So Pa Journal nodes, no eternal machines and open shit for can now manage it all. Something else we found extremely useful about open ship for is that it now has the ability to update itself. We can see this cluster hasn't update available and at the press of a button. Upgrades are responsible for updating. The entire platform includes the nodes, the control plane and even the operating system and real core arrests. All of this is possible because the infrastructure components and their configuration is now controlled by technology called operators. Thes software operators are responsible for aligning the cluster to a desired state. And all of this makes operational management of unopened ship cluster much simpler than ever before. All right, I >> love the fact that all that's been on one console Now you can see the full stack right all way down to the bare metal right there in that one console. Fantastic. So I wanted to scare us for a moment, though. And now let's talk to Deva, right? So Jessica here represents our all our developers in the room as my facts. He manages a large team of developers here Red hat. But more importantly, she represents our vice president development and has a large team that she has to worry about on a regular basis of Jessica. What can you show us? We'LL burn My team has hundreds of developers and were constantly under pressure to deliver value to our business. And frankly, we can't really wait for Dan and his ops team to provisioned the infrastructure and the services that we need to do our job. So we've chosen open shift as our platform to run our applications on. But until recently, we really struggled to find a reliable source of Cooper Netease Technologies that have the operational characteristics that Dan's going to actually let us install through the cluster. But now, with operator, How bio, we're really seeing the V ecosystem be unlocked. And the technology's there. Things that my team needs, its databases and message cues tracing and monitoring. And these operators are actually responsible for complex applications like Prometheus here. Okay, they're written in a variety of languages, danceable, but that is awesome. So I do see a number of options there already, and preaches is a great example. But >> how do you >> know that one? These operators really is mature enough and robust enough for Dan and the outside of the house. Wilbert, Here we have the operator maturity model, and this is going to tell me and my team whether this particular operator is going to do a basic install if it's going to upgrade that application over time through different versions or all the way out to full auto pilot, where it's automatically scaling and tuning the application based on the current environment. And it's very cool. So coming over toothy open shift Consul, now we can actually see Dan has made the sequel server operator available to me and my team. That's the database that we're using. A sequel server. That's a great example. So cynics over running here in the cluster? But this is a great example for a developer. What if I want to create a new secret server instance? Sure, we're so it's as easy as provisioning any other service from the developer catalog. We come in and I can type for sequel server on what this is actually creating is, ah, native resource called Sequel Server, and you can think of that like a promise that a sequel server will get created. The operator is going to see that resource, install the application and then manage it over its life cycle, KAL, and from this install it operators view, I can see the operators running in my project and which resource is its managing Okay, but I'm >> kind of missing >> something here. I see this custom resource here, the sequel server. But where the community's resource is like pods. Yeah, I think it's cool that we get this native resource now called Sequel Server. But if I need to, I can still come in and see the native communities. Resource is like your staple set in service here. Okay, that is fantastic. Now, we did say earlier on, though, like many of our customers in the audience right now, you have a large team of engineers. Lost a large team of developers you gotta handle. You gotta have more than one secret server, right? We do one for every team as we're developing, and we use a lot of other technologies running on open shift as well, including Tomcat and our Jenkins pipelines and our dough js app that is gonna actually talk to that sequel server database. Okay, so this point we can kind of provisions, Some of these? Yes. Oh, since all of this is self service for me and my team's, I'm actually gonna go and create one of all of those things I just said on all of our projects, right Now, if you just give me a minute, Okay? Well, right. So basically, you're going to knock down No Jazz Jenkins sequel server. All right, now, that's like hundreds of bits of application level infrastructure right now. Live. So, Dan, are you not terrified? Well, I >> guess I should have done a little bit better >> job of managing guests this quota and historically just can. I might have had some conflict here because creating all these new applications would admit my team now had a massive back like tickets to work on. But now, because of software operators, my human operators were able to run our infrastructure at scale. So since I'm long into the cluster here as the cluster admin, I get this view of pods across all projects. And so I get an idea of what's happening across the entire cluster. And so I could see now we have four hundred ninety four pods already running, and there's a few more still starting up. And if I scroll to the list, we can see the different workloads Jessica just mentioned of Tomcats. And no Gs is And Jenkins is and and Siegel servers down here too, you know, I see continues >> creating and you have, like, close to five hundred pods running >> there. So, yeah, filters list down by secret server, so we could just see. Okay, But >> aren't you not >> running going around a cluster capacity at some point? >> Actually, yeah, we we definitely have a limited capacity in this cluster. And so, luckily, though, we already set up auto scale er's And so because the additional workload was launching, we see now those outer scholars have kicked in and some new machines are being created that don't yet have noticed. I'm because they're still starting up. And so there's another good view of this as well, so you can see machine sets. We have one machine set per availability zone, and you could see the each one is now scaling from ten to twelve machines. And the way they all those killers working is for each availability zone, they will. If capacities needed, they will add additional machines to that availability zone and then later effect fast. He's no longer needed. It will automatically take those machines away. >> That is incredible. So right now we're auto scaling across multiple available zones based on load. Okay, so looks like capacity planning and automation is fully, you know, handle this point. But I >> do have >> another question for year logged in. Is the cluster admin right now into the console? Can you show us your view of >> operator suffer operators? Actually, there's a couple of unique views here for operators, for Cluster admits. The first of those is operator Hub. This is where a cluster admin gets the ability to curate the experience of what operators are available to users of the cluster. And so obviously we already have the secret server operator installed, which which we've been using. The other unique view is operator management. This gives a cluster I've been the ability to maintain the operators they've already installed. And so if we dig in and see the secret server operator, well, see, we haven't set up for manual approval. And what that means is if a new update comes in for a single server, then a cluster and we would have the ability to approve or disapprove with that update before installs into the cluster, we'LL actually and there isn't upgrade that's available. Uh, I should probably wait to install this, though we're in the middle of scaling out this cluster. And I really don't want to disturb Jessica's application. Workflow. >> Yeah, so, actually, Dan, it's fine. My app is already up. It's running. Let me show it to you over here. So this is our products application that's talking to that sequel server instance. And for debugging purposes, we can see which version of sequel server we're currently talking to. Its two point two right now. And then which pod? Since this is a cluster, there's more than one secret server pod we could be connected to. Okay, I could see right there the bounder screeners they know to point to. That's the version we have right now. But, you know, >> this is kind of >> point of software operators at this point. So, you know, everyone in this room, you know, wants to see you hit that upgrade button. Let's do it. Live here on stage. Right, then. All >> right. All right. I could see where this is going. So whenever you updated operator, it's just like any other resource on communities. And so the first thing that happens is the operator pot itself gets updated so we actually see a new version of the operator is currently being created now, and what's that gets created, the overseer will be terminated. And that point, the new, softer operator will notice. It's now responsible for managing lots of existing Siegel servers already in the environment. And so it's then going Teo update each of those sickle servers to match to the new version of the single server operator and so we could see it's running. And so if we switch now to the all projects view and we filter that list down by sequel server, then we should be able to see us. So lots of these sickle servers are now being created and the old ones are being terminated. So is the rolling update across the cluster? Exactly a So the secret server operator Deploy single server and an H A configuration. And it's on ly updates a single instance of secret server at a time, which means single server always left in nature configuration, and Jessica doesn't really have to worry about downtime with their applications. >> Yeah, that's awesome dance. So glad the team doesn't have to worry about >> that anymore and just got I think enough of these might have run by Now, if you try your app again might be updated. >> Let's see Jessica's application up here. All right. On laptop three. >> Here we go. >> Fantastic. And yet look, we're We're into two before we're onto three. Now we're on to victory. Excellent on. >> You know, I actually works so well. I don't even see a reason for us to leave this on manual approval. So I'm going to switch this automatic approval. And then in the future, if a new single server comes in, then we don't have to do anything, and it'll be all automatically updated on the cluster. >> That is absolutely fantastic. And so I was glad you guys got a chance to see that rolling update across the cluster. That is so cool. The Secret Service database being automated and fully updated. That is fantastic. Alright, so I can see how a software operator doesn't able. You don't manage hundreds if not thousands of applications. I know a lot of folks or interest in the back in infrastructure. Could you give us an example of the infrastructure >> behind this console? Yeah, absolutely. So we all know that open shift is designed that run in lots of different environments. But our teams think that as your redhead over, Schiff provides one of the best experiences by deeply integrating the open chief Resource is into the azure console, and it's even integrated into the azure command line toll and the easy open ship man. And, as was announced yesterday, it's now available for everyone to try out. And there's actually one more thing we wanted to show Everyone related to open shit, for this is all so new with a penchant for which is we now have multi cluster management. This gives you the ability to keep track of all your open shift environments, regardless of where they're running as well as you can create new clusters from here. And I'll dig into the azure cluster that we were just taking a look at. >> Okay, but is this user and face something have to install them one of my existing clusters? >> No, actually, this is the host of service that's provided by Red hat is part of cloud that redhead that calm and so all you have to do is log in with your red hair credentials to get access. >> That is incredible. So one console, one user experience to see across the entire hybrid cloud we saw earlier with Red update. Right and red embers. Thank Satan. Now we see it for multi cluster management. But home shift so you can fundamentally see. Now the suffer operators do finally change the game when it comes to making human operators vastly more productive and, more importantly, making Devon ops work more efficiently together than ever before. So we saw the rich ice vehicle system of those software operators. We can manage them across the Khyber Cloud with any, um, shift instance. And more importantly, I want to say Dan and Jessica for helping us with this demonstration. Okay, fantastic stuff, guys. Thank you so much. Let's get Paul back out here >> once again. Thanks >> so much to burn his team. Jessica and Dan. So you've just seen how open shift operators can help you manage hundreds, even thousands of applications. Install, upgrade, remove nodes, control everything about your application environment, virtual physical, all the way out to the cloud making, making things happen when the business demands it even at scale, because that's where it's going to get. Our next guest has lots of experience with demand at scale. and they're using open source container management to do it. Their work, their their their work building a successful cloud, First platform and there, the twenty nineteen Innovation Award winner. >> Please welcome twenty nineteen Innovation Award winner. Cole's senior vice president of technology, Rich Hodak. >> How you doing? Thanks. >> Thanks so much for coming out. We really appreciate it. So I guess you guys set some big goals, too. So can you baby tell us about the bold goal? Helped you personally help set for Cole's. And what inspired you to take that on? Yes. So it was twenty seventeen and life was pretty good. I had no gray hair and our business was, well, our tech was working well, and but we knew we'd have to do better into the future if we wanted to compete. Retails being disrupted. Our customers are asking for new experiences, So we set out on a goal to become an open hybrid cloud platform, and we chose Red had to partner with us on a lot of that. We set off on a three year journey. We're currently in Year two, and so far all KP eyes are on track, so it's been a great journey thus far. That's awesome. That's awesome. So So you Obviously, Obviously you think open source is the way to do cloud computing. So way absolutely agree with you on that point. So So what? What is it that's convinced you even more along? Yeah, So I think first and foremost wait, do we have a lot of traditional IAS fees? But we found that the open source partners actually are outpacing them with innovation. So I think that's where it starts for us. Um, secondly, we think there's maybe some financial upside to going more open source. We think we can maybe take some cost out unwind from these big fellas were in and thirdly, a CZ. We go to universities. We started hearing. Is we interviewed? Hey, what is Cole's doing with open source and way? Wanted to use that as a lever to help recruit talent. So I'm kind of excited, you know, we partner with Red Hat on open shift in in Rail and Gloucester and active M Q and answerable and lots of things. But we've also now launched our first open source projects. So it's really great to see this journey. We've been on. That's awesome, Rich. So you're in. You're in a high touch beta with with open shift for So what? What features and components or capabilities are you most excited about and looking forward to what? The launch and you know, and what? You know what? What are the something maybe some new goals that you might be able to accomplish with with the new features. And yeah, So I will tell you we're off to a great start with open shift. We've been on the platform for over a year now. We want an innovation award. We have this great team of engineers out here that have done some outstanding work. But certainly there's room to continue to mature that platform. It calls, and we're excited about open shift, for I think there's probably three things that were really looking forward to. One is we're looking forward to, ah, better upgrade process. And I think we saw, you know, some of that in the last demo. So upgrades have been kind of painful up until now. So we think that that that will help us. Um, number two, A lot of our open shift workloads today or the workloads. We run an open shifts are the stateless apse. Right? And we're really looking forward to moving more of our state full lapse into the platform. And then thirdly, I think that we've done a great job of automating a lot of the day. One stuff, you know, the provisioning of, of things. There's great opportunity o out there to do mohr automation for day two things. So to integrate mohr with our messaging systems in our database systems and so forth. So we, uh we're excited. Teo, get on board with the version for wear too. So, you know, I hope you, Khun, we can help you get to the next goals and we're going to continue to do that. Thank you. Thank you so much rich, you know, all the way from from rail toe open shift. It's really exciting for us, frankly, to see our products helping you solve World War were problems. What's you know what? Which is. Really? Why way do this and and getting into both of our goals. So thank you. Thank you very much. And thanks for your support. We really appreciate it. Thanks. It has all been amazing so far and we're not done. A critical part of being successful in the hybrid cloud is being successful in your data center with your own infrastructure. We've been helping our customers do that in these environments. For almost twenty years now, we've been running the most complex work loads in the world. But you know, while the public cloud has opened up tremendous possibilities, it also brings in another type of another layer of infrastructure complexity. So what's our next goal? Extend your extend your data center all the way to the edge while being as effective as you have been over the last twenty twenty years, when it's all at your own fingertips. First from a practical sense, Enterprises air going to have to have their own data centers in their own environment for a very long time. But there are advantages of being able to manage your own infrastructure that expand even beyond the public cloud all the way out to the edge. In fact, we talked about that very early on how technology advances in computer networking is storage are changing the physical boundaries of the data center every single day. The need, the need to process data at the source is becoming more and more critical. New use cases Air coming up every day. Self driving cars need to make the decisions on the fly. In the car factory processes are using a I need to adapt in real time. The factory floor has become the new edge of the data center, working with things like video analysis of a of A car's paint job as it comes off the line, where a massive amount of data is on ly needed for seconds in order to make critical decisions in real time. If we had to wait for the video to go up to the cloud and back, it would be too late. The damage would have already been done. The enterprise is being stretched to be able to process on site, whether it's in a car, a factory, a store or in eight or nine PM, usually involving massive amounts of data that just can't easily be moved. Just like these use cases couldn't be solved in private cloud alone because of things like blatant see on data movement, toe address, real time and requirements. They also can't be solved in public cloud alone. This is why open hybrid is really the model that's needed in the only model forward. So how do you address this class of workload that requires all of the above running at the edge? With the latest technology all its scale, let me give you a bit of a preview of what we're working on. We are taking our open hybrid cloud technologies to the edge, Integrated with integrated with Aro AM Hardware Partners. This is a preview of a solution that will contain red had open shift self storage in K V M virtual ization with Red Hat Enterprise Lennox at the core, all running on pre configured hardware. The first hardware out of the out of the gate will be with our long time. Oh, am partner Del Technologies. So let's bring back burn the team to see what's right around the corner. >> Please welcome back to the stage. Red Hat. Global director of developer Experience burst Sutter with Kareema Sharma. Okay, We just how was your Foreign operators have redefined the capabilities and usability of the open hybrid cloud, and now we're going to show you a few more things. Okay, so just be ready for that. But I know many of our customers in this audience right now, as well as the customers who aren't even here today. You're running tens of thousands of applications on open chef clusters. We know that disappearing right now, but we also know that >> you're not >> actually in the business of running terminators clusters. You're in the business of oil and gas from the business retail. You're in a business transportation, you're in some other business and you don't really want to manage those things at all. We also know though you have lo latest requirements like Polish is talking about. And you also dated gravity concerns where you >> need to keep >> that on your premises. So what you're about to see right now in this demonstration is where we've taken open ship for and made a bare metal cluster right here on this stage. This is a fully automated platform. There is no underlying hyper visor below this platform. It's open ship running on bare metal. And this is your crew vanities. Native infrastructure, where we brought together via mes containers networking and storage with me right now is green mush arma. She's one of her engineering leaders responsible for infrastructure technologies. Please welcome to the stage, Karima. >> Thank you. My pleasure to be here, whether it had summit. So let's start a cloud. Rid her dot com and here we can see the classroom Dannon Jessica working on just a few moments ago From here we have a bird's eye view ofthe all of our open ship plasters across the hybrid cloud from multiple cloud providers to on premises and noticed the spare medal last year. Well, that's the one that my team built right here on this stage. So let's go ahead and open the admin console for that last year. Now, in this demo, we'LL take a look at three things. A multi plaster inventory for the open Harbor cloud at cloud redhead dot com. Second open shift container storage, providing convert storage for virtual machines and containers and the same functionality for cloud vert and bare metal. And third, everything we see here is scuba unit is native, so by plugging directly into communities, orchestration begin common storage. Let working on monitoring facilities now. Last year, we saw how continue native actualization and Q Bert allow you to run virtual machines on Cabinet is an open shift, allowing for a single converge platform to manage both containers and virtual machines. So here I have this dark net project now from last year behead of induced virtual machine running it S P darknet application, and we had started to modernize and continue. Arise it by moving. Parts of the application from the windows began to the next containers. So let's take a look at it here. I have it again. >> Oh, large shirt, you windows. Earlier on, I was playing this game back stage, so it's just playing a little solitaire. Sorry about that. >> So we don't really have time for that right now. Birds. But as I was saying, Over here, I have Visions Studio Now the window's virtual machine is just another container and open shift and the i d be service for the virtual machine. It's just another service in open shift open shifts. Running both containers and virtual machines together opens a whole new world of possibilities. But why stop there? So this here be broadened to come in. It is native infrastructure as our vision to redefine the operation's off on premises infrastructure, and this applies to all matters of workloads. Using open shift on metal running all the way from the data center to the edge. No by your desk, right to main benefits. Want to help reduce the operation casts And second, to help bring advance good when it is orchestration concept to your infrastructure. So next, let's take a look at storage. So open shift container storage is software defined storage, providing the same functionality for both the public and the private lads. By leveraging the operator framework, open shift container storage automatically detects the available hardware configuration to utilize the discs in the most optimal vein. So then adding my note, you don't have to think about how to balance the storage. Storage is just another service running an open shift. >> And I really love this dashboard quite honestly, because I love seeing all the storage right here. So I'm kind of curious, though. Karima. What kind of storage would you What, What kind of applications would you use with the storage? >> Yeah, so this is the persistent storage. To be used by a database is your files and any data from applications such as a Magic Africa. Now the A Patrick after operator uses school, been at this for scheduling and high availability, and it uses open shift containers. Shortest. Restore the messages now Here are on premises. System is running a caf co workload streaming sensor data on DH. We want toe sort it and act on it locally, right In a minute. A place where maybe we need low latency or maybe in a data lake like situation. So we don't want to send the starter to the cloud. Instead, we want to act on it locally, right? Let's look at the griffon a dashboard and see how our system is doing so with the incoming message rate of about four hundred messages for second, the system seems to be performing well, right? I want to emphasize this is a fully integrated system. We're doing the testing An optimization sze so that the system can Artoo tune itself based on the applications. >> Okay, I love the automated operations. Now I am a curious because I know other folks in the audience want to know this too. What? Can you tell us more about how there's truly integrated communities can give us an example of that? >> Yes. Again, You know, I want to emphasize everything here is managed poorly by communities on open shift. Right. So you can really use the latest coolest to manage them. All right. Next, let's take a look at how easy it is to use K native with azure functions to script alive Reaction to a live migration event. >> Okay, Native is a great example. If actually were part of my breakout session yesterday, you saw me demonstrate came native. And actually, if you want to get hands on with it tonight, you can come to our guru night at five PM and actually get hands on like a native. So I really have enjoyed using K. Dated myself as a software developer. And but I am curious about the azure functions component. >> Yeah, so as your functions is a function is a service engine developed by Microsoft fully open source, and it runs on top of communities. So it works really well with our on premises open shift here. Right now, I have a simple azure function that I already have here and this azure function, you know, Let's see if this will send out a tweet every time we live My greater Windows virtual machine. Right. So I have it integrated with open shift on DH. Let's move a note to maintenance to see what happens. So >> basically has that via moves. We're going to see the event triggered. They trigger the function. >> Yeah, important point I want to make again here. Windows virtue in machines are equal citizens inside of open shift. We're investing heavily in automation through the use of the operator framework and also providing integration with the hardware. Right, So next, Now let's move that note to maintain it. >> But let's be very clear here. I wanna make sure you understand one thing, and that is there is no underlying virtual ization software here. This is open ship running on bear. Meddle with these bare metal host. >> That is absolutely right. The system can automatically discover the bare metal hosts. All right, so here, let's move this note to maintenance. So I start them Internets now. But what will happen at this point is storage will heal itself, and communities will bring back the same level of service for the CAFTA application by launching a part on another note and the virtual machine belive my great right and this will create communities events. So we can see. You know, the events in the event stream changes have started to happen. And as a result of this migration, the key native function will send out a tweet to confirm that could win. It is native infrastructure has indeed done the migration for the live Ian. Right? >> See the events rolling through right there? >> Yeah. All right. And if we go to Twitter? >> All right, we got tweets. Fantastic. >> And here we can see the source Nord report. Migration has succeeded. It's a pretty cool stuff right here. No. So we want to bring you a cloud like experience, but this means is we're making operational ease a fuse as a top goal. We're investing heavily in encapsulating management knowledge and working to pre certify hardware configuration in working with their partners such as Dell, and they're dead already. Note program so that we can provide you guidance on specific benchmarks for specific work loads on our auto tuning system. >> All right, well, this is tow. I know right now, you're right thing, and I want to jump on the stage and check out the spare metal cluster. But you should not right. Wait After the keynote didn't. Come on, check it out. But also, I want you to go out there and think about visiting our partner Del and their booth where they have one. These clusters also. Okay, So this is where vmc networking and containers the storage all come together And a Kurban in his native infrastructure. You've seen right here on this stage, but an agreement. You have a bit more. >> Yes. So this is literally the cloud coming down from the heavens to us. >> Okay? Right here, Right now. >> Right here, right now. So, to close the loop, you can have your plaster connected to cloud redhead dot com for our insights inside reliability engineering services so that we can proactively provide you with the guidance through automated analyses of telemetry in logs and help flag a problem even before you notice you have it Beat software, hardware, performance, our security. And one more thing. I want to congratulate the engineers behind the school technology. >> Absolutely. There's a lot of engineers here that worked on this cluster and worked on the stack. Absolutely. Thank you. Really awesome stuff. And again do go check out our partner Dale. They're just out that door I can see them from here. They have one. These clusters get a chance to talk to them about how to run your open shift for on a bare metal cluster as well. Right, Kareema, Thank you so much. That was totally awesome. We're at a time, and we got to turn this back over to Paul. >> Thank you. Right. >> Okay. Okay. Thanks >> again. Burned, Kareema. Awesome. You know, So even with all the exciting capabilities that you're seeing, I want to take a moment to go back to the to the first platform tenant that we learned with rail, that the platform has to be developer friendly. Our next guest knows something about connecting a technology like open shift to their developers and part of their company. Wide transformation and their ability to shift the business that helped them helped them make take advantage of the innovation. Their Innovation award winner this year. Please, Let's welcome Ed to the stage. >> Please welcome. Twenty nineteen. Innovation Award winner. BP Vice President, Digital transformation. Ed Alford. >> Thanks, Ed. How your fake Good. So was full. Get right into it. What we go you guys trying to accomplish at BP and and How is the goal really important in mandatory within your organization? Support on everyone else were global energy >> business, with operations and over seventy countries. Andi. We've embraced what we call the jewel challenge, which is increasing the mind for energy that we have as individuals in the world. But we need to produce the energy with fuel emissions. It's part of that. One of our strategic priorities that we >> have is to modernize the whole group on. That means simplifying our processes and enhancing >> productivity through digital solutions. So we're using chlo based technologies >> on, more importantly, open source technologies to clear a community and say, the whole group that collaborates effectively and efficiently and uses our data and expertise to embrace the jewel challenge and actually try and help solve that problem. That's great. So So how did these heart of these new ways of working benefit your team and really the entire organ, maybe even the company as a whole? So we've been given the Innovation Award for Digital conveyor both in the way it was created and also in water is delivering a couple of guys in the audience poll costal and brewskies as he they they're in the team. Their teams developed that convey here, using our jail and Dev ops and some things. We talk about this stuff a lot, but actually the they did it in a truly our jail and develops we, um that enabled them to experiment and walking with different ways. And highlight in the skill set is that we, as a group required in order to transform using these approaches, we can no move things from ideation to scale and weeks and days sometimes rather than months. Andi, I think that if we can take what they've done on DH, use more open source technology, we contain that technology and apply across the whole group to tackle this Jill challenge. And I think that we use technologists and it's really cool. I think that we can no use technology and open source technology to solve some of these big challenges that we have and actually just preserve the planet in a better way. So So what's the next step for you guys at BP? So moving forward, we we are embracing ourselves, bracing a clothed, forced organization. We need to continue to live to deliver on our strategy, build >> over the technology across the entire group to address the jewel >> challenge and continue to make some of these bold changes and actually get into and really use. Our technology is, I said, too addresses you'LL challenge and make the future of our planet a better place for ourselves and our children and our children's children. That's that's a big goal. But thank you so much, Ed. Thanks for your support. And thanks for coming today. Thank you very much. Thank you. Now comes the part that, frankly, I think his best part of the best part of this presentation We're going to meet the type of person that makes all of these things a reality. This tip this type of person typically works for one of our customers or with one of with one of our customers as a partner to help them make the kinds of bold goals like you've heard about today and the ones you'll hear about Maura the way more in the >> week. I think the thing I like most about it is you feel that reward Just helping people I mean and helping people with stuff you enjoy right with computers. My dad was the math and science teacher at the local high school. And so in the early eighties, that kind of met here, the default person. So he's always bringing in a computer stuff, and I started a pretty young age. What Jason's been able to do here is Mohr evangelize a lot of the technologies between different teams. I think a lot of it comes from the training and his certifications that he's got. He's always concerned about their experience, how easy it is for them to get applications written, how easy it is for them to get them up and running at the end of the day. We're a loan company, you know. That's way we lean on accounting like red. That's where we get our support front. That's why we decided to go with a product like open shift. I really, really like to product. So I went down. The certification are out in the training ground to learn more about open shit itself. So my daughter's teacher, they were doing a day of coding, and so they asked me if I wanted to come and talk about what I do and then spend the day helping the kids do their coding class. The people that we have on our teams, like Jason, are what make us better than our competitors, right? Anybody could buy something off the shelf. It's people like him. They're able to take that and mold it into something that then it is a great offering for our partners and for >> customers. Please welcome Red Hat Certified Professional of the Year Jason Hyatt. >> Jason, Congratulations. Congratulations. What a what a big day, huh? What a really big day. You know, it's great. It's great to see such work, You know that you've done here. But you know what's really great and shows out in your video It's really especially rewarding. Tow us. And I'm sure to you as well to see how skills can open doors for for one for young women, like your daughters who already loves technology. So I'd liketo I'd like to present this to you right now. Take congratulations. Congratulations. Good. And we I know you're going to bring this passion. I know you bring this in, everything you do. So >> it's this Congratulations again. Thanks, Paul. It's been really exciting, and I was really excited to bring my family here to show the experience. It's it's >> really great. It's really great to see him all here as well going. Maybe we could you could You guys could stand up. So before we leave before we leave the stage, you know, I just wanted to ask, What's the most important skill that you'LL pass on from all your training to the future generations? >> So I think the most important thing is you have to be a continuous learner you can't really settle for. Ah, you can't be comfortable on learning, which I already know. You have to really drive a continuous Lerner. And of course, you got to use the I ninety. Maxwell. Quite. >> I don't even have to ask you the question. Of course. Right. Of course. That's awesome. That's awesome. And thank you. Thank you for everything, for everything that you're doing. So thanks again. Thank you. You know what makes open source work is passion and people that apply those considerable talents that passion like Jason here to making it worked and to contribute their idea there. There's back. And believe me, it's really an impressive group of people. You know you're family and especially Berkeley in the video. I hope you know that the redhead, the certified of the year is the best of the best. The cream of the crop and your dad is the best of the best of that. So you should be very, very happy for that. I also and I also can't wait. Teo, I also can't wait to come back here on this stage ten years from now and present that same award to you. Berkeley. So great. You should be proud. You know, everything you've heard about today is just a small representation of what's ahead of us. We've had us. We've had a set of goals and realize some bold goals over the last number of years that have gotten us to where we are today. Just to recap those bold goals First bait build a company based solely on open source software. It seems so logical now, but it had never been done before. Next building the operating system of the future that's going to run in power. The enterprise making the standard base platform in the op in the Enterprise Olympics based operating system. And after that making hybrid cloud the architecture of the future make hybrid the new data center, all leading to the largest software acquisition in history. Think about it around us around a company with one hundred percent open source DNA without. Throughout. Despite all the fun we encountered over those last seventeen years, I have to ask, Is there really any question that open source has won? Realizing our bold goals and changing the way software is developed in the commercial world was what we set out to do from the first day in the Red Hat was born. But we only got to that goal because of you. Many of you contributors, many of you knew toe open source software and willing to take the risk along side of us and many of partners on that journey, both inside and outside of Red Hat. Going forward with the reach of IBM, Red hat will accelerate. Even Mohr. This will bring open source general innovation to the next generation hybrid data center, continuing on our original mission and goal to bring open source technology toe every corner of the planet. What I what I just went through in the last hour Soul, while mind boggling to many of us in the room who have had a front row seat to this overto last seventeen plus years has only been red hats. First step. Think about it. We have brought open source development from a niche player to the dominant development model in software and beyond. Open Source is now the cornerstone of the multi billion dollar enterprise software world and even the next generation hybrid act. Architecture would not even be possible without Lennox at the core in the open innovation that it feeds to build around it. This is not just a step forward for software. It's a huge leap in the technology world beyond even what the original pioneers of open source ever could have imagined. We have. We have witnessed open source accomplished in the last seventeen years more than what most people will see in their career. Or maybe even a lifetime open source has forever changed the boundaries of what will be possible in technology in the future. And in the one last thing to say, it's everybody in this room and beyond. Everyone outside continue the mission. Thanks have a great sum. It's great to see it

Published Date : May 11 2019

SUMMARY :

Ladies and gentlemen, please welcome Red Hat President Products and Technologies. Kennedy setting the gold to the American people to go to the moon. that point I knew that despite the promise of Lennox, we had a lot of work ahead of us. So it is an honor for me to be able to show it to you live on stage today. And we're not about the clinic's eight. And Morgan, There's windows. That means that for the first time, you can log in from any device Because that's the standard Lennox off site. I love the dashboard overview of the system, You see the load of the system, some some of its properties. So what about if I have to add a whole new application to this environment? Which the way for you to install different versions of your half stack that That is fantastic and the application streams Want to keep up with the fast moving ecosystems off programming I know some people were thinking it right now. everyone you want two or three or whichever your application needs. And I'm going to the rat knowledge base and looking up things like, you know, PV create VD, I've opened the storage space for you right here, where you see an overview of your storage. you know, we'll have another question for you. you know a lot of people, including me and people in the audience like that dark out right? much easier, including a post gra seeker and, of course, the python that we saw right there. Yeah, absolutely. And it's saved so that you don't actually have to know all the various incantations from Amazon I All right, Well, if you want to prevent a holy war in your system, you can actually use satellite to filter that out. Okay, So this VM image we just created right now from that blueprint this is now I can actually go out there and easily so you can really hit your Clyburn hybrid cloud operating system images. and I just need a few moments for it to build. So while that's taking a few moments, I know there's another key question in the minds of the audience right now, You see all my relate machines here, including the one I showed you what Consul on before. Okay, okay, so now it's progressing. it's progressing. live upgrade on stage. Detective that and you know, it doesn't run the Afghan cause we don't support operating that. So the good news is, we were protected from possible failed upgrade there, That's the idea. And I really love what you showed us there. So you were away for so long. So the really cool thing about this bird is that all of these images were built So thank you so much for that large. more to talk to you about. I'm going to show you here a satellite inventory and his So he's all the machines can get updated in one fell swoop. And there's one thing that I want to bring your attention to today because it's brand new. I know that in the minds of the audience right now. I've actually been waiting for a while patiently for you to get to the really good stuff. there's one more thing that I wanted to let folks know about. next eight and some features that we have there. So, actually, one of the key design principles of relate is working with our customers over the last twenty years to integrate OK, so we basically have this new feature. So And this is this list is growing every single day, so customers can actually opt in to the rules that are most But it comes to CVS and things that nature. This is the satellite that we saw before, and I'll grab one of the hosts and I love it so it's just a single command and you're ready to register this box right now. I'm going to show you one more thing. I know everyone's waiting for it as well, But hey, you're VM is ready. Yeah, insights is a really cool feature And I've got it in all my images already. the machines registering on cloud that redhead dot com ready to be managed. OK, so all those onstage PM's as well as the hybrid cloud VM should be popping in IRC Post Chris equals Well, We saw that in the overview, and I can actually go and get some more details about what this everybody to go try this like, we really need to get this thing going and try it out right now. don't know, sent about the room just yet. And even though it's really easy to get going on and we kind of, you know, when a little bit sideways here moments. I went brilliant. We hear about that all the time, as I just told Please welcome Lawrence Livermore National Laboratory. And thank thank you so much for coming for But first and foremost, our job is to ensure the safety, and for the geeks in the audience, I think there's a few of them out there. before And you know, Vendors seldom had a system anywhere near the size of ours, and we couldn't give them our classified open source, you know, for even open source existing. And if the security vulnerability comes out, we don't have to chase around getting fixes from Multan slo all the way to the extract excess Excuse scale supercomputing. share any more details about that system right now, but we are hoping that we're going to be able of the data center spread across so many multiple environments, management had to be I know all of you have heard we're talking to pretend to new customers about the travel out. Earlier we showed you read Enterprise Clinic St running on lots of In large part, that's because open shit for has extended management of the clusters down to the infrastructure, you can now see the machines that make up the cluster where machine represents the infrastructure. Thes software operators are responsible for aligning the cluster to a desired state. of Cooper Netease Technologies that have the operational characteristics that Dan's going to actually let us has made the sequel server operator available to me and my team. Okay, so this point we can kind of provisions, And if I scroll to the list, we can see the different workloads Jessica just mentioned Okay, But And the way they all those killers working is Okay, so looks like capacity planning and automation is fully, you know, handle this point. Is the cluster admin right now into the console? This gives a cluster I've been the ability to maintain the operators they've already installed. So this is our products application that's talking to that sequel server instance. So, you know, everyone in this room, you know, wants to see you hit that upgrade button. And that point, the new, softer operator will notice. So glad the team doesn't have to worry about that anymore and just got I think enough of these might have run by Now, if you try your app again Let's see Jessica's application up here. And yet look, we're We're into two before we're onto three. So I'm going to switch this automatic approval. And so I was glad you guys got a chance to see that rolling update across the cluster. And I'll dig into the azure cluster that we were just taking a look at. all you have to do is log in with your red hair credentials to get access. So one console, one user experience to see across the entire hybrid cloud we saw earlier with Red Thanks so much to burn his team. of technology, Rich Hodak. How you doing? center all the way to the edge while being as effective as you have been over of the open hybrid cloud, and now we're going to show you a few more things. You're in the business of oil and gas from the business retail. And this is your crew vanities. Well, that's the one that my team built right here on this stage. Oh, large shirt, you windows. open shift container storage automatically detects the available hardware configuration to What kind of storage would you What, What kind of applications would you use with the storage? four hundred messages for second, the system seems to be performing well, right? Now I am a curious because I know other folks in the audience want to know this too. So you can really use the latest coolest to manage And but I am curious about the azure functions component. and this azure function, you know, Let's see if this will We're going to see the event triggered. So next, Now let's move that note to maintain it. I wanna make sure you understand one thing, and that is there is no underlying virtual ization software here. You know, the events in the event stream changes have started to happen. And if we go to Twitter? All right, we got tweets. No. So we want to bring you a cloud like experience, but this means is I want you to go out there and think about visiting our partner Del and their booth where they have one. Right here, Right now. So, to close the loop, you can have your plaster connected to cloud redhead These clusters get a chance to talk to them about how to run your open shift for on a bare metal Thank you. rail, that the platform has to be developer friendly. Please welcome. What we go you guys trying to accomplish at BP and and How is the goal One of our strategic priorities that we have is to modernize the whole group on. So we're using chlo based technologies And highlight in the skill part of this presentation We're going to meet the type of person that makes And so in the early eighties, welcome Red Hat Certified Professional of the Year Jason Hyatt. So I'd liketo I'd like to present this to you right now. to bring my family here to show the experience. before we leave before we leave the stage, you know, I just wanted to ask, What's the most important So I think the most important thing is you have to be a continuous learner you can't really settle for. And in the one last thing to say, it's everybody in this room and

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Adam BallPERSON

0.99+

JessicaPERSON

0.99+

Josh BoyerPERSON

0.99+

PaulPERSON

0.99+

Timothy KramerPERSON

0.99+

DanPERSON

0.99+

JoshPERSON

0.99+

JimPERSON

0.99+

TimPERSON

0.99+

IBMORGANIZATION

0.99+

JasonPERSON

0.99+

Lars CarlPERSON

0.99+

Kareema SharmaPERSON

0.99+

WilbertPERSON

0.99+

Jason HyattPERSON

0.99+

BrentPERSON

0.99+

LenoxORGANIZATION

0.99+

Rich HodakPERSON

0.99+

Ed AlfordPERSON

0.99+

tenQUANTITY

0.99+

Brent MidwoodPERSON

0.99+

Daniel McPhersonPERSON

0.99+

Jessica ForresterPERSON

0.99+

LennoxORGANIZATION

0.99+

LarsPERSON

0.99+

Last yearDATE

0.99+

RobinPERSON

0.99+

DellORGANIZATION

0.99+

KarimaPERSON

0.99+

hundredsQUANTITY

0.99+

seventy poundsQUANTITY

0.99+

AmazonORGANIZATION

0.99+

Red HatORGANIZATION

0.99+

John F. KennedyPERSON

0.99+

AnselORGANIZATION

0.99+

oneQUANTITY

0.99+

Edward TellerPERSON

0.99+

last yearDATE

0.99+

TeoPERSON

0.99+

KareemaPERSON

0.99+

MicrosoftORGANIZATION

0.99+

todayDATE

0.99+

PythonTITLE

0.99+

seven individualsQUANTITY

0.99+

BPORGANIZATION

0.99+

ten ten thousand timesQUANTITY

0.99+

BostonLOCATION

0.99+

ChrisPERSON

0.99+

Del TechnologiesORGANIZATION

0.99+

pythonTITLE

0.99+

TodayDATE

0.99+

thousandsQUANTITY

0.99+

Robin GoldstonePERSON

0.99+

Jamie Thomas, IBM | IBM Think 2019


 

>> Live from San Francisco. It's theCube covering IBM Think 2019. Brought to you by IBM. >> Welcome back to Moscone Center everybody. The new, improved Moscone Center. We're at Moscone North, stop by and see us. I'm Dave Vellante, he's Stu Miniman and Lisa Martin is here as well, John Furrier will be up tomorrow. You're watching theCube, the leader in live tech coverage. This is day zero essentially, Stu, of IBM Think. Day one, the big keynotes, start tomorrow. Chairman's keynote in the afternoon. Jamie Thomas is here. She's the general manager of IBM's Systems Strategy and Development at IBM. Great to see you again Jamie, thanks for coming on. >> Great to see you guys as usual and thanks for coming back to Think this year. >> You're very welcome. So, I love your new role. You get to put on the binoculars sometimes the telescope. Look at the road map. You have your fingers in a lot of different areas and you get some advanced visibility on some of the things that are coming down the road. So we're really excited about that. But give us the update from a year ago. You guys have been busy. >> We have been busy, and it was a phenomenal year, Dave and Stu. Last year, I guess one of the pinnacles we reached is that we were named with our technology, our technology received the number one and two supercomputer ratings in the world and this was a significant accomplishment. Rolling out the number one supercomputer in Oakridge National Laboratory and the number two supercomputer in Lawrence Livermore Laboratory. And Summit as it's called in Oakridge is really a cool system. Over 9000 CPUs about 27,000 GPUs. It does 200 petaflops at peak capacity. It has about 250 petabytes of storage attached to it at scale and to cool this guy, Summit, I guess it's a guy. I'm not sure of the denomination actually it takes about 4,000 gallons of water per minute to cool the supercomputer. So we're really pleased with the engineering that we worked on for so many years and achieving these World records, if you will, for both Summit and Sierra. >> Well it's not just bragging rights either, right, Jamie? I mean, it underscores the technical competency and the challenge that you guys face I mean, you're number one and number two, that's not easy. Not easy to sustain of course, you got to do it again. >> Right, right, it's not easy. But the good thing is the design point of these systems is that we're able to take what we created here from a technology perspective around POWER9 and of course the patnership we did with Invidia in this case and the software storage. And we're able to downsize that significantly for commercial clients. So this is the world's largest artificial intlligence supercomputer and basically we are able to take that technology that we invented in this case 'cause they ended up being one of our first clients albeit a very large client, and use that across industries to serve the needs of artificial intelligence work loads. So I think that was one of the most significant elements of what we actually did here. >> And IBM has maintained, despite you guys selling off your microelectronics division years ago, you've maintained a lot of IP in the core processing and the design. You've also reached out certainly with open power, for example, to folks. You mentioned Invidia. But having that, sort of embracing that alternative processor mode as opposed to trying to jam everything in the die. Different philosophy that IBM is taking. >> Yeah we think that the workload specific processing is still very much in demand. Workloads are going to have different dimensions and that's what we really have focused on here. I don't think that this has really changed over the last decades of computing and so we're really focused on specialized computing purpose-built computing, if you will. Obviously using that on premise and also using that in our hybrid cloud strategies for clients that want to do that as well. >> What are some of the other cool things that you guys are working on that you can talk about. >> Well I would say last year was quite an interesting year in that from a mainframe perspective we delivered our first 19 inch form factor which allows us to fit nicely on a floor tile. Obviously allows clients to scale more effectively from a data center planning perspective. Allows us to have a cloud footprint, but with all the characteristics of security that you would normally expect in a mainframe system. But really tailored toward new workloads once again. So Linux form factor and going after the new workloads that a lot of these cloud data centers really need. One of our first and foremost focus areas continues to be security around that system and tomorrow there will be some announcements that will happen around Z security. I can't say what they are right now but you'll see that we are extending security in new ways to support more of these hybrid cloud scenarios. >> It's so funny. We were talking in one of our earlier segments talking about how the path of virtualization and trying to get lots of workloads into something and goes back to the device that could manage all workloads which was the Mainframe. So we've watched for many years system Z lots of Linux on there if you want to do some cool container, you know global Z that's an option, so it's interesting to watch while the pendulum swings in IT have happened the Z system has kept up with a lot of these innovations that have been going on in the industry. >> And you're right, one of our big focuses for the platform for Z and power of course is a container-based strategy. So we've created, you know last year we talked about secure container technology and we continue to evolve secure container technology but the idea is we want to eliminate any kind of friction from a developer's perspective. So if you want to design in a container-based environment then you're more easily able to port that technology or your applications, if you will to a Z mainframe environment if that's really what your target environment is. So that's been a huge focus. The other of course major invention that we announced at the Consumer Electronics show is our Quantum System One. And this represented an evolution of our Quantum system over the last year where we now have the world's really first self-contained universal quantum computer in a single form factor where we were able to combine the Quantum processor which is living in the dilution refrigerator. You guys remember the beautiful chandelier from last year. I think it's back this year. But this is all self-contained with it's electronics in a single form factor. And that really represents the evolution of the electronics in particular over the last year where we were able to miniaturize those electronics and get them into this differentiated form factor. >> What should people know about Quantum? When you see the demos, they explain it's not a binary one or zero, it could be either, a virtually infinite set of possibilities, but what should the lay person know about Quantum and try to understand? >> Well I think really the fundamental aspect of it is in today's world with traditional computers they're very powerful but they cannot solve certain problems. So when you look at areas like material science, areas like chemistry even some financial trading scenarios, the problems can either not be solved at all or they cannot be completed in the right amount of time. Particularly in the world of financial services. But in the area of chemistry for instance molecular modeling. Today we can model simple molecules but we cannot model something even as complex as caffeine. We simply don't have the traditional compute capacity to do that. A quantum computer will allow us once it comes to maturity allow us to solve these problems that are not solvable today and you can think about all the things that we could do if were able to have more sophisticated molecular modeling. All the kinds of problems we could solve probably in the world of pharmacology, material science which affects many, many industries right? People that are developing automobiles, people that are exploring for oil. All kinds of opportunities here in this space. The technology is a little bit spooky, I guess, that's what Einstein said when he first solved some of this, right? But it really represents the state of the universe, right? How the universe behaves today. It really is happening around us but that's what quantum mechanics helps us capture and when combined with IT technology the quantum computer can bring this to life over time. >> So one of the things that people point to is potentially a new security paradigm because Quantum can flip the way in which we do security on it's head so you got to be thinking around that as well. I know security is something that is very important to IBM's Systems division. >> Right, absolutely. So the first thing that happens when someone hears about quantum computing is they ask about quantum security. And as you can imagine there's a lot of clients here that are concerned about security. So in IBM research we're also working on quantum-safe encryption. So you got one team working on a quantum computer, you got another team ensuring that the data will be protected from the quantum computer. So we do believe we can construct quantum-safe encryption algorithms based on lattice-based technology that will allow us to encrypt data today and in the future when the quantum computer does reach that kind of capacity the data will be protected. So the idea is that we would start using these new algorithms far earlier than the computer could actually achieve this result but it would mean that data created today would be quantum safe in the future. >> You're kind of in your own arm's race internally. >> But it's very important. Both aspects are very important. To be able to solve these problems that we can't solve today, which is really amazing, right? And to also be able to protect our data should it be used in inappropriate ways, right? >> Now we had Ed Bausch on earlier today. Used to run the storage division. What's going on in that world? I know you've got your hands in that pie as well. What can you tell us about what's going on there? >> Well I believe that Ed and the team have made some phenomenal innovations in the past year around flash MVME technology and fusing that across product lines state-of-the-art. The other area that I think is particularly interesting of course is their data management strategy around things like Spectrum Discover. So, today we all know that many of our clients have just huge amounts of data. I visited a client last year that interesting enough had 1 million tapes, and of course we sell tapes so that's a good thing but then how do you deal and manage all the data that is on 1 million tapes. So one of the inventions that the team has worked on is a metadata tagging capability that they've now shipped in a product called spectrum discover. And that allows a client to have a better way to have a profile of their data, data governance and understand for different use cases like data governance or compliance how do they pull back the right data and what does this data really mean to them. So have a better lexicon of their data, if you will than what they can do in today's world. So I think that's very important technology. >> That's interesting. I would imagine that metadata could sit in Flash somewhere and then inform the serial technology to maybe find stuff faster. I mean, everybody thinks tape is slow because it's sequential. But actually if you do some interesting things with metadata you can-- >> There's all kinds of things you can do I mean it's one thing to have a data ocean if you will, but then how do you really get value out of that data over a long period of time and I think we're just the tip of the spear in understanding the use cases that we can use this technology for. >> Jamie, how does IBM manage that pipeline of innovation. I think we heard very specific examples of how the super computers drive HPC architectures which everybody is going to use for their AI infrastructure. Something like quantum computing is a little bit more out there. So how do you balance kind of the research through the product and what's going to be more useful to users today. >> Yeah, well, that's an interesting question. So IBM is one of the few organizations in the world really that have an applied research organization still. And Dario Gil is here this week he manages our research organization now under Arvind Krishna. An organization like IBM Systems has a great relationship with research. Research are the folks that had people working on Quantum for decades, right? And they're the reason that we are in a position now to be able to apply this in the way that we are. The great news is that along the way we're always working on a pipeline of this next generation set of technologies and innovations. Some of them succeed and some of them don't. But without doing that we would not have things like Quantum. We would not have advanced encryption capability that we pushed all the way down into our chips. We would not have quantum-safe encryption. Things like the metadata tagging that I talked about came out of IBM research. So it's working with them on problems that we see coming down the pipe, if you will that will affect our clients and then working with them to make sure we get those into the product lines at the right amount of time. I would say that Quantum is the ultimate partnership between IBM Systems and IBM research. We have one team in this case that are working jointly on this product. Bringing the skills to bear that each of us have on this case with them having the quantum physics experts and us having the electronics experts and of course the software stacks spanning both organizations is really a great partnership. >> Is there anything you could tell us about what's going on at the edge. The edge computing you hear a lot about that today. IBM's got some activities going on there? You haven't made huge splashes there but anything going on in research that you can share with us, or any directions. >> Well I believe the edge is going to be a practical endeavor for us and what I mean by that is there are certain use cases that I think we can serve very well. So if we look at the edge as perhaps a factory environment, we are seeing opportunities for our storaging compute solutions around the data management out in some of these areas. If you look at the self-driving automobile for instance, just design something like that can easily take over a hundred petabytes of data. So being able to manage the data at the edge, being able to then to provide insight appropriately using AI technologies is something we think we can do and we see that. I own factories based on what I do and I'm starting to use AI technology. I use Power AI technology in my factories for visual inspection. Think about a lot of the challenges around provenance of parts as well as making sure that they're finally put together in the right way. Using these kind of technologies in factories is just really an easy use case that we can see. And so what we anticipate is we will work with the other parts of IBM that are focused on edge as well and understand which areas we think our technology can best serve. >> That's interesting you mention visual inspection. That's an analog use case which now you're transforming into digital. >> Yeah well Power AI vision has been very successful in the last year . So we had this power AI package of open source software that we pulled together but we drastically simplified the use of this software, if you will the ability to use it deploy it and we've added vision capability to it in the last year. And there's many use cases for this vision capability. If you think about even the case where you have a patient that is in an MRI. If you're able to decrease the amount of time they stay in the MRI in some cases by less fidelity of the picture but then you've got to be able to interpret it. So this kind of AI and then extensions of AI to vision is really important. Another example for Power AI vision is we're actually seeing use cases in advertising so the use case of maybe you're at a sporting event or even a busy place like this where you're able to use visual inspection techniques to understand the use of certain products. In the case of a sporting event it's how many times did my logo show up in this sporting event, right? Particularly our favorite one is Formula One which we usually feature the Formula One folks here a little bit at the events. So you can see how that kind of technology can be used to help advertisers understand the benefits in these cases. >> Got it. Well Jamie we always love having you on because you have visibility into so many different areas. Really thank you for coming and sharing a little taste of what's to come. Appreciate it. >> Well thank you. It's always good to see you and I know it will be an exciting week here. >> Yeah, we're very excited. Day zero here, day one and we're kicking off four days of coverage with theCube. Jamie Thomas of IBM. I'm Dave Vellante, he's Stu Miniman. We'll be right back right after this short break from IBM Think in Moscone. (upbeat music)

Published Date : Feb 12 2019

SUMMARY :

Brought to you by IBM. She's the general manager of IBM's Systems Great to see you on some of the things that the pinnacles we reached and the challenge that you guys face and of course the patnership we did in the core processing and the design. over the last decades of computing on that you can talk about. that you would normally that have been going on in the industry. And that really represents the the things that we could do So one of the things that So the idea is that we would start using You're kind of in your that we can't solve today, hands in that pie as well. that the team has worked on But actually if you do the use cases that we can the super computers in the way that we are. research that you can share Well I believe the edge is going to be That's interesting you the use of this software, if you will Well Jamie we always love having you on It's always good to see you days of coverage with theCube.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

IBMORGANIZATION

0.99+

Jamie ThomasPERSON

0.99+

Lisa MartinPERSON

0.99+

JamiePERSON

0.99+

EinsteinPERSON

0.99+

Dario GilPERSON

0.99+

Stu MinimanPERSON

0.99+

San FranciscoLOCATION

0.99+

John FurrierPERSON

0.99+

DavePERSON

0.99+

Last yearDATE

0.99+

last yearDATE

0.99+

TodayDATE

0.99+

StuPERSON

0.99+

200 petaflopsQUANTITY

0.99+

IBM SystemsORGANIZATION

0.99+

last yearDATE

0.99+

InvidiaORGANIZATION

0.99+

1 million tapesQUANTITY

0.99+

MosconeLOCATION

0.99+

OakridgeLOCATION

0.99+

tomorrowDATE

0.99+

eachQUANTITY

0.99+

one teamQUANTITY

0.99+

this yearDATE

0.99+

Arvind KrishnaPERSON

0.99+

a year agoDATE

0.99+

Both aspectsQUANTITY

0.99+

oneQUANTITY

0.99+

todayDATE

0.98+

Over 9000 CPUsQUANTITY

0.98+

firstQUANTITY

0.98+

bothQUANTITY

0.98+

day oneQUANTITY

0.98+

both organizationsQUANTITY

0.98+

about 27,000 GPUsQUANTITY

0.98+

first 19 inchQUANTITY

0.98+

SummitORGANIZATION

0.98+

LinuxTITLE

0.97+

about 250 petabytesQUANTITY

0.97+

past yearDATE

0.97+

Day zeroQUANTITY

0.96+

over a hundred petabytesQUANTITY

0.96+

Moscone NorthLOCATION

0.95+

SierraORGANIZATION

0.95+

single form factorQUANTITY

0.95+

Moscone CenterLOCATION

0.94+

first clientsQUANTITY

0.93+

decadesQUANTITY

0.93+

this weekDATE

0.93+

OneQUANTITY

0.93+

Ed BauschPERSON

0.93+

Tim Kelton, Descartes Labs | Google Cloud Next 2018


 

>> Live from San Francisco, it's The Cube, covering Google Cloud Next 2018. Brought to you by, Google Cloud and its ecosystem partners. >> Hello everyone, welcome back this is The Cube, live in San Francisco for Google Cloud's big event. It's called Google Next for 2018, it's their big cloud show. They're showcasing all their hot technology. A lot of breaking news, a lot of new tech, a lot of new announcements, of course we're bringing it here for three days of wall-to-wall coverage live. It's day two, our next guest is Tim Kelton, co-founder of Descartes Labs, doing some amazing work with imagery and data science, AI, TensorFlow, using the Google Cloud platform to analyze nearly 15 petabytes of data. Tim, welcome to The Cube. >> Thanks, great to be here >> Thanks for coming on. So we were just geeking out before we came on camera of the app that you have, really interesting stuff you guys got going on. Again, really cool, before we get into some of the tech, talk to me about Descartes Labs, you're co-founder, where did it come from? How did it start? And what are some of the projects that you guys are working on? >> I think, therefore I am. >> Exactly, exactly. Yeah, so we're a little different story than maybe a normal start-up. I was actually at a national research laboratory, Los Alamos National Laboratory, and there was a team of us that were focused on machine learning and using datasets, like remotely sensing the Earth with satellite and aerial imagery. And we were working on that from around 2008 to 2014 and then we saw just this explosion in things like, use cases for machine learning and applying that to real world use cases. But then, at the same time, there was this explosion in cloud computing and how much data you could store and train and things like that. So we started the company in late 2014 and now here we are today, we have around 80 employees. >> And what's the main thing you guys do from a data standpoint, where does the data come from? Take a minute to explain that. >> Yeah, so we focus on kind of a lot of often geospatial-centric data, but a lot of satellite and aerial imagery. A lot of what we call remote sensing, sensors orbiting the Earth or at low aerial over the Earth. All different modalities, such as different bands of light, different radio frequencies, all of those types of things. And then we fuse them together and have them in our models. And what we've seen is there's not just the magic data set that gives you the pure answer, right? It's fusing of a lot of these data sets together to tell you what's happening and then building models to predict how those changes affect our customers, their businesses, their supply chain, all those types of things. >> Let's talk about, I want to riff on something real quick, I know I want to get to some of the tech in a second. But my kids and I talk about this all the time, I got four kids and they're now, two in high school, two in college and they see Uber. And they see Uber remapping New York City every five minutes with the data that they get from the GPS. And we started riffing on drones and self-driving cars or aerial cars, if we want to fly in the air with automated helicopters or devices, you got to have some sort of coordinate system. We need this geospatial, and so, I know it's fantasy now, but what you guys are kind of getting at could be an indicator of the kind of geospatial work that's coming down later. Right now there's some cool things happening but you'd need kind of a name space or coordinates so you don't bump into something or are these automated drones don't fly near airports, or cell towers, or windmills, wind farms. >> Yeah, and those are the types of problems we solve or we look to solve, change is happening over time. Often it's the temporal cadence that's almost the key indicator in seeing how things are actually changing over time. And people are coming to us and saying, "Can you quantify that?" We've done things like agriculture and looking at crops grown, look at every single farm all over the whole U.S. and then build that into our models and say how much corn is grown at this field? And then test it back over the last 15 years and then say, as we get new imagery coming in, just daily flooding in through our Cloud Native platform, then just rerunning those models and saying, are we producing more today or less today? >> And then how is that data used, for example, take the agriculture example and that's used to say, okay, this region is maybe more productive than this region? Is it because of weather? Is it because of other things that they're doing? >> You can go back through all different types of use cases, everything from maybe if you're insuring that crop, you would might want to know if that's flooded more on the left side of the road or the right side of the road, as a predictive indicator. You might say, this is looking like a drought year. How have we done in drought years of 2007 and-- >> You look at irrigation trends. >> And you were talking off-camera about the ground truth, can you use IOT to actually calibrate the ground truth? >> Yeah and that's the sensor infusion we're seeing, everywhere around us we're seeing just floods and floods of sensors, so we have the sensors above the Earth looking down, but then as you have more and more sensors on the ground, that's the set of ground truth that you can train and calibrate. You could go back and train and train over again. It's a lot harder problem than, is this a cat or a dog? >> Yeah that's why I was riffing on the concept of a name space, the developer concept around, this is actually space. If you want to have flying drones deliver packages to transportation, you're going to need to know, some sort of triangulation, know what to do. But I got to ask you a question, so what are some of the problems that you're asked to look at, now that you have, you have the top-down view geospace, you got some ground truth sensor exploding in with more and more devices at the network, as a instrument anywhere it can have the IP or whatnot. What are some of the problems that you guys get asked to look at, you mentioned the agriculture, what else are you guys solving? >> Any sort of land use or land classification, or facilities and facility monitoring. It could be any sort of physical infrastructure that you're wanting to quantify and predict how those changes over time might impact that business vertical. And they're really varied, they're everything from energy and agriculture, and real estate, and things like that. Just last Friday, I was talking with, we have a two parts to our company. We have from the tech side, we have the engineering side which is normal engineering, but then we also have this applied science, where we have a team of scientists that are trying to build models often for our customers. 'Cause they're not, this is geospatial and machine learning, that's a rare breed of person. >> You don't want to cross pollinate. >> Yeah, and that's just not everywhere. Not all of our customers have that type of individual. But they were telling me, they were looking at the hurricane season coming up this Fall, and they had a building detector and they can detect all the buildings. So in just a couple hours, they ran that over all of the state of Florida and identified every building in the whole state of Florida. So now, as the seasons come in, they have a way to track that. >> They can be proactive and notify someone, hey you're building might need some boards on it or some sort of risk. >> Yeah and the last couple years look at all the weather events. In California we've had droughts and fires, but then you have flooding and things like that. And you're even able to start taking new types of sensors that are coming out, like the European Space Agency has a sensor that we ingest and it does synthetic aperture radar, where it's sending a radar signal down to the Earth and capturing it. So you can do things like water levels in reservoirs and things like that. >> And look at irrigation for farming, where is the droughts going to be? Where is the flooding going to be? So, for the folks watching, go to descarteslabs.com/search they got a search engine there, I wish we could show it on screen here but we don't have the terminal for it on this show. But it's a cool demo, you can search and find, you can pick an area, football field, and irrigation ditch, anything, cell tower, wind farm, and find duplicates and it gives you a map around the country. So the question is, is that, what is going on in the tech? 'Cause you got to use Cloud for this, so how do you make it all happen? >> Yeah, so we have two real big components to our tech space the first is, obviously we have lots and lots of satellite and aerial imagery, that's one of the biggest and messiest data sets and there's all types of calibration workloads that we have to do. So we have this ingest pipeline that processes it, cleans it, calibrates it, removes the clouds, not as in cloud computing infrastructure, but as in clouds over the head and then the shadows they emit down on the Earth. And we have this big ingestion process that cleans it all. And then finally compresses it and then we use things like GCS as an infinitely scalable object store. And what we really like on the GCS side is the performance we get 'cause we're reading and pulling in and out that compressed imagery all day long. So every time you zoom in or zoom out, like we're expanding it and removing that, but then our models, sometimes what we've done is, we'll want to maybe we're making a model in vegetation and we just want to look at the infrared bands. So we'll want to fuse together satellites from many different sources, fuse together ground sources, sensor sources, and just maybe pull in just one of those bands of light, not pull the whole files in. So that's what we've been building on our API. >> So how do you find GCP? What do you like? We've been all the users this week, what are the strengths? What are some of the weaknesses? What's on their to-do list? Documentation comes up a lot, we'd like to see better documentation, okay that's normal but what's your perspective? >> If you write code or develop, you always want something, you know it's always out of feature parody and stuff. From our perspective, the biggest strengths of GCP, one of the most core strengths is the network. The performance we've been able to see from the network is basically on par with what used to have, when we were at national laboratories we'd have access to high performance, super computing, some of the biggest clusters in the world. And in the network, in GCS and how we've been able scale linearly, like our ingest pipelines, we processed a petabyte of data on GCP in 16 hours through our processing pipeline on 30,000 cores. And we'll just scale that network bandwidth right up. >> Do you tap the premium network service or is it just the standard network? >> This is just stock. That was actually three years ago that we got to our bandwidth. >> How many cores? >> That was 30,000. >> Cause Google talked this morning about their standard network and the premium network, I don't know if you saw the keynote, with you get the low latency, if you pay a little bit more, proximate to your users, but you're saying on the standard network, you're getting just incredible... >> That was early 2015, it's just a few people in our company scaling up our ingest pipeline. We look at that, from then that was 40 years of imagery from NASA's Landsat program that we pulled in. And not that far off in the future, that petabyte's going to be a daily occurrence. So we wanted our ingest to scale and one of our big questions early on is actually, could the cloud actually even handle that type of scale? So that was one of the earliest workloads on things like-- >> And you feel good now about right? >> Oh yeah, and that was one of the first workloads on preemptible instances as well. >> What's on the to-do list? What would make your life better? >> So we've been working a lot with Istio that was shown here. So we actually gave a demo, we were in a couple talks yesterday on how we leverage and use Istio on our microservices. Our APIs are all built on that and so is our multi tenant SAS platform. So our ML team, when they're building models, they're all building models off different use cases, different bands of light, different geographic regions, different temporal windows. So we do all of that in Kubernetes and so those are all-- >> And what does Istio give you guys? What's the benefit of Istio? >> For us, we're using it on a few of our APIs and it's things like, really being able to see when you've start splitting out these microservices that network and that node-to-node or container-to-container latency and where things break down. Being about to do circuit retries or being able to try a response three different times before I return back a 500 or rate limit some of your APIs so they don't get crushed or you can scale them appropriately. And then actually being able to make custom metrics and to be able to fuse that back into how GKE scales on the node pools and stuff like that. >> So okay, that's how you're using it. So you were talking about Istio before, there's things that you'd like to see that aren't there today? More maturity or? >> Yeah I think Istio's like a very early starting point on all of this types of tools. >> So you want more? >> Oh yeah, definitely, definitely but I love the direction they're going and I love that it's open and if I ever wanted to I could build it on prem. But we were built basically native in the cloud so all of our infrastructure's in the cloud. We don't even have a physical server. >> What does open do for you, for your business? Is it just a good feeling? Do you feel like you're less locked in? Does it feel like you're giving back to the community? >> We read the Kubernetes source code. We've committed changes. Just recently, there's Google's open source, the OpenCensus library for tracing and things like that. We committed PRs back into that last week. We're looking for change. Something that doesn't quite work how we want, we can actually go.. >> Cause you're upstream >> Add value... >> For your business. >> We get in really hard problems, you kind of need to understand that code sometimes at that level. Build Tools, where Google took their internal tool, Blaze and opened source that bezel and so we're been using that. We're using that on our monorepos to do all of our builds. >> So you guys take it downstream, you work on it, and then all upstream contributions, is that how it works? >> Sometimes. >> Whenever you need to. >> Even Kubernetes, we've looked, if nothing else we've looked at the code multiple times and say, "Oh, this is why that autoscaler is behaving this way." Actually now I can understand how to change my workload a little bit and alter that so that the scaler works a little bit more performantly or we extract that last 10% of performance out to try and save that last 10%. >> This is a fascinating, I would love to come visit you guys and check out the facilities. It's the coolest thing ever. I think it's the future, there's so much tech going on. So many problems that are new and cool. You got the compute to boot behind it. Final question for you, how are you using analytics and machine learning? What's the key things you're using from Google? What are you guys building on your own? If anything, can you share a quick note on the ML and the analytics, how you guys are scaling that up? >> We've been using TensorFlow since very early days that geovisual search that you were saying, where we user TensorFlow models in some of those types of products. So we're big fans of that as well. And we'll keep building out models where it's appropriate. Sometimes we use very simple packages. You're just doing linear regression or things like that. >> So you're just applying that in. >> Yeah, it's the right tool for the right problem and always picking that and applying that. >> And just quick are you guys are for-profit, non-profit? What's the commercial? >> Yeah, we're for-profit, we're a Silicon Valley VC-backed company, even though we're in the mountains. >> Who's in the VCs? Which VCs are in? >> CrosslinK Capital is one our leading VCs, Eric Chin and that team down there and they've been great to work with. So they took a chance in a crazy bunch of scientists from up in the mountains of New Mexico. >> That sounds like a good VC back opportunity. >> Yeah and we had a CEO that was kind of from the Bay Area, Mark Johnson, and so we needed kind of both of those to really be successful. >> I mean I'm a big believer you throw money at great smart people and then merging markets like this. And you got a mission that's super cool, it's obvious that it's a lot to do and there's opportunities as well. >> Tremendous opportunities. Congratulations, Tim. Thanks for coming on The Cube. Tim Kelton, he's the co-founder at Descartes Labs. Here in The Cube, breaking down, bringing the technology, they got applied physicists, all these brains working on the geospatial future for The Cube. We are geospatial here in The Cube, in Google Next in San Francisco, I'm John Furrier, Dave Vellante, stay with us, for more coverage after this short break.

Published Date : Jul 25 2018

SUMMARY :

Brought to you by, Google Cloud a lot of new announcements, of of the app that you have, and applying that to real world use cases. And what's the main thing you guys do that gives you the pure answer, right? of the tech in a second. and then say, as we get on the left side of the road Yeah and that's the But I got to ask you a question, We have from the tech side, So now, as the seasons come in, and notify someone, Yeah and the last couple years and it gives you a map around the country. the first is, obviously we And in the network, in GCS that we got to our bandwidth. and the premium network, And not that far off in the future, one of the first workloads Kubernetes and so those are all-- on the node pools and stuff like that. So you were talking about Istio before, on all of this and I love that it's open We read the Kubernetes source code. and opened source that bezel so that the scaler works and the analytics, how you that you were saying, and always picking that and applying that. Yeah, we're for-profit, Eric Chin and that team down there That sounds like a Mark Johnson, and so we And you got a mission that's super cool, Tim Kelton, he's the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Tim KeltonPERSON

0.99+

Dave VellantePERSON

0.99+

European Space AgencyORGANIZATION

0.99+

Mark JohnsonPERSON

0.99+

NASAORGANIZATION

0.99+

TimPERSON

0.99+

CaliforniaLOCATION

0.99+

Descartes LabsORGANIZATION

0.99+

twoQUANTITY

0.99+

John FurrierPERSON

0.99+

Descartes LabsORGANIZATION

0.99+

EarthLOCATION

0.99+

San FranciscoLOCATION

0.99+

30,000QUANTITY

0.99+

30,000 coresQUANTITY

0.99+

Eric ChinPERSON

0.99+

40 yearsQUANTITY

0.99+

GoogleORGANIZATION

0.99+

16 hoursQUANTITY

0.99+

Bay AreaLOCATION

0.99+

four kidsQUANTITY

0.99+

firstQUANTITY

0.99+

two partsQUANTITY

0.99+

CrosslinK CapitalORGANIZATION

0.99+

late 2014DATE

0.99+

early 2015DATE

0.99+

three daysQUANTITY

0.99+

yesterdayDATE

0.99+

todayDATE

0.99+

UberORGANIZATION

0.99+

New York CityLOCATION

0.99+

bothQUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

three years agoDATE

0.99+

last FridayDATE

0.99+

500QUANTITY

0.98+

oneQUANTITY

0.98+

last weekDATE

0.98+

10%QUANTITY

0.98+

descarteslabs.com/searchOTHER

0.97+

around 80 employeesQUANTITY

0.97+

2018DATE

0.97+

2014DATE

0.97+

New MexicoLOCATION

0.97+

2007DATE

0.97+

U.S.LOCATION

0.96+

this weekDATE

0.96+

FloridaLOCATION

0.96+

this FallDATE

0.94+

KubernetesTITLE

0.94+

2008DATE

0.94+

OpenCensusTITLE

0.92+

IstioORGANIZATION

0.9+

last 15 yearsDATE

0.89+

nearly 15 petabytesQUANTITY

0.89+

last couple yearsDATE

0.88+

first workloadsQUANTITY

0.87+

Google CloudTITLE

0.86+

TensorFlowTITLE

0.86+

couple hoursQUANTITY

0.81+

IstioTITLE

0.81+

threeQUANTITY

0.8+

The CubeORGANIZATION

0.8+

every five minutesQUANTITY

0.77+

day twoQUANTITY

0.77+

BlazeTITLE

0.77+

Los Alamos National LaboratoryORGANIZATION

0.76+

two real big componentsQUANTITY

0.76+