Armando Acosta, Dell Technologies and Matt Leininger, Lawrence Livermore National Laboratory
(upbeat music) >> We are back, approaching the finish line here at Supercomputing 22, our last interview of the day, our last interview of the show. And I have to say, Dave Nicholson, my co-host — my name is Paul Gillin — I've been attending trade shows for 40 years, Dave, and I've never been to one like this. The type of people who are here, the type of problems they're solving, what they talk about — the trade shows are typically so speeds and feeds. They're so financial, they're so ROI, they all sound the same after a while. This is truly a different event. Do you get that sense? >> A hundred percent. Now, I've been attending trade shows for 10 years since I was 19, in other words, so I don't necessarily have your depth. No, but seriously, Paul, totally, completely different than any other conference. First of all, there's the absolute allure of looking at the latest and greatest, coolest stuff. I mean, when you have NASA lecturing on things, when you have Lawrence Livermore Labs, that we're going to be talking to here in a second, it's a completely different story. You have all of the academics, you have students who are in competition and also interviewing with organizations. It's phenomenal. I've had chills a lot this week. >> And I guess our last two guests sort of represent that cross section. Armando Acosta, director of HPC Solutions — High Performance Computing Solutions — at Dell. And Matt Leininger, who is the HPC Strategist at Lawrence Livermore National Laboratory. Now, there is perhaps — I don't know, you can correct me on this — but perhaps no institution in the world that uses more computing cycles than Lawrence Livermore National Laboratory, and it is always on the leading edge of what's going on in supercomputing. And so we want to talk to both of you about that. Thank you. Thank you for joining us today. >> Sure, glad to be here. >> Thanks for having us. >> Let's start with you, Armando. Well, let's talk about the juxtaposition of the two of you. I would not have thought of LLNL as being a Dell reference account in the past. Tell us about the background of your relationship and what you're providing to the laboratory. >> Yeah, so we're really excited to be working with Lawrence Livermore, working with Matt. But actually this process started about two years ago. So we started looking at essentially what was coming down the pipeline, you know, what were the customer requirements, what did we need in order to make Matt successful. And so the beauty of this project is that we've been talking about this for two years, and now it's finally coming to fruition, and now we're actually delivering systems and delivering racks of systems. But what I really appreciate is Matt coming to us, us working together for two years and really trying to understand what are the requirements, what's the schedule, what do we need to hit in order to make them successful. >> At Lawrence Livermore, what drives your computing requirements, I guess? You're working on some very, very big problems, but a lot of very complex problems. How do you decide what you need to procure to address them? >> Well, that's a difficult challenge. I mean, our mission is a national security mission, dealing with making sure that we do our part to provide the high performance computing capabilities to the US Department of Energy's National Nuclear Security Administration. We do that through the Advanced Simulation and Computing program.
Its goal is to provide that computing power to make sure that the US nuclear weapons stockpile is safe, secure, and effective. So how do we go about doing that? There's a lot of work involved. We have multiple platform lines that we accomplish that goal with. One of them is the advanced technology systems. Those are the ones you've heard about a lot — they're pushing towards exascale, the GPU technologies incorporated into those. We also have a second line, a platform line, called the Commodity Technology Systems. That's where right now we're partnering with Dell on the latest generation of those. Those systems are a little more conservative — they're right now CPU-only driven — but they're also intended to be the everyday workhorses. So those are the first systems our users get on. It's very easy for them to get their applications up and running. They're the first things they use, usually on a day-to-day basis. They run a lot of small to medium size jobs that you need to do to figure out how to most effectively use, and what workloads you need to move to, the even larger systems to accomplish our mission goals. >> The workhorses. >> Yeah. >> What have you seen here these last few days of the show? What excites you? What are the most interesting things you've seen? >> There's all kinds of things that are interesting. Probably the most interesting ones I can't talk about in public, unfortunately, 'cause of NDA agreements, of course. But it's always exciting to be here at Supercomputing. It's always exciting to see the products that we've been working with industry and co-designing with them on for, you know, several years before the public actually sees them. That's always an exciting part of the conference as well, specifically with CTS-2 — it's exciting. As was mentioned before, I've been working with Dell for nearly two years on this, but the systems first started being delivered this past August. And so we're just taking the initial deliveries of those. We've deployed, you know, roughly about 1,600 nodes now, but that'll ramp up to over 6,000 nodes over the next three or four months. >> So how does this work intersect with Sandia and Los Alamos? Explain to us the relationship there. >> Right, so those three laboratories are the laboratories under the National Nuclear Security Administration. We partner together on CTS. So the architectures — as you were asking, how do we define these things — it's the labs coming together. Those three laboratories, we define what we need for that architecture. We have a joint procurement that is run out of Livermore, but then the systems are deployed at all three laboratories, and then they serve the programs that I mentioned for each laboratory as well. >> I've worked in this space for a very long time, you know. I've worked with agencies where the closest I got to anything they were actually doing was the sort of guest suite outside the secure area. And sometimes there are challenges when you're communicating. It's like, you have a partner like Dell who has all of these things to offer, all of these ideas. You have requirements, but maybe you can't share 100% of what you need to do. How do you navigate that? Who makes the decision about what can be revealed in these conversations? You talk about NDAs in terms of what's been shared with you; you may be limited in terms of what you can share with vendors. Does that cause inefficiency? >> To some degree.
I mean, we do a good job within the NNSA of understanding what our applications need and then mapping that to technical requirements that we can talk about with vendors. We also have things kind of in between — we've done this for many years. A recent example, of course, is with the Exascale Computing Program and some of the things it's doing, creating proxy apps or mini apps that are smaller versions of some of the application areas that are important to us — hydrodynamics, materials science, things like that. And so we can collaborate with vendors on those proxy apps to co-design systems and tweak the architectures. In fact, we've done a little bit of that with CTS-2 — not as much in CTS as maybe in the ATS platforms — but that kind of general idea of how we collaborate through these proxy applications is something we've used across platforms. >> Now, is Dell one of your co-design partners? >> In CTS-2, absolutely, yep. >> And how — what aspects of CTS-2 are you working on with Dell? >> Well, the architecture itself was the first, you know, thing we worked with them on. We had a procurement come out, and they bid an architecture on that. We had worked with them, you know, previously on our requirements, understanding what our requirements are. But that architecture today is based on the fourth generation Intel Xeon that you've heard a lot about at the conference. We are one of the first customers to get those systems in. All the systems are interconnected together with the Cornelis Networks Omni-Path network that we've used before and are very excited about as well. And we build up from there. The systems get integrated in by the operations teams at the laboratory. They get integrated into our production computing environment. Dell is really responsible, you know, for designing these systems and delivering them to the laboratories. The laboratories then work with Dell. We have a software stack that we provide on top of that called TOSS, for Tri-Lab Operating System Stack. It's based on Red Hat Enterprise Linux. But the goal there is that it allows us a common user environment, a common simulation environment, across not only CTS-2 but maybe older systems we have, and even the larger systems that we'll be deploying as well. So from a user perspective, they see a common user interface, a common environment, across all the different platforms that they use at Livermore and the other laboratories. >> And Armando, what does Dell get out of the co-design arrangement with the lab? >> Well, we get to make sure that they're successful. But the other big thing that we want to do is — typically when you think about Dell and HPC, a lot of people don't make that connection together. And so what we're trying to do is make sure that, you know, they know that, hey, whether you're a workgroup customer at the smallest end or a supercomputer customer at the highest end, Dell wants to make sure that we have the right portfolio to match any needs across this. But what we were really excited about — this is kind of our, you know, big CTS-2, first thing we've done together. And so, you know, hopefully this has been successful. We've made Matt happy, and we look forward to the future and what we can do with bigger and bigger things. >> So will the labs be okay with Dell coming up with a marketing campaign that said something like, "We can't confirm that alien technology is being reverse engineered"? >> Yeah, that would fly. >> I mean, that would be right, right?
And I have to ask you the question directly, and the way you can answer it is by smiling like you're thinking, what a stupid question. Are you reverse engineering alien technology at the labs? >> Yeah, you'd have to ask the PR office. >> Okay, okay. (all laughing) >> Good answer. >> No, but it is fascinating, because to a degree it's like, you could say, yeah, we're working together, but if you really want to dig into it, it's like, "Well, I kind of can't tell you exactly how some of this stuff is." Do you consider anything that you do from a technology perspective — not what you're doing with it, but the actual stack — do you try to design proprietary things into the stack, or do you say, "No, no, no, we're going to go with standards, and then what we do with it is proprietary and secret"? >> Yeah, it's more the latter. >> It is the latter? Yeah, yeah, yeah. So you're not going to try to reverse engineer the industry? >> No, no. We want the solutions that we develop to enhance the industry, to be able to apply to a broader market, so that we can, you know, gain from the volume of that market, the lower cost that it would enable, right? If we go off and develop more and more customized solutions, that can be extraordinarily expensive. And so we're really looking to leverage the wider market, but do what we can to influence that, to develop key technologies that we and others need that can enable us in the high performance computing space. >> We were talking with Satish Iyer from Dell earlier about validated designs, Dell's reference designs for pharma and for manufacturing in HPC. Are you seeing, Armando, that HPC — traditionally more of an academic research discipline — is beginning to come together with commercial applications? And are these two markets beginning to blend? >> Yeah, I mean, so here's what's happening: you have this convergence of HPC, AI and data analytics. And so when you have that combination of those three workloads, they're applicable across many vertical markets, right? Whether it's financial services, whether it's life sciences, government and research. But what's interesting — and Matt won't brag about it — but a lot of stuff that happens in the DOE labs trickles down to the enterprise space, trickles down to the commercial space, because these guys know how to do it at scale, they know how to do it efficiently, and they know how to hit the mark. And so a lot of customers say, "Hey, we want what CTS-2 does," right? And so it's very interesting. What I love is their process, the way they do the RFP process. Matt talked about the benchmarks and helping us understand, hey, here's kind of the mark you have to hit. And then at the same time, you know, if we make them successful, then obviously it's better for all of us, right? You know, I want a secure nuclear stockpile, so I hope everybody else does as well. >> The software stack you mentioned — I think, TOSS? >> TOSS. >> TOSS. >> Yeah. >> How did that come about? Why did you feel the need to develop your own software stack? >> It originated back, you know, even 20 years ago, when we first started building Linux clusters, when that was a crazy idea. Livermore and other laboratories were really the first to start doing that and then pushing them to larger and larger scales. And it was key to have Linux running on that at the time. And so we had the... >> So 20 years ago you knew you wanted to run on Linux? >> 20 years ago, yeah, yeah.
And we started doing that, but we needed a way to have a version of Linux that we could partner with someone on that would do, you know, the support, just like you get from an OS vendor, right? Security support and other things. But then layer on top of that all the HPC stuff you need, either to run the system, to set up the system, or to support our user base. And that evolved into TOSS, which is the Tri-Lab Operating System Stack. Now it's based on the latest version of Red Hat Enterprise Linux, as I mentioned before, with all the other HPC magic, so to speak — and all that HPC magic is open source. It may be things that we develop, but it's nothing closed source. So all that's there. We run it across all these different environments, as I mentioned before. And it really originated back in the early days of, you know, Beowulf clusters, Linux clusters, as just needing something that we could use to run on multiple systems and start creating that common environment at Livermore and then eventually the other laboratories. >> How is a company like Dell able to benefit from the open source work that's coming out of the labs? >> Well, when you look at the open source — I mean, open source is good for everybody, right? Because if you make an open source tool available, then people start essentially using that tool. And so if we can make that open source tool more robust and get more people using it, it gets more enterprise ready. And so with that, you know, we're all about open source, we're all about standards, and really about raising all boats, 'cause that's what open source is all about. >> And with that, we are out of time. This is our 28th interview of SC22, and you're taking us out on a high note. Armando Acosta, director of HPC Solutions at Dell. Matt Leininger, HPC Strategist, Lawrence Livermore National Laboratory. Great discussion. Hopefully it was a good show for you. Fascinating show for us, and thanks for being with us today. >> Thank you very much. >> Thank you for having us. >> Dave, it's been a pleasure. >> Absolutely. >> Hope we'll be back next year. >> Can't believe it went by so fast. Absolutely, at SC23. >> We hope you'll be back next year. This is Paul Gillin. That's a wrap. With Dave Nicholson, for theCUBE, see you here next time. (soft upbeat music)
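The proxy apps Leininger mentions are the currency of this kind of co-design: small, openly shareable kernels that stand in for the real, often classified applications. Purely as an illustration of the idea — this is a toy, not an actual NNSA mini-app — a stencil kernel like the sketch below captures a code's compute and memory-access pattern, so a vendor can benchmark and tune candidate hardware against it without ever seeing the production physics.

```python
# Toy "proxy app": a 1-D explicit heat-diffusion stencil.
# Illustrative only -- real mini-apps (for hydrodynamics, materials
# science, etc.) are larger, but the idea is the same: capture a code's
# compute and memory-access pattern without any sensitive physics.
import time
import numpy as np

def diffuse(u, alpha, dt, dx, steps):
    """Advance the temperature field u with a 3-point stencil."""
    c = alpha * dt / dx**2
    for _ in range(steps):
        # u[i] += c * (u[i-1] - 2*u[i] + u[i+1]) on the interior points
        u[1:-1] += c * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return u

if __name__ == "__main__":
    n, steps = 10_000_000, 200          # a problem size a vendor can scale up or down
    u = np.zeros(n)
    u[n // 2] = 1.0                     # point source in the middle
    t0 = time.perf_counter()
    diffuse(u, alpha=1.0, dt=0.1, dx=1.0, steps=steps)
    elapsed = time.perf_counter() - t0
    # Each interior point does roughly 4 flops per step; report a rough rate.
    print(f"{4 * (n - 2) * steps / elapsed / 1e9:.2f} GFLOP/s in {elapsed:.1f} s")
```

Benchmarking a handful of kernels like this across candidate CPUs, memory configurations, and interconnects is, in spirit, how the labs and vendors can iterate on an architecture years before a system such as CTS-2 is delivered.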
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Matt Leininger | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
National Nuclear Security Administration | ORGANIZATION | 0.99+ |
Armando Acosta | PERSON | 0.99+ |
Cornell Network | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Matt | PERSON | 0.99+ |
CTS-2 | TITLE | 0.99+ |
US Department of Energy | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.99+ |
40 years | QUANTITY | 0.99+ |
two years | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
Lawrence Livermore | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
CTS | TITLE | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Paul | PERSON | 0.99+ |
Linux | TITLE | 0.99+ |
NASA | ORGANIZATION | 0.99+ |
HPC Solutions | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Lawrence Livermore Labs | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Los Alamos | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
Lawrence Livermore National Laboratory | ORGANIZATION | 0.99+ |
Armando | ORGANIZATION | 0.99+ |
each laboratory | QUANTITY | 0.99+ |
second line | QUANTITY | 0.99+ |
over 6,000 nodes | QUANTITY | 0.99+ |
20 years ago | DATE | 0.98+ |
three laboratories | QUANTITY | 0.98+ |
28th interview | QUANTITY | 0.98+ |
Lawrence Livermore National Laboratories | ORGANIZATION | 0.98+ |
three | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Tri-Lab | ORGANIZATION | 0.98+ |
Sandia | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.97+ |
First | QUANTITY | 0.97+ |
two markets | QUANTITY | 0.97+ |
Supercomputing | ORGANIZATION | 0.96+ |
first systems | QUANTITY | 0.96+ |
fourth generation | QUANTITY | 0.96+ |
this week | DATE | 0.96+ |
Livermore | ORGANIZATION | 0.96+ |
Omni-Path Network | ORGANIZATION | 0.95+ |
about 1600 nodes | QUANTITY | 0.95+ |
Lawrence Livermore National Laboratory | ORGANIZATION | 0.94+ |
LLNL | ORGANIZATION | 0.93+ |
NDA | ORGANIZATION | 0.93+ |
Robin Goldstone, Lawrence Livermore National Laboratory | Red Hat Summit 2019
>> Announcer: Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2019. Brought to you by Red Hat. >> Welcome back here on theCUBE as we continue our Red Hat Summit 2019 coverage, along with Stu Miniman. I'm John Walls. We're now joined by Robin Goldstone, who's an HPC solution architect at the Lawrence Livermore National Laboratory. Hello, Robin. >> Hi there. Good to see you. >> I saw you on the keynote stage this morning. Fascinating presentation, I thought. First off, for the viewers at home who might not be too familiar with the laboratory, if you could please just give that thirty-thousand-foot level of just what kind of national security work you're involved with. >> Sure. So yes, indeed, we are a national security lab. And you know, first and foremost, our mission is assuring the safety, security and reliability of our nuclear weapons stockpile. And there's a lot to that mission. But we also have a broader national security mission. We work on counterterrorism and nonproliferation, a lot of cybersecurity kinds of things, and even just general science. We're doing things with precision medicine and just all sorts of interesting technology. >> Fascinating. >> So, Robin, you know, so much in IT — you know, the buzzword the last months, years, has been scale. We talk about what public cloud people are doing. Labs like yours have been challenged with scale in many other ways, especially performance, which is usually at the forefront of where things are. You talked about in the keynote this morning, Sierra is the latest generation supercomputer — the number two, you know, supercomputer. So I don't know how many people understand a petaflop, one hundred twenty five petaflops, and the like, but tell us a little bit about, you know, kind of the why and the what of that. >> Right. So Sierra's a supercomputer. And what's unique about these systems is that we're solving — there are lots of systems that are networked together, maybe bigger numbers of servers than us, but we're doing scientific simulation, and that kind of computing requires a level of parallelism and is very tightly coupled. So all the servers are running a piece of the problem. They all have to sort of operate together. If any one of them is running slow, it makes the whole thing go slow. So it's really this tightly coupled nature of supercomputers that makes things really challenging. You know, we talked about performance. If one server is just running slow for some reason, you know, everything else is going to be affected by that. So we really do care about performance. And we really do care about just every little piece of the hardware, you know, performing as it should. >> So I think in national security, nuclear stockpiles — I mean, there is nothing more important, obviously, than the safety and security of the American people, and you're at the center of that, right? You're open source, right? You know, how does that work? Because as much trust and faith and confidence as we have in the open source community, this is an extremely important responsibility that's being consigned, more or less, to this open source community. >> Sure. You know, at first, people do have that feeling that we should be running some secret sauce. I mean, our applications themselves are secret. But when it comes to the system software and all the software around the applications, I mean, open source makes perfect sense.
We started out running really closed source solutions. In some cases the hardware itself was really proprietary, and of course, the vendors who made the hardware proprietary wanted their software to be proprietary. But I think most people can resonate with this: you buy a piece of software, and the vendor tells you it's great, it's going to do everything you need it to do, and trust us, right? Okay. But at our scale, it often doesn't work the way it's supposed to work. They've never tested it at our scale. And when it breaks, now they have to fix it — they're the only ones that can fix it. And in some cases we found the vendors decided, you know what, no one else has one quite like yours, and it's a lot of work to make it work for you, so we're just not going to fix it — and you can't wait, right? And so open source is just the opposite of that, right? I mean, we have all that visibility into that software. If it doesn't work for our needs, we can make it work for our needs, and then we can give it back to the community. Because even though people aren't doing things at the scale that we are today, a lot of the things that we're doing really do trickle down and can be used by a lot of other people. >> And it's something really important because, as you said, it used to be, OK, the Cray supercomputer is what we know — you know, let's use proprietary interfaces, I need the highest speed, and therefore it's not the general purpose stuff. You moved to x86. Linux is something that's been in the supercomputers — why? But it's a finely tuned version there: get, you know, the duct tape and baling wire, and don't breathe on it once you get it running. You're running RHEL today — talk a little bit about the journey with RHEL, you know, now on the supercomputers. >> Right. So again, there had always been this sort of proprietary, really high-end supercomputing. But in about the late 1990s, early 2000s, that's when we started building these commodity clusters. You know, at the time, I think Beowulf was the terminology for that. But basically we were looking at how we could take these basic off-the-shelf servers and make them work for our applications, and trying to take advantage of as much commodity technology as we could, because we didn't want to reinvent anything. We wanted to use as much as possible. And so we've really ridden that curve. And initially it was just Red Hat Linux — there was no RHEL at the time — but then when we started getting into the newer architectures, going from x86 to x86-64 and Itanium, you know, the support just wasn't there in basic Red Hat. And again, even though it's open source and we could do everything ourselves, we don't want to do everything ourselves. I mean, having an organization, having this enterprise edition of Red Hat, having a company stand behind it — the software is still open source, we can look at the source code, we can modify it if we want, but at the end of the day, we're happy to hand over some of our challenges to Red Hat and let them do what they do best. They have great, you know, reach into the kernel community. They can get things done that we can't necessarily get done. So it's a great relationship. >> Yes. So that last mile, getting it on Sierra there — is that the first time on one of the kind of big showcase supercomputers? >> Sure.
And part of the reason for that is because those big computers themselves are basically now mostly commodity. I mean, again, you talked about a Cray, some really exotic architecture — Sierra is a collection of Linux servers. Now, in this case, they're running the POWER architecture instead of x86. So Red Hat did a lot of work with IBM to make sure that POWER was fully supported in the RHEL stack. But, you know, again, the servers themselves are somewhat commodity. We're running NVIDIA GPUs; those are widely used everywhere, obviously a big deal for machine learning and such. The biggest proprietary component we're still dealing with is the interconnect. So, you know, I mentioned these clusters have to be really tightly coupled. That performance has to be really superior, and most importantly the latency, right — they have to be super low latency, and Ethernet just doesn't cut it. >> So you run InfiniBand today, I'm assuming? >> We're running Mellanox InfiniBand on Sierra and on some of our commodity clusters. We run Mellanox on other ones, and we run Intel Omni-Path, which is just another flavor of InfiniBand. You know, if we could use Ethernet, we would, because again, we would get all the benefit and the leverage of what everybody else is doing, but it just hasn't quite been able to meet our needs in that area. >> Now, if I recall the history lesson we got a bit of this morning, the laboratory has been around since the early fifties, born of the Cold War. And so obviously open source was not where it all began, you know. What about your evolution to open source? I mean, as this has taken hold, there had to be a tipping point at some point that converted and made the laboratory believers. But if you can, can you go back to that process? And was it a big moment for you, a big turn? Or was it just kind of a steady migration? >> Well, it's interesting. If you go way back, we actually wrote the operating systems for those early Cray computers. We wrote those operating systems in-house because there really was no operating system that would work for us. So we've been software developers for a long time. We've been system software developers, but at that time it was all proprietary and closed source. So we know how to do that stuff. What really happened, I think, was when these commodity clusters came along, when we showed that we could build a, you know, a cluster that could perform well for our applications on that commodity hardware — we started with Red Hat, but we had to add some things on top. We had to add the software that made a bunch of individual servers function as a cluster. So all the system management stuff, the resource manager, the thing that lets us schedule jobs, batch jobs — we wrote that software. The parallel file system — those things did not exist in the open source, and we helped to write those things, and those things took on lives of their own. So Lustre is a parallel file system that we helped develop. Slurm — anyone outside of HPC probably hasn't heard of it, but it's a resource manager that again is very widely popular. So the lab really saw that, you know, we got a lot of visibility by contributing this stuff to the community. And I think everybody is embracing it. And we develop open source software at all different layers. >> On this software, Robin — you know, I'm curious how you look at public cloud. So, you know, when I look at the public cloud, they do a lot with government agencies, they've got GovCloud. You know, I've talked to companies that said, I could have built a supercomputer — here's how long it would take — but I could spin it up in minutes, and you know what I need. Is that a possibility for something of yours? I understand, maybe not the super high performance, but where does it fit in? >> Sure, yeah. I mean, certainly for a company that has no experience or no infrastructure — but we have invested a huge amount in our data center, and we have a ton of power and cooling and floor space. We have already made that investment, so trying to outsource that to the cloud doesn't make sense. There are definitely things cloud is great for. We are using GovCloud for things like prototyping, or when someone wants a server of some architecture that we don't have — the ability to just spin it up. You know, if we had to go and buy it, it would take six months, because, you know, we are the government. But being able to just spin that stuff up is really great for what we do. We use it for open source, for build and test. We use it at conferences when we want to run a tutorial and spin up a bunch of instances of, you know, Linux and run a tutorial. But the biggest thing is, at the end of the day, our most important workloads are in a classified environment, and we don't have the ability to run those workloads in the cloud. And so to do it on the open side and not be able to leverage it on the closed side really takes away some of the value, because we really want to make the two environments look as similar as possible, leverage our staff and everything like that. So that's where cloud just doesn't quite fit in for us. >> You were talking about, you know, the speed of Sierra, and then also mentioning El Capitan, which is the next generation — your next, you know, unbelievably fast computer — to the extent of ten X the current speed, within the next four to five years. >> Right, that's the goal. >> I mean, what do some of those numbers look like? Because you put a pretty impressive array up there. >> Right. So Sierra is about one hundred twenty five petaflops, and the big Holy Grail for high performance computing is exascale, an exaflop of performance. And so, you know, El Capitan is targeted to be, you know, 1.2, maybe 1.5 exaflops or even more. Again, that's peak performance. It doesn't necessarily translate into what our applications can get out of the platform. But the reason — sometimes people ask, isn't it enough, isn't one hundred twenty five petaflops enough? But it's never enough, because any time we get another platform, people figure out how to do things with it that they've never done before. Either they're solving problems faster than they could, and so now they're able to explore a solution space much faster, or they want to look at — you know, these are simulations of three-dimensional space, and they want to be able to look at it at a more fine-grained level. So again, with every computer we get, we can either push a workload through ten times faster, or we can look at a simulation, you know, that's ten times more resolved than the one we could do before. >> So do this for me and for folks at home: take the work that you do and translate that to why that exponential increase in speed will make you better at what you do, in terms of decision making and processing of information. >> Right. So, yeah, the thing is, these nuclear weapons systems are very complicated. There's multiple physics, there are lots of different interactions going on, and we have to really understand them at the lowest level. One of the reasons that's so important now is that we're maintaining a stockpile that is well beyond the lifespan that it was designed for. You know, these nuclear weapons, some of them were built in the fifties, the sixties and seventies. They weren't designed to last this long, right? And so now they're sort of out of their design regime, and we really have to understand their behavior and their properties as they age. So it opens up a whole other area, you know, that we have to be able to explore, and some of that physics has never been explored before. So the problems get more challenging the farther we get away from the design basis of these weapons. But also we're really starting to do new things like AI and machine learning, things that weren't part of our workflow before. We're starting to incorporate machine learning in with simulation, again to help explore a very large problem space and be able to find interesting areas within a simulation to focus in on. And so that's a really exciting area. And that is also an area where, you know, GPUs and such have just exploded, you know, the performance levels that people are seeing on these machines. >> Well, we thank you for your work. It is critically important, as we all realize, and wonderfully fascinating at the same time. So thanks for the insights here and for your time. We appreciate that. >> All right, thanks. >> Thank you, Robin Goldstone, joining us. Back with more here on theCUBE — you're watching our coverage live from Boston of Red Hat Summit 2019.
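Goldstone's point that a tightly coupled simulation runs at the speed of its slowest server is easy to see in a small bulk-synchronous sketch. The example below uses mpi4py; the work function and its deliberate imbalance are invented purely for illustration. Because every step ends in a collective, the step time is set by the maximum per-rank time, not the average.

```python
# Minimal illustration of why tightly coupled jobs run at the speed of
# their slowest node: each step ends in a collective, so the effective
# step time is the *maximum* per-rank time, not the average.
# Run with something like: mpirun -n 4 python this_script.py  (requires mpi4py, numpy)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def compute_step(rank):
    """Stand-in for a physics kernel; rank 0 is artificially given more work."""
    n = 2_000_000 if rank == 0 else 1_000_000
    x = np.random.rand(n)
    return float(np.sum(np.sqrt(x)))

for step in range(5):
    t0 = MPI.Wtime()
    local = compute_step(rank)
    my_time = MPI.Wtime() - t0
    # These collectives force every rank to wait for the slowest one.
    total = comm.allreduce(local, op=MPI.SUM)
    slowest = comm.allreduce(my_time, op=MPI.MAX)
    if rank == 0:
        print(f"step {step}: slowest rank took {slowest:.3f} s (sum = {total:.1f})")
```

Scale this picture up to thousands of nodes per job and the interview's emphasis on low-latency interconnects and on every piece of hardware "performing as it should" follows directly.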
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Stu Miniman | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Robin Goldstone | PERSON | 0.99+ |
Robin | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
ten times | QUANTITY | 0.99+ |
Cold War | EVENT | 0.99+ |
six months | QUANTITY | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
HBC | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
Lennox | ORGANIZATION | 0.99+ |
El Capitan | TITLE | 0.99+ |
thirty thousand foot | QUANTITY | 0.98+ |
two environments | QUANTITY | 0.98+ |
one point | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
late nineteen nineties | DATE | 0.98+ |
Mexico | LOCATION | 0.98+ |
one hundred | QUANTITY | 0.98+ |
Harrier | PERSON | 0.98+ |
five years | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
four | QUANTITY | 0.97+ |
first time | QUANTITY | 0.97+ |
Cray | ORGANIZATION | 0.97+ |
Red Hat | TITLE | 0.97+ |
Boston | LOCATION | 0.96+ |
early fifties | DATE | 0.96+ |
red hat | TITLE | 0.96+ |
twenty nineteen | QUANTITY | 0.96+ |
Sierra | LOCATION | 0.96+ |
first | QUANTITY | 0.95+ |
this morning | DATE | 0.93+ |
ten | QUANTITY | 0.93+ |
six | QUANTITY | 0.92+ |
one hundred twenty five flops | QUANTITY | 0.9+ |
sixties | DATE | 0.89+ |
one servers | QUANTITY | 0.88+ |
Itanium | ORGANIZATION | 0.87+ |
intel | ORGANIZATION | 0.86+ |
Of of Sierra | ORGANIZATION | 0.86+ |
First | QUANTITY | 0.83+ |
five | QUANTITY | 0.82+ |
Sierra | ORGANIZATION | 0.8+ |
Red Hat | ORGANIZATION | 0.8+ |
Red Hat Summit 2019 | EVENT | 0.79+ |
Roland | ORGANIZATION | 0.79+ |
Lawrence Livermore National Laboratory | ORGANIZATION | 0.79+ |
Red Hat Summit twenty | EVENT | 0.79+ |
two | QUANTITY | 0.78+ |
Keystone States | LOCATION | 0.78+ |
seventies | DATE | 0.78+ |
Red | ORGANIZATION | 0.76+ |
twenty five five | QUANTITY | 0.73+ |
early two thousand | DATE | 0.71+ |
Lawrence Livermore | LOCATION | 0.71+ |
Sierra | COMMERCIAL_ITEM | 0.69+ |
Erm | PERSON | 0.66+ |
Mohr | PERSON | 0.65+ |
supercomputer | QUANTITY | 0.64+ |
one hundred twenty five | QUANTITY | 0.62+ |
Path | OTHER | 0.59+ |
Band | OTHER | 0.58+ |
National Laboratory | ORGANIZATION | 0.55+ |
band | OTHER | 0.55+ |
Gove Cloud | TITLE | 0.54+ |
nineteen | QUANTITY | 0.53+ |
fifties | DATE | 0.52+ |
number | QUANTITY | 0.52+ |
Beta Wolf | OTHER | 0.52+ |
dimensional | QUANTITY | 0.49+ |
sixty | ORGANIZATION | 0.47+ |
six | COMMERCIAL_ITEM | 0.45+ |
American | PERSON | 0.43+ |
Sierra | TITLE | 0.42+ |
theCUBE Previews Supercomputing 22
(inspirational music) >> The history of high performance computing is unique and storied. You know, it's generally accepted that the first true supercomputer was shipped in the mid 1960s by Controlled Data Corporations, CDC, designed by an engineering team led by Seymour Cray, the father of Supercomputing. He left CDC in the 70's to start his own company, of course, carrying his own name. Now that company Cray, became the market leader in the 70's and the 80's, and then the decade of the 80's saw attempts to bring new designs, such as massively parallel systems, to reach new heights of performance and efficiency. Supercomputing design was one of the most challenging fields, and a number of really brilliant engineers became kind of quasi-famous in their little industry. In addition to Cray himself, Steve Chen, who worked for Cray, then went out to start his own companies. Danny Hillis, of Thinking Machines. Steve Frank of Kendall Square Research. Steve Wallach tried to build a mini supercomputer at Convex. These new entrants, they all failed, for the most part because the market at the time just wasn't really large enough and the economics of these systems really weren't that attractive. Now, the late 80's and the 90's saw big Japanese companies like NEC and Fujitsu entering the fray and governments around the world began to invest heavily in these systems to solve societal problems and make their nations more competitive. And as we entered the 21st century, we saw the coming of petascale computing, with China actually cracking the top 100 list of high performance computing. And today, we're now entering the exascale era, with systems that can complete a billion, billion calculations per second, or 10 to the 18th power. Astounding. And today, the high performance computing market generates north of $30 billion annually and is growing in the high single digits. Supercomputers solve the world's hardest problems in things like simulation, life sciences, weather, energy exploration, aerospace, astronomy, automotive industries, and many other high value examples. And supercomputers are expensive. You know, the highest performing supercomputers used to cost tens of millions of dollars, maybe $30 million. And we've seen that steadily rise to over $200 million. And today we're even seeing systems that cost more than half a billion dollars, even into the low billions when you include all the surrounding data center infrastructure and cooling required. The US, China, Japan, and EU countries, as well as the UK, are all investing heavily to keep their countries competitive, and no price seems to be too high. Now, there are five mega trends going on in HPC today, in addition to this massive rising cost that we just talked about. One, systems are becoming more distributed and less monolithic. The second is the power of these systems is increasing dramatically, both in terms of processor performance and energy consumption. The x86 today dominates processor shipments, it's going to probably continue to do so. Power has some presence, but ARM is growing very rapidly. Nvidia with GPUs is becoming a major player with AI coming in, we'll talk about that in a minute. And both the EU and China are developing their own processors. We're seeing massive densities with hundreds of thousands of cores that are being liquid-cooled with novel phase change technology. 
The third big trend is AI, which of course is still in the early stages, but it's being combined with ever larger and massive, massive data sets to attack new problems and accelerate research in dozens of industries. Now, the fourth big trend: HPC in the cloud reached critical mass at the end of the last decade, and all of the major hyperscalers are providing HPC-as-a-service capability. Now finally, quantum computing is often talked about and predicted to become more stable by the end of the decade and crack new dimensions in computing. The EU has even announced a hybrid QC, with the goal of having a stable system in the second half of this decade, most likely around 2027, 2028. Welcome to theCUBE's preview of SC22, the big supercomputing show which takes place the week of November 13th in Dallas. theCUBE is going to be there. Dave Nicholson will be one of the co-hosts and joins me now to talk about trends in HPC and what to look for at the show. Dave, welcome, good to see you. >> Hey, good to see you too, Dave. >> Oh, you heard my narrative up front, Dave. You've got a technical background, CTO chops — what did I miss? What are the major trends that you're seeing? >> I don't think you really- You didn't miss anything. I think it's just a question of double-clicking on some of the things that you brought up. You know, if you look back historically, supercomputing was sort of relegated to things like weather prediction and nuclear weapons modeling. And these systems would live in places like Lawrence Livermore Labs or Los Alamos. Today, that requirement for cutting edge, leading edge, highest performing supercompute technology is bleeding into the enterprise, driven by AI and ML, artificial intelligence and machine learning. So when we think about the conversations we're going to have and the coverage we're going to do of the SC22 event, a lot of it is going to be looking under the covers and seeing what kind of architectural things contribute to these capabilities moving forward, and asking a whole bunch of questions. >> Yeah, so there's this sort of theory that the world is moving toward this connectivity beyond compute-centricity to connectivity-centric. We've talked about that, you and I, in the past. Is that a factor in the HPC world? How is it impacting, you know, supercomputing design? >> Well, so if you're designing an island that is, you know, the tip of the spear, that doesn't have to offer any level of interoperability or compatibility with anything else in the compute world, then connectivity is important simply from a speeds and feeds perspective — you know, lowest latency connectivity between nodes and things like that. But as we sort of democratize supercomputing, to a degree, as it moves from solely the purview of academia into truly ubiquitous architecture leveraged by enterprises, you start asking the question, "Hey, wouldn't it be kind of cool if we could have this hooked up into our Ethernet networks?" And so that's a whole interesting subject to explore, because with things like RDMA over converged Ethernet, you now have the ability to have these supercomputing capabilities directly accessible by enterprise computing. So that level of detail — opening up the box and looking at the NICs, or the storage cards that are in the box — is actually critically important. And as an old-school hardware knuckle-dragger myself, I am super excited to see what the cutting edge holds right now.
They got, you know, parallel clustered systems, AI, storage, you know, servers, system software, application software, security. I mean, wireless HPC is no longer this niche. It really touches virtually every industry, and most industries anyway, and is really driving new advancements in society and research, solving some of the world's hardest problems. So what are some of the topics that you want to cover at SC22? >> Well, I kind of, I touched on some of them. I really want to ask people questions about this idea of HPC moving from just academia into the enterprise. And the question of, does that mean that there are architectural concerns that people have that might not be the same as the concerns that someone in academia or in a lab environment would have? And by the way, just like, little historical context, I can't help it. I just went through the upgrade from iPhone 12 to iPhone 14. This has got one terabyte of storage in it. One terabyte of storage. In 1997, I helped build a one terabyte NAS system that a government defense contractor purchased for almost $2 million. $2 million! This was, I don't even know, it was $9.99 a month extra on my cell phone bill. We had a team of seven people who were going to manage that one terabyte of storage. So, similarly, when we talk about just where are we from a supercompute resource perspective, if you consider it historically, it's absolutely insane. I'm going to be asking people about, of course, what's going on today, but also the near future. You know, what can we expect? What is the sort of singularity that needs to occur where natural language processing across all of the world's languages exists in a perfect way? You know, do we have the compute power now? What's the interface between software and hardware? But really, this is going to be an opportunity that is a little bit unique in terms of the things that we typically cover, because this is a lot about cracking open the box, the server box, and looking at what's inside and carefully considering all of the components. >> You know, Dave, I'm looking at the exhibitor floor. It's like, everybody is here. NASA, Microsoft, IBM, Dell, Intel, HPE, AWS, all the hyperscale guys, Weka IO, Pure Storage, companies I've never heard of. It's just, hundreds and hundreds of exhibitors, Nvidia, Oracle, Penguin Solutions, I mean, just on and on and on. Google, of course, has a presence there, theCUBE has a major presence. We got a 20 x 20 booth. So, it's really, as I say, to your point, HPC is going mainstream. You know, I think a lot of times, we think of HPC supercomputing as this just sort of, off in the eclectic, far off corner, but it really, when you think about big data, when you think about AI, a lot of the advancements that occur in HPC will trickle through and go mainstream in commercial environments. And I suspect that's why there are so many companies here that are really relevant to the commercial market as well. >> Yeah, this is like the Formula 1 of computing. So if you're a Motorsports nerd, you know that F1 is the pinnacle of the sport. SC22, this is where everybody wants to be. Another little historical reference that comes to mind, there was a time in, I think, the early 2000's when Unisys partnered with Intel and Microsoft to come up with, I think it was the ES7000, which was supposed to be the mainframe, the sort of Intel mainframe. It was an early attempt to use... And I don't say this in a derogatory way, commodity resources to create something really, really powerful. 
Here we are 20 years later, and we are absolutely smack in the middle of that. You mentioned the focus on x86 architecture, but all of the other components that the silicon manufacturers bring to bear, companies like Broadcom, Nvidia, et al, they're all contributing components to this mix in addition to, of course, the microprocessor folks like AMD and Intel and others. So yeah, this is big-time nerd fest. Lots of academics will still be there. The supercomputing.org, this loose affiliation that's been running these SC events for years. They have a major focus, major hooks into academia. They're bringing in legit computer scientists to this event. This is all cutting edge stuff. >> Yeah. So like you said, it's going to be kind of, a lot of techies there, very technical computing, of course, audience. At the same time, we expect that there's going to be a fair amount, as they say, of crossover. And so, I'm excited to see what the coverage looks like. Yourself, John Furrier, Savannah, I think even Paul Gillin is going to attend the show, because I believe we're going to be there three days. So, you know, we're doing a lot of editorial. Dell is an anchor sponsor, so we really appreciate them providing funding so we can have this community event and bring people on. So, if you are interested- >> Dave, Dave, I just have- Just something on that point. I think that's indicative of where this world is moving when you have Dell so directly involved in something like this, it's an indication that this is moving out of just the realm of academia and moving in the direction of enterprise. Because as we know, they tend to ruthlessly drive down the cost of things. And so I think that's an interesting indication right there. >> Yeah, as do the cloud guys. So again, this is mainstream. So if you're interested, if you got something interesting to talk about, if you have market research, you're an analyst, you're an influencer in this community, you've got technical chops, maybe you've got an interesting startup, you can contact David, david.nicholson@siliconangle.com. John Furrier is john@siliconangle.com. david.vellante@siliconangle.com. I'd be happy to listen to your pitch and see if we can fit you onto the program. So, really excited. It's the week of November 13th. I think November 13th is a Sunday, so I believe David will be broadcasting Tuesday, Wednesday, Thursday. Really excited. Give you the last word here, Dave. >> No, I just, I'm not embarrassed to admit that I'm really, really excited about this. It's cutting edge stuff and I'm really going to be exploring this question of where does it fit in the world of AI and ML? I think that's really going to be the center of what I'm really seeking to understand when I'm there. >> All right, Dave Nicholson. Thanks for your time. theCUBE at SC22. Don't miss it. Go to thecube.net, go to siliconangle.com for all the news. This is Dave Vellante for theCUBE and for Dave Nicholson. Thanks for watching. And we'll see you in Dallas. (inquisitive music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Danny Hillis | PERSON | 0.99+ |
Steve Chen | PERSON | 0.99+ |
NEC | ORGANIZATION | 0.99+ |
Fujitsu | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Steve Wallach | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
NASA | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Steve Frank | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Seymour Cray | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Unisys | ORGANIZATION | 0.99+ |
1997 | DATE | 0.99+ |
Savannah | PERSON | 0.99+ |
Dallas | LOCATION | 0.99+ |
EU | ORGANIZATION | 0.99+ |
Controlled Data Corporations | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Penguin Solutions | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Tuesday | DATE | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
21st century | DATE | 0.99+ |
iPhone 12 | COMMERCIAL_ITEM | 0.99+ |
10 | QUANTITY | 0.99+ |
Cray | PERSON | 0.99+ |
one terabyte | QUANTITY | 0.99+ |
CDC | ORGANIZATION | 0.99+ |
thecube.net | OTHER | 0.99+ |
Lawrence Livermore Labs | ORGANIZATION | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
Kendall Square Research | ORGANIZATION | 0.99+ |
iPhone 14 | COMMERCIAL_ITEM | 0.99+ |
john@siliconangle.com | OTHER | 0.99+ |
$2 million | QUANTITY | 0.99+ |
November 13th | DATE | 0.99+ |
first | QUANTITY | 0.99+ |
over $200 million | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
more than half a billion dollars | QUANTITY | 0.99+ |
20 | QUANTITY | 0.99+ |
seven people | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
mid 1960s | DATE | 0.99+ |
three days | QUANTITY | 0.99+ |
Convex | ORGANIZATION | 0.99+ |
70's | DATE | 0.99+ |
SC22 | EVENT | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
late 80's | DATE | 0.98+ |
80's | DATE | 0.98+ |
ES7000 | COMMERCIAL_ITEM | 0.98+ |
today | DATE | 0.98+ |
almost $2 million | QUANTITY | 0.98+ |
second | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
20 years later | DATE | 0.98+ |
tens of millions of dollars | QUANTITY | 0.98+ |
Sunday | DATE | 0.98+ |
Japanese | OTHER | 0.98+ |
90's | DATE | 0.97+ |
Making AI Real – A practitioner’s view | Exascale Day
>> Narrator: From around the globe, it's theCUBE with digital coverage of Exascale day, made possible by Hewlett Packard Enterprise. >> Hey, welcome back Jeff Frick here with the cube come due from our Palo Alto studios, for their ongoing coverage in the celebration of Exascale day 10 to the 18th on October 18th, 10 with 18 zeros, it's all about big powerful giant computing and computing resources and computing power. And we're excited to invite back our next guest she's been on before. She's Dr. Arti Garg, head of advanced AI solutions and technologies for HPE. Arti great to see you again. >> Great to see you. >> Absolutely. So let's jump into before we get into Exascale day I was just looking at your LinkedIn profile. It's such a very interesting career. You've done time at Lawrence Livermore, You've done time in the federal government, You've done time at GE and industry, I just love if you can share a little bit of your perspective going from hardcore academia to, kind of some government positions, then into industry as a data scientist, and now with originally Cray and now HPE looking at it really from more of a vendor side. >> Yeah. So I think in some ways, I think I'm like a lot of people who've had the title of data scientists somewhere in their history where there's no single path, to really working in this industry. I come from a scientific background. I have a PhD in physics, So that's where I started working with large data sets. I think of myself as a data scientist before the term data scientist was a term. And I think it's an advantage, to be able to have seen this explosion of interest in leveraging data to gain insights, whether that be into the structure of the galaxy, which is what I used to look at, or whether that be into maybe new types of materials that could advance our ability to build lightweight cars or safety gear. It's allows you to take a perspective to not only understand what the technical challenges are, but what also the implementation challenges are, and why it can be hard to use data to solve problems. >> Well, I'd just love to get your, again your perspective cause you are into data, you chose that as your profession, and you probably run with a whole lot of people, that are also like-minded in terms of data. As an industry and as a society, we're trying to get people to do a better job of making database decisions and getting away from their gut and actually using data. I wonder if you can talk about the challenges of working with people who don't come from such an intense data background to get them to basically, I don't know if it's understand the value of more of a data kind decision making process or board just it's worth the effort, cause it's not easy to get the data and cleanse the data, and trust the data and get the right context, working with people that don't come from that background. And aren't so entrenched in that point of view, what surprises you? How do you help them? What can you share in terms of helping everybody get to be a more data centric decision maker? >> So I would actually rephrase the question a little bit Jeff, and say that actually I think people have always made data driven decisions. It's just that in the past we maybe had less data available to us or the quality of it was not as good. 
And so as a result, most organizations have organized themselves to make decisions and to run their processes based on a much smaller and more refined set of information than is currently available, given our ability to generate lots of data through software and sensors, our ability to store that data, and then our ability to run a lot of computing cycles and a lot of advanced math against that data, to learn things that maybe in the past took hundreds of years of experiments and scientists to understand. And before I jump into how you overcome that barrier, I'll use an example, because you mentioned I used to work in industry, at GE. One of the things I often joked about is the number of times I discovered Bernoulli's principle in data coming off of GE jet engines. You could do that overnight, processing these large data sets, but of course historically it took hundreds of years to really understand those physical principles. So when it comes to how we bridge the gap between people who are adept at processing large amounts of data and running algorithms to pull insights out, I think it's both sides. I think it's those of us who come from the technical background really understanding the way decisions are currently made, the way processes and operations currently work at an organization, and understanding why those things are the way they are; maybe there are security or compliance or accountability concerns that a new algorithm can't just replace. So it's on our end to really try to understand, and make sure that whatever new approaches we're bringing address those concerns. And I think for folks who aren't necessarily coming from a large-data-set, analytical background (and when I say analytical, I mean in the data science sense, not in the sense of thinking about things in an abstract way), it's to really recognize that these are just tools that can enhance what they're doing, and they don't necessarily need to be frightening. Because the people who have been, say, operating electric grids for a long time, or fixing aircraft engines, have a lot of expertise and a lot of understanding, and that's really important to making any kind of AI-driven solution work. >> That's great insight, but I do think one thing that's changed, because you come from a world where you had big data sets and so have a big-data point of view, is that a lot of decision makers didn't have that data before. So we won't go through all the up-and-to-the-right explosions of data, and obviously we're talking about Exascale Day, but for a lot of processes now, the amount of data they can bring to bear so dwarfs what they had in the past that before they even consider how to use it, they still have to contextualize it, they have to manage it, they have to organize it, and there are data silos. So there's all this kind of nasty process stuff that's in the way; some would argue it's been a real problem with the promise of BI and decision support tools. So as you look at this new stuff and these new data sets, what are some of the people and process challenges, beyond the obvious technical challenges we can think about?
>> So I think you've really hit on something I talk about sometimes: the kind of data deluge we experience these days, and the notion of feeling like you're drowning in information but lacking any kind of insight. One of the things that I like to do is to step back from the data questions, the infrastructure questions, all of these technical questions that can seem very challenging to navigate, and first ask ourselves: what problems am I trying to solve? It's really no different from any other type of decision you might make in an organization. What are my biggest pain points? What keeps me up at night? What would transform the way my business works? Those are the problems worth solving. And then the next question becomes: if I had more data, if I had a better understanding of something about my business, or about my customers, or about the world in which we all operate, would that really move the needle for me? If the answer is yes, then that starts to give you a picture of what you might be able to do with AI, and it starts to tell you which of those data management challenges, whether it be cleaning the data, organizing the data, or building models on the data, are worth solving. Because you're right, those are going to be time-intensive, labor-intensive, highly iterative efforts. But if you know why you're doing it, then you will have a better understanding of why it's worth the effort, and also which shortcuts you can take and which ones you can't. Often, in order to see the end state, you might want to do a really quick experiment or prototype, so you want to know what matters and what doesn't, at least enough to answer: is this going to work at all? >> So you're not buying the age-old adage that you just throw a bunch of data in a data lake and the answers will just spring up, just come right back out of the wall. I mean, you bring up such a good point; it's all about asking the right questions, and thinking about asking questions. So again, when you talk to people about helping them think about the questions, because then you've got to shape the data to the question, and then you've got to start to build the algorithm to answer that question: how should people think when they're actually building and training algorithms? What are some of the typical pitfalls people fall into if they haven't really thought about it before, and how should they frame this process? Because it's not simple, it's not easy, and you really don't know that you have the answer until you run multiple iterations and compare them against some other type of reference. >> Well, one of the things that I like to think about, just so that you're thinking about all the challenges you're going to face up front (you don't necessarily need to solve all of these problems at the outset, but it's important to identify them), is AI solutions, as they get deployed, being part of a kind of workflow, and the workflow has multiple stages associated with it. The first stage is generating your data, then starting to prepare and explore your data, and then building models on your data. But where we don't always think about it is the next two phases: deploying whatever model or AI solution you've developed, and what that will really take, especially in the ecosystem where it's going to live.
Is it going to live in a secure and compliant ecosystem? Is it actually going to live in an outdoor ecosystem? We're seeing more applications on the edge. And then finally, who's going to use it, and how are they going to drive value from it? Because it could be that your AI solution doesn't work simply because you don't have the right dashboard that highlights and visualizes the data for the decision maker who will benefit from it. So I think it's important to think through all of these stages up front, and think through what some of the biggest challenges you might encounter are, so that you're prepared when you meet them, you can refine and iterate along the way, and you can even tweak the question you're asking up front. >> That's great. So I want to get your take on Exascale Day, which we're celebrating on something very specific, 10/18. Share your thoughts on Exascale Day specifically, but more generally, just in terms of being a data scientist and suddenly having all this massive compute power at your disposal. You've been around for a while, so you've seen the development of the cloud, these huge data sets, and really the ability to put so much compute horsepower against the problems, as the cost of networking and storage and compute just asymptotically approaches zero. As a data scientist you've got to be pretty excited about new mysteries, new adventures, new places to go that you just couldn't get to ten years ago, five years ago, fifteen years ago. >> Yeah, I think only time will tell exactly what we'll be able to unlock with these new massive computing capabilities. But a couple of things I'm very excited about: in addition to this explosion of very large investments in Exascale supercomputers, we're also seeing investment in other types of scientific instruments, and when I say scientific it's not just academic research, it's driving pharmaceutical drug discovery. These are what they call light sources, which shoot x-rays at molecules and allow you to really understand the structure of the molecules. Historically, you would take your molecule to one of these light sources, shoot your x-rays at it, and generate just masses and masses of data, terabytes of data with each shot. Being able to then understand what you were looking at was a long process of getting computing time and analyzing the data. What Exascale allows you to do is be on the precipice of doing that, if not in real time, then much closer to real time. And I don't really know what happens if, instead of coming up with a few molecules, taking them, studying them, and then saying maybe I need to do something different, I can do it while I'm still running my instrument. I think that's very exciting from the perspective of someone who's got a scientific background and likes using large data sets.
There's just a lot of possibility in what Exascale computing allows us to do, from the standpoint that I don't have to wait to get results, and I can either simulate much bigger things, say galaxies, and really compare that to my data (galaxies or universes, if you're an astrophysicist), or I can simulate much smaller, finer details of a hypothetical molecule and use that to predict what might be possible from a materials or drug perspective, just to name two applications that I think Exascale could really drive. >> That's really great feedback, just shortening that compute loop. We had an interview earlier where someone was talking about when the biggest workload you had to worry about was the end-of-month financial run, and wouldn't it be nice if that were still the biggest job we had to worry about? But I think we saw some of this in animation, in the movie business, where for the rendering, whether it's a full animation movie or just something with heavy-duty effects shots, when you can get those dailies back to the artist, as you said, while they're still working, or closer to when they're working, versus having this huge compute delay, it just changes the workflow dramatically, and the pace of change and the pace of output, because you're not context-switching as much and you can really get back into it. That's a super point. I want to shift gears a little bit and talk about explainable AI. So this is a concept that a lot of people hopefully are familiar with. AI: you build the algorithm, it's in a box, it runs, and it kicks out an answer. And one of the things people talk about is that we should be able to go in and pull that algorithm apart to know why it came out with the answer that it did. To me this sounds really, really hard, because it's smart people like you that are writing the algorithms, the inputs and the data that feed that thing are super complex, the math behind it is very complex, and we know that the AI trains and can change over time; as you train the algorithm it gets more data and adjusts itself. So is explainable AI even possible? Is it possible to some degree? Because I do think it's important (and my next question is going to be about ethics) to know why something came out. And the other piece that becomes so much more important is that we use that output not only to drive a human-based decision that needs some more information, but increasingly to move it over to automation. So now you really want to know: why did it do what it did? Explainable AI: share your thoughts. >> It's a great question, and it's obviously a question that's on a lot of people's minds these days. I'm actually going to revert back to what I said earlier, when I talked about Bernoulli's principle, and the fact that sometimes when you throw an algorithm at data, the first thing it finds is probably some known law of physics. So I think that really thinking about what we mean by explainable AI also requires us to think about what we mean by AI. These days AI is often used synonymously with deep learning, which is a particular type of algorithm that is not very analytical at its core. What I mean by that is that other types of statistical machine learning models have some underlying theory of the population of data that you're studying, whereas deep learning doesn't; it kind of just learns whatever pattern is sitting in front of it.
And so there is a sense in which, if you look at other types of algorithms, they are inherently explainable, because you're choosing your algorithm based on what you think is the ground truth about the population you're studying. Whether we're going to get to explainable deep learning is, I think, more challenging, because you're always going to be in a position where deep learning is designed to be as flexible as possible; it sort of throws more math at the problem, because there may be things that your simpler model doesn't account for. However, deep learning could be part of an explainable AI solution if, for example, it helps you identify what the important so-called features are, the important aspects of your data. So I don't know; it depends on what you mean by AI. But are you ever going to get to the point where you don't need humans interpreting outputs and making some set of judgments about what a set of computer algorithms processing data think? I don't want to say I know what's going to happen fifty years from now, but I think it will take a little while to get to the point where you don't have to apply some subject matter understanding and some human judgment to what an algorithm is putting out. >> It's really interesting. We had Dr. Robert Gates on a few years ago at another show, and he talked about how the only guns in the U.S. military (if I'm getting this right) that are automatic, that will go based on what the computer tells them to do and start shooting, are on the Korean border. But short of that, there's always a person involved before anybody hits a button. Which begs a question, because we've seen this on the big data curve, I think Gartner has talked about it, as we move up from descriptive analytics to diagnostic analytics, predictive, then prescriptive, and then hopefully autonomous. So you're saying we're still a little ways out, and that last little bump is going to be tough to overcome to get to true autonomy. >> I think so, and it's going to be very application dependent as well. It's an interesting example to use the DMZ, because that is obviously a very mission-critical example. But in general, I think you'll see autonomy (you already do see autonomy) in certain places where I would say the stakes are lower. If I have some kind of recommendation engine that suggests, if you looked at this sweater, maybe you'd like that one, the risk of getting that wrong, and so of fully automating it, is a little bit lower, because the risk is just that you don't buy the sweater and I lose a little bit of revenue as a retailer. But the risk of whether I make that turn, because I'm in an autonomous vehicle, is much higher. So I think you will see the progression up that curve being highly dependent on what's at stake with different degrees of automation. That being said, you will also see, in certain places where it's either really expensive or humans aren't doing a great job, some mission-critical automation; those will be the places where you see it. And actually I think that's one of the reasons why you see a lot more autonomy in the agriculture space than you do in the passenger vehicle space: there's a lot at stake, and it's very difficult for human beings to drive large combines.
>> Plus they have a controlled environment. I've interviewed Caterpillar; they're doing a ton of stuff with autonomy because they control the field, or the mine, where those things are operating, and it's actually fascinating how far they've come with autonomy. But let me switch to a different industry that I know is closer to your heart, from looking at some other interviews, and talk about diagnosing disease. If we take something specific like reviewing x-rays, which also brings in computer vision and computer vision algorithms, the computer can see things faster and do a lot more comparisons than a human doctor can, and hopefully this whole signal-to-noise conversation means it elevates the signal for the doctor to review and suppresses the noise that's really not worth their time. It can also review a lot of literature, and hopefully bring a broader perspective of potential diagnoses within a set of symptoms. You said before that both your folks are physicians, and there's a certain kind of magic, a nuance, almost a childlike exploration, that you try to get out of the algorithm, if you will, to think outside the box. I wonder if you can share that synergy between using computers and AI and machine learning to do really arduous, nasty things, like going through lots and lots and lots of x-rays, and the doctor, who's got a whole different set of experience, a whole different kind of empathy, a whole different type of relationship with that patient than just a bunch of pictures of their heart or their lungs. >> I think this goes back to the question of whether AI is for decision support versus automation. What AI can do, and what we're pretty good at these days with computer vision, is picking up on subtle patterns, especially if you have a very large data set. So if I can train on lots of pictures of lungs, it's a lot easier for me to identify the pictures that somehow are not like the other ones. And that can be helpful. But then to really interpret what you're seeing and understand it (is it actually a bad-quality image? is it some kind of medical issue? and what is the medical issue?), that's where bringing in a lot of different types of knowledge and a lot of different pieces of information comes in, and right now I think humans are a little bit better at that. Some of that's because I don't think we have great ways to train on sparse data sets, I guess. And the second part is that a human being might be forty years of training a model, fifty years of training a model, as opposed to six months or so with sparse information. That's another thing: human beings have their lived experience, and the data they bring to bear on any type of prediction or classification is actually more than just what they saw in their medical training. It might be the people they've met, the places they've lived, what have you. And that broader set of learning, how things that might not seem related might actually be related to your understanding of what you're looking at, is where I think we've still got a ways to go from an artificial intelligence perspective. >> But it is Exascale Day, and we all know about the compounding exponential curves on the computing side.
But let's shift gears a little bit. I know you're interested in emerging technology to support this effort, and there's so much going on in terms of the atomization of compute, storage and networking, being able to break it down into smaller and smaller pieces so that you can really scale the amount of horsepower you apply to a problem, to very big or to very small. Obviously the stuff that you work on is more big than small, GPUs and a lot of activity there. So I wonder if you could share some of the emerging technologies that you're excited about, to bring more tools to the task. >> One of the areas I personally spend a lot of my time exploring is, and I guess this word gets used a lot, the Cambrian explosion of new AI accelerators: new types of chips that are really designed for different types of AI workloads. In a way we're going back and looking at these large systems, but then exploring each component on them and trying to really optimize it, or understand how that component contributes to the overall performance of the whole. There are probably close to a hundred active vendors in the space of developing new processors and new types of computer chips, and I think one of the things that points to is that we're moving in the direction of infrastructure heterogeneity in general. It used to be that when you built a system, you probably had one type of processor, and you probably had a pretty uniform fabric across your system; with storage, I think, we started to get tiering a little bit earlier. But now, as the workloads running at large scale become more complicated (maybe I'm doing some simulation, then I'm training some kind of AI model, and then I'm running inference on some other output of the simulation), I need the ability to do a lot of different things and do them at a very advanced level, which means I need very specialized technology to do it. And I think it's an exciting time. We're going to test, and we're going to break, a lot of things. I probably shouldn't say that in this interview, but I'm hopeful we're going to break some stuff. We're going to push all these systems to the limit and find out where we actually need to push a little harder. One of the areas where I think we're going to see that is moving data: moving data off of scientific instruments, into computing, into memory, into a lot of different places. I'm really excited to see how it plays out, what you can do, and where the limits are of what you can do with the new systems. >> Arti, I could talk to you all day. I love the experience and the perspective, because you've been doing this for a long time. So I'm going to give you the final word before we sign off, and really bring it back to a more human thing, which is ethics. One of the conversations we hear all the time is that if you're going to do something, if you're going to put together a project, you justify that project, and then you go and collect the data and run that algorithm and do that project.
That's great, but there's an inherent problem with data collection that may later be used for something else down the road that maybe you don't even anticipate. So I just wonder if you can share a top-level ethical take on how data scientists specifically, and then ultimately business practitioners and other people who don't carry that title, need to be thinking about ethics and not just forget about it. I had a great interview with Paul Doherty: everybody's data is not just their data, it represents a person; it's a representation of what they do and how they live. So when you think about entering into a project and getting started, what do you think about in terms of the ethical considerations, and how should people be careful that they don't go places they probably shouldn't go? >> I think that's a great question without a short answer. I honestly don't know that we have great solutions right now, but I think the best we can do is take a very multifaceted, and also vigilant, approach to it. When you're collecting data (and we should remember that a lot of the data that gets used isn't necessarily collected for the purpose it's being used for, because we might be looking at old medical records, or old transactional records of any kind, whether from a government or a business), as you start to collect data or build solutions, try to think through who all the people are who might use it, and what the possible ways are in which it could be misused. I also encourage people to think backwards: what were the biases in place when the data were collected? You see this a lot in the criminal justice space, where the historical records reflect historical biases in our systems. There are limits to how much you can correct for previous biases, but there are some ways to do it, and you can't do it if you're not thinking about it. So at the outset of developing solutions, that's important. But equally important is putting in the systems to maintain the vigilance around it. One, don't move to autonomy before you know what potential new errors or new biases you might introduce into the world. And also have systems in place to constantly ask these questions: am I perpetuating things I don't want to perpetuate? How can I correct for them? And be willing to scrap your system and start from scratch if you need to. >> Well, Arti, thank you. Thank you so much for your time. Like I said, I could talk to you for days and days. I love the perspective and the insight and the thoughtfulness. So thank you for sharing your thoughts as we celebrate Exascale Day. >> Thank you for having me. >> My pleasure, thank you. All right, she's Arti, I'm Jeff, it's Exascale Day. We're covering it on theCUBE. Thanks for watching. We'll see you next time. (bright upbeat music)
Keynote | Red Hat Summit 2019 | DAY 2 Morning
>> Ladies and gentlemen, please welcome Red Hat President of Products and Technologies, Paul Cormier. >> Welcome back to Boston. Welcome back after a great night last night, with our opening with Jim, and talking with Satya and Ginni, and especially with our customers. It was so great last night to hear our customers, how they set their goals and how they met their goals, all possible certainly with a little help from Red Hat, but all possible because of open source. And sometimes we all have to do that: set goals. I'm going to talk this morning about what we as a company, and with the community, have set as our goals along the way. Sometimes you have to set audacious goals; it can really change the perception of what's even possible. If I look back, I can't think of anything, at least in my lifetime, that's more important, or such a big goal, as John F. Kennedy setting the goal for the American people to go to the moon. Believe it or not, I was really only three years old when he said that, honestly. But as I grew up, I remember the passion around the whole country and the energy to make that goal a reality. So let's compare and contrast a little bit where we were technically at that time. To win the space race, even to get into the space race, there were some really big technical challenges along the way. Not that long ago, mathematical calculations were being shifted from brilliant people, who we trusted and could look in the eye, to a computer that was programmed, with the results mostly printed out. This was a time when the potential of computers was just coming onto the scene, and at the time, the space race revolved around an IBM 7090, one of the first transistor-based computers. It could perform mathematical calculations faster than even the most brilliant mathematicians. But just like today, this also came with many, many challenges. And while we had the goal, and the beginnings of the technology to accomplish it, we needed people so dedicated to that goal that they would risk everything. And while it may seem commonplace to us today to put our trust in machines, that wasn't the case. Back in 1969, the seven individuals that made up the Mercury space crew were putting their lives in the hands of those first computers. But on Sunday, July 20th, 1969, these things all came together, the goal, the technology and the team, and a human being walked on the moon. If this was possible fifty years ago, just think about what can be accomplished today, where technology is part of our everyday lives. And with technology advancing at an ever increasing rate, it's hard to comprehend the potential sitting right at our fingertips every single day. Everything you know about computing is continuing to change. Let's look back a bit at computing. In 1969, the IBM 7090 could process one hundred thousand floating point operations per second. Today's Xbox One, sitting in most of your living rooms, can process six trillion FLOPS. That's sixty million times more powerful than the original 7090 that helped put a human being on the moon.
And at the same time that computing has drastically changed, so have the boundaries of where that computing sits and where it lives. At the time of the Apollo launch, the computing power was often a single machine. Then it moved to a single data center, and over time that grew to multiple data centers. Then, with cloud, it extended all the way out to data centers that you didn't even own or have control of. But computing now reaches far beyond any data center. This is also referred to as the edge; you hear a lot about that. Apollo's version of the edge was the guidance system, a two-megahertz computer that weighed seventy pounds, embedded in the capsule. Today, the edge is right here on my wrist. This Apple Watch weighs just a couple of ounces, and it's ten thousand times more powerful than that 7090 back in 1969. But even more impactful than the computing advances, combined with the pervasive availability of it, are the changes in who and what controls that computing, similar to the social changes that have happened along the way. Shifting from mathematicians to computers, we're now facing the same type of change with regard to operational control of our computing power. In its first forms, operational control was your team, within your control; in some cases, a single person managed everything. But as complexity grew, our teams expanded. Just like the computing boundaries, system integrators and public cloud providers have become an extension of our teams. But at the end of the day, it's still people that are making all the decisions. Going forward, with the progress of things like AI and software-defined everything, it's quite likely that machines will be managing machines, and in many cases that's already happening today. But while the technology at our fingertips today is so impressive, the pace of change and the complexity of the problems we aspire to solve are equally hard to comprehend, and they are all intertwined with one another, learning from each other, growing together faster and faster. We are tackling problems today on a global scale, with unthinkable complexity, beyond what any one single company or even one single country can solve alone. This is why open source is so important. This is why open source is so needed today in software, and even in the wider world, to solve other types of complex problems. And this is why open source has become the dominant development model driving technology direction today: bringing together the best innovation from every corner of the planet to fundamentally change how we solve problems. This approach, and access to that innovation, is what has enabled open source to tackle big challenges, like building a truly open hybrid cloud. But even today, it's really difficult to bridge the gap between the innovation that's available at all of our fingertips through open source development and the production-level capabilities that are needed to really deploy it in the enterprise and solve real-world business problems. Red Hat has been committed to open source from the very beginning, and to bringing it to solve enterprise-class problems, for the last seventeen-plus years.
But when we built that model to bring open source to the enterprise, we absolutely knew we couldn't do it halfway. To harness the innovation, we had to fully embrace the model. We made a decision very early on: give everything back. And we live by that every single day. We didn't do the things you hear so many do out there, where all of this is open core, or everything below the line is open and everything above the line is closed. We didn't do that. We gave everything back. Everything we learned in the process of becoming an enterprise-class technology company, we gave back to the community to make better and better software. This is how it works, and we've all seen the results of that, and it could only have been possible with an open source development model. We've been building on the foundation of open source's most successful project, Linux, and the architecture of the future, hybrid cloud, and bringing them to the enterprise. This is what made Red Hat the company that we are today. Along Red Hat's journey we also had to set goals, and many of them seemed insurmountable at the time, the first of which was making Linux the enterprise standard. And while this is so accepted today, let's take a look at what it took to get there. Our first launch into the enterprise was RHEL 2.1. Yes, I know, 2.1, but we knew we couldn't release a 1.0 product. We didn't want to allow any reason why any customer should look past RHEL as an option to solve their problems. Back then, we had to fight every single flavor of Unix in every single account. But we were lucky to have a few initial partners, and big ISV partners, that supported RHEL out of the gate. And while we had the determination, we knew we also had gaps in order to deliver on our priorities. In the early days of RHEL, I remember going to ask one of our engineers for a past RHEL build, because we were having a customer issue on an older release. And then I watched in horror as he rifled through his desk, through a mess of CDs, and magically came up and said, I found it, here it is, and told me not to worry, that he thought this was the right build. At that point I knew that, despite the promise of Linux, we had a lot of work ahead of us, not only to convince the world that Linux was secure, stable and enterprise ready, but also to make that a reality. But we did. And today this is our reality; it's all of our reality. From the enterprise data center standard to the fastest computers on the planet, Red Hat Enterprise Linux has continually risen to the challenge and has become the core foundation that many mission-critical customers run and bet their business on. And even bigger: today Linux is the foundation on which practically every single technology initiative is built. Linux is not only the standard to build on today, it's the standard for the innovation that builds around it, the innovation that's driving the future as well. We started our story with RHEL 2.1, and here we are today, seventeen years later, announcing RHEL 8, as we did last night, specifically designed for applications to run across the open hybrid cloud.
RHEL has become the best operating system from on-premise all the way out to the cloud, providing that common operating model and workload foundation on which to build hybrid applications. Let's take a look at how far we've come and see this in action. >> Please welcome Red Hat global director of developer experience, Burr Sutter, with Josh Boyer, Timothy Kramer, Lars Karlitski and Brent Midwood. >> All right, we have some amazing things to show you. In just a few short moments we actually have a lot of things to show you. Tim and Brent will be with us momentarily; they're working out a few things in the back, because a lot of this is going to be a live demonstration of some incredible capabilities. Now, you're going to see clear innovation inside the operating system, where we worked incredibly hard to make it vastly easier for you to manage many, many machines. I want you thinking about that as we go through this process. Also keep in mind that this is the basis, our core platform, for everything we do here at Red Hat, so it is an honor for me to be able to show it to you live on stage today. I recognize that many of you in the audience right now are hands-on systems administrators, systems architects and engineers, and we know that you're under ever-growing pressure to deliver needed infrastructure resources ever faster; that is a key element of what you're thinking about every day. Well, this has been a core theme in our design decisions behind Red Hat Enterprise Linux 8, an intelligent operating system which is making it fundamentally easier for you to manage machines at scale. So we hope what you're about to see next feels like a new superpower, and that Red Hat is your force multiplier. So first, let me introduce you to Lars. He's totally my Linux guru. >> I wouldn't call myself a guru, but I guess you could say that I want to bring Linux and enlightenment to more people. >> Okay, well, let's dive in and look at RHEL 8. >> Sure, let me log in. >> Wait a second, there's Windows. >> Yeah, we built the web console into RHEL. That means that for the first time you can log in from any device, including your phone or this standard Windows laptop. So I just go ahead and enter my standard Linux credentials here. >> Okay, so now you're putting your Linux password in over the web. >> Yeah, that might sound a bit scary at first, but of course we're using the latest security tech, TLS and CSP, and because it's the standard Linux auth behind it, you can use everything that you're used to, like SSH keys, OTP tokens and stuff like this. >> Okay, so now I see the console right here. I love the dashboard overview of the system, but what else can you tell us about this console? >> Right here you see the load of the system and some of its properties, but you can also dive into logs, everything that you're used to from the command line, or look at services. These are all the services I have running; I can start and stop them and enable them. >> OK, I love that feature right there. So what about if I have to add a whole new application to this environment? >> Good that you're bringing that up. We built a new feature into RHEL called Application Streams, which is a way for you to install different supported versions of your app stack. I'll show you with yum on the command line.
But since Windows doesn't have a proper terminal, I'll just do it in the terminal that we built into the web console. Since it's in the browser, I can even make this a bit bigger. For example, to see the application streams that we have for Postgres, I just do a module list, and I see we have 10 and 9.6, both supported. Ten is the default, and if I enable 9.6, the next time I install Postgres it will pull all the related packages from the 9.6 stream. >> Okay, so this is very cool. I see two versions of Postgres right here, with 10 as the default. That is fantastic, and Application Streams making that happen. But I'm really kind of curious: I love using Node.js and Java, so what about multiple versions of those? >> Yeah, that's exactly the idea. We want to keep up with the fast-moving ecosystems of programming languages. >> Okay, but I have another key question; I know some people are thinking it right now. What about Python? >> Yeah. In fact, on a minimal install like this, typing python gives you command not found; you just have to type it correctly. You can install whichever one you want, 2 or 3, whichever your application needs. >> Okay, well, I've been burned on that one before. Okay, so now I actually have a confession for all you guys right here. Keep this amongst yourselves, don't let Paul know: I'm actually not a Linux systems administrator, I'm an application developer, an application architect. And I recently had to go figure out how to extend a file system. This is for real. I'm going to the Red Hat knowledge base and looking up things like pvcreate, vgextend, resize2fs, and I have to admit, that's hard. >> Right. I've opened the storage page for you right here, where you see an overview of your storage. The console is made for people like you as well, not only for people who know Linux inside out; and even if you do, if you're only running some of these commands some of the time, you don't remember them. So, for example, I have a filesystem here that's a little bit too small. Let me just grow it by dragging this slider. It calls all the commands in the background for you. >> Oh, that is incredible. Is it that simple, just drag and drop? That is fantastic. Well, I actually have another question for you. It looks like Linux systems administration is no longer a dark art involving arcane commands typed into a black terminal, like using those funky ergonomic keyboards, you know the ones I'm talking about, right? >> You know, a lot of people, including me and people in the audience, like that dark art, right? And this is not taking any of that away. It's an additional tool to bring Linux to more people. >> Okay, well, that is absolutely fantastic. Thank you so much for that, Lars. And I really love how installing everything is so much easier, including PostgreSQL and, of course, the Python that we saw right there. So now I want to change gears for a second, because I have another situation that I'm always dealing with, and that is: every time I want to build a new Linux system, I don't want to have to run those commands again and again; it feels like I'm doing it over and over. So, Josh, how would I create a golden image, one VM image that I can use with everything pre-baked in? >> Yeah, absolutely, we get that question all the time.
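For readers following along afterwards, here is a minimal sketch of what the demo above maps to on the command line, assuming a registered RHEL 8 host; the stream versions match the demo, but the device and volume names in the storage step are purely illustrative and not taken from the stage system:

```
# Application Streams: list and switch Postgres streams
yum module list postgresql
sudo yum module enable postgresql:9.6
sudo yum install postgresql-server        # now pulls from the 9.6 stream

# Python is always installed as an explicit version on RHEL 8
sudo yum install python3                  # or python2 if an application still needs it

# Roughly what the web console's storage slider drives underneath
# (the device /dev/vdb and volume group "rhel" are hypothetical)
sudo pvcreate /dev/vdb
sudo vgextend rhel /dev/vdb
sudo lvextend -r -L +5G /dev/rhel/root    # -r grows the filesystem along with the volume
```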
So RHEL includes image builder technology. Image builder is actually all of our hybrid cloud operating system image tools, the ones we use to build our own images, rolled up into a nice, easy-to-use system. So if I come here in the web console and I go to our image builder tab, it brings us to blueprints. Blueprints are what we use to control what goes into our golden image. And I heard you and Lars talking about Postgres and Python, so I went and started typing here, and it brings us to this page. You can go to the selected components, and you can see here I've created a blueprint that has all the Python and Postgres packages in it. The interesting thing about this is that it builds on our existing kickstart technology, but you can use it to deploy to whatever cloud you want. And it's saved, so you don't actually have to know all the various incantations from Amazon to Azure to Google, whatever; it's all baked in. And when you do this, you can actually see the dependencies that get brought in as well. Okay, should we create one live? >> Yes, please. >> All right, cool. So if we go back to the blueprints page and we click create blueprint, let's make a developer blueprint here. We click create, and you can see here on the left hand side I've got all of my content served up by Red Hat Satellite. We have a lot of great stuff, but we can go ahead and search. So we'll look for Postgres; you know, it's a developer image, so add the client for some local testing. We'll come in here and add the Python bits, and we need a compiler if we're going to actually build anything, so look for GCC here. And hey, what's your favorite editor? >> Emacs, of course. >> Emacs, all right. Hey, Lars, how about you? >> I'm more of a vi person. >> Emacs and vi, all right. Well, if you want to prevent a holy war in your systems, you can actually use Satellite to filter that out, but we're going to go ahead and add them both, so we don't have to fight on stage. We just point and click, and when we're all done, we just commit our changes, and our image is ready to build. >> Okay. So this VM image we just created from that blueprint: I can now go out there and easily deploy it across multiple cloud providers, as well as on the hardware we have on stage right now. >> Yeah, absolutely. We can deploy to Amazon, Azure, Google, any infrastructure you're looking for, so you can really build your hybrid cloud operating system images. >> Okay, all right. >> We just go and click create image. We can select our different output types here. I'm going to go ahead and create a local VM, because it's an available image type and maybe we want to pass it around or whatever, and I just need a few moments for it to build. >> Okay. So while that's taking a few moments, I know there's another key question in the minds of the audience right now. You're probably thinking: I love what I see with RHEL 8, but what does it take to upgrade from 7 to 8? So Lars, can you show us and walk us through an upgrade? >> Sure. This is my little blog that I set up, but it's still running on 7.6. So let's upgrade it. I'll jump over to Satellite, and you see all my RHEL machines here, including the one I showed you the web console on before.
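Before following the upgrade any further, here is a rough sketch of the same image builder workflow driven from the command line rather than the web console. It assumes the RHEL 8 image builder tooling (composer-cli plus its back-end service) is installed; the blueprint name and package list are illustrative, not the exact ones used on stage:

```
# Define a blueprint similar to the one built in the console
cat > devel-image.toml <<'EOF'
name = "devel-image"
description = "Developer image with the Postgres client, Python and a compiler"
version = "0.0.1"

[[packages]]
name = "postgresql"
version = "*"

[[packages]]
name = "python3"
version = "*"

[[packages]]
name = "gcc"
version = "*"
EOF

composer-cli blueprints push devel-image.toml
composer-cli compose start devel-image qcow2   # qcow2 for a local VM; other types target clouds
composer-cli compose status                    # watch the build progress
```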
And there is the one with my blog, and there are a couple of others. Let me select those as well, this one and that one. I just go up here, schedule a remote job, choose the upgrade job, and hit submit. I set it up so that it takes a snapshot before, so if anything goes wrong we can roll back. >> Okay, so now it's progressing here. >> It's progressing. Looks like it's running. >> Doing a live upgrade on stage. Hmm, seems like one is failing. What's going on here? >> Okay, we check the pre-upgrade check. Oh yeah, that's the one I was playing around with Btrfs on backstage. It detected that, and it doesn't run the upgrade, because we don't support upgrading that. >> Okay, so what I'm hearing is the good news: we were protected from a possible failed upgrade there. So it sounds like these upgrades are perfectly safe; I can basically schedule this during a maintenance window and still get some sleep. >> Totally, that's the idea. >> Okay, fantastic. All right, so it looks like upgrades are easy and perfectly safe, and I really love what you showed us there; it's a point-and-click operation right from Satellite. Okay, so while we were checking out upgrades, I want to know, Josh, how are those VMs coming along? >> They went really well. You were away for so long, I got a little bored and took some liberties. >> What do you mean? >> Well, the image build went so well that I decided to go ahead and deploy it here to this Intel machine on stage, so I have that up and running in the web console. I built another one on the ARM box, which is actually pretty fast, and that's up and running on that machine. And that went so well that I decided to spin up some instances in Amazon, so I've got a few instances running in Amazon with the web console accessible there as well. And even more of our pre-built images are up and running in Azure, with the web console there. So the really cool thing about this, Burr, is that all of these images were built with image builder in a single location, controlling all the content that you want in your golden images deployed across the hybrid cloud. >> Wow, that is fantastic, and we actually have more to show you. So thank you so much for that, Lars, and Josh, that is fantastic. It looks like provisioning Red Hat Enterprise Linux 8 systems is easier than ever before. But we have more to talk to you about, and there's one thing that many of the operations professionals in this room right now know: provisioning a VM is easy, but it's really day two, day three, down the road, that those VMs require day-to-day maintenance. As a matter of fact, several of you folks in this audience have to manage hundreds, if not thousands, of virtual machines; I recently spoke to a gentleman who has to manage thirteen hundred servers. So how do you manage those machines at that great a scale? Great, Tim and Brent have now joined us, so it looks like they worked things out. So now I'm curious, Tim: how will we manage hundreds, if not thousands, of computers? >> Well, Burr, one human managing hundreds or even thousands of VMs is no problem, because we have Ansible automation. And by leveraging Ansible's integration into Satellite, not only can we spin up those VMs really quickly, like Josh was just doing, but we can also make ongoing maintenance of them really simple. Come on up here.
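For reference, the RHEL 7 to 8 in-place upgrade that Satellite drives above is built on the leapp upgrade tooling. Run by hand on a single host, the flow looks roughly like the sketch below; exact package names and repository setup vary by release and are assumptions here rather than details shown in the demo:

```
# On the RHEL 7.6 host to be upgraded
sudo yum install leapp                # upgrade tooling (package naming varies by release)
sudo leapp preupgrade                 # dry run; report lands in /var/log/leapp/leapp-report.txt
sudo leapp upgrade                    # stages the RHEL 8 upgrade
sudo reboot                           # boots into the upgrade environment to finish the job
```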
I'm going to show you here a Satellite inventory, and as Red Hat publishes patches, we can, with that Ansible integration, easily apply those patches across our entire fleet of machines. >> Okay, that is fantastic. So all the machines can get updated in one fell swoop. >> They sure can. And there's one thing that I want to bring your attention to today, because it's brand new, and that's cloud.redhat.com. Here at cloud.redhat.com you can view and manage your entire inventory of Red Hat Enterprise Linux, no matter where it sits: on-prem, on stage, private cloud or public cloud. It's true hybrid cloud management. >> OK, but there's one thing I know is in the minds of the audience right now, and if you have to manage a large number of servers it comes up again and again: what happens when you have those critical vulnerabilities? That next zero-day CVE could be tomorrow. >> Exactly. I've actually been waiting patiently for you to get to the really good stuff. There's one more thing that I wanted to let folks know about Red Hat Enterprise Linux 8 and some features that we have there. >> Oh yeah? What is that? >> So, actually, one of the key design principles of RHEL is working with our customers over the last twenty years to integrate all the knowledge that we've gained and turn that into insights that we can use to keep our Red Hat Enterprise Linux servers running securely and efficiently. And what we actually have here are a few things we can take a look at to show folks what that is. >> OK, so we basically have this new feature we're going to show people right now. And one thing I want to make sure of: is it absolutely included within the Red Hat Enterprise Linux 8 subscription? >> Yes. That's an announcement that we're making this week: this is a brand new feature that's integrated with Red Hat Enterprise Linux, and it's available to everybody that has a Red Hat Enterprise Linux subscription. >> I believe everyone in this room right now has a RHEL subscription, so it's available to all of them. >> Absolutely, absolutely. So let's take a quick look and try this out. What we have here is a list of about six hundred rules: configuration, security and performance rules. This list is growing every single day, and customers can opt in to the rules that are most applicable to their enterprises. What we're actually doing here is combining the experience and knowledge that we have with the data that our customers opt into sending us. Customers have opted in and are sending us more data every single night than they have in total over the last twenty years via any other mechanism. >> Now I see there are some critical findings. That's what I was talking about when it comes to CVEs and things of that nature. >> Yeah, I'm betting those are probably some of the RHEL 7 boxes that we haven't actually upgraded quite yet, so we'll get back to that. What I'd really like to show everybody here, because everybody has access to this, is how easy it is to opt in and enable this feature for RHEL. Okay, let's do that real quick. I've got to hop back over to Satellite here; this is the Satellite that we saw before. I'll grab one of the hosts, and we can use the new web console feature that's part of RHEL 8: via single sign-on I can jump right from Satellite over to the web console. So it's really, really easy.
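As a rough illustration of the fleet-wide patching Tim describes at the top of this segment, a minimal Ansible playbook for applying available updates might look like the sketch below. The inventory file and the "rhel" group name are hypothetical, and in the demo the equivalent job is launched from Satellite rather than from the plain command line:

```
cat > apply-updates.yml <<'EOF'
---
# Minimal sketch: bring every host in the 'rhel' group up to the latest packages
- hosts: rhel
  become: true
  tasks:
    - name: Apply all available updates
      yum:
        name: '*'
        state: latest
EOF

ansible-playbook -i inventory apply-updates.yml
```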
And I'll grab a terminal here, and registering with Insights is really, really easy. It's one command, and what's happening right now is the box is going to gather some data, send it up to the cloud, and within just a minute or two we're going to have some results that we can look at back on the web interface. >> I love it. So it's just a single command and you're ready to register this box right now. That is super easy. Well, that's fantastic, Brent. We started this whole series of demonstrations by telling the audience that Red Hat Enterprise Linux 8 was the easiest, most economical and smartest operating system on the planet, period. And well, I think it's cute how you can go ahead and opt in on a single machine. I'm going to show you one more thing. This is Ansible Tower. You can use Ansible Tower to manage and govern your Ansible playbook usage across your entire organization, and with this, what I can do is, on every single VM that was spun up here today, opt in and register Insights with a single click of a button. >> Okay, I want to see that right now; I know everyone's waiting for it as well. But hey, your VMs are ready, Josh. Lars? >> Yeah, my clock is running a little late now. Yeah, Insights is a really cool feature of RHEL, and I've got it in all my images already. >> All right, I'm doing it right now. And as this playbook runs across the inventory, I can see the machines registering on cloud.redhat.com, ready to be managed. >> OK, so all those on-stage VMs, as well as the hybrid cloud VMs, should be popping in here as they register. Fantastic. >> That's awesome. Thanks, Tim. Nothing better than a Red Hat Summit speaker going off script in the first live demo. Let's go back and take a look at some of those critical issues affecting a few of our systems here. So you can see this is a particular dnsmasq issue; it's going to affect a couple of machines, we saw that in the overview, and I can actually go and get some more details about what this particular issue is. If you take a look at the right side of the screen there, there's a critical likelihood and an impact associated with this particular issue, and what that really translates to is that there's a high level of risk to our organization from this issue, but also a low risk of change. And what that means is that it's really, really safe for us to go ahead and use Ansible to remediate it. So I can grab the machines, we'll select those two, and we'll remediate with Ansible. I can create a new playbook; it asks for our maintenance window, but we'll name it something along the lines of "stuff Tim broke," and that'll be our cause. We can name it whatever we want. So we'll create that playbook and take a look at it, and it's actually going to give us some details about the machines, what type of reboots, if any, are going to be needed, and what we need here. So we'll go ahead and execute the playbook, and what you're going to see is the output happening in real time. This is happening from the cloud; we're affecting machines no matter where they are. They could be on-prem, they could be in a hybrid cloud, a public cloud or a private cloud, and these things are going to be remediated very, very easily with Ansible. So it's really, really awesome.
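For reference, the "single command" registration mentioned above is the insights-client tool shipped with RHEL; the sketch below simply wraps it in Python so the same step could be scripted across many hosts. It assumes insights-client is already installed on the box and that the host has a valid subscription.

```python
#!/usr/bin/env python3
"""Minimal sketch of the one-command Insights registration shown in the demo.
insights-client gathers a metadata snapshot and uploads it so the rule engine
at cloud.redhat.com can evaluate it."""
import subprocess


def register_with_insights() -> bool:
    # Returns True when the registration command exits cleanly.
    result = subprocess.run(["insights-client", "--register"], check=False)
    return result.returncode == 0


if __name__ == "__main__":
    print("registered" if register_with_insights() else "registration failed")
```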
Everybody here with a Red Hat Enterprise Linux subscription has access to this now, so I kind of want everybody to go try this; we really need to get this thing going and try it out right now. >> But don't go running out of the room just yet; you've got to stay here for now, okay, Mr. Excitability. After this keynote, come back to the Red Hat booth, where there's an optimization section. You can come talk to our Insights engineers, and even though it's really easy to get going on your own, they can help you out and answer any questions you might have. So this is really the start of a new era, with an intelligent operating system, and the beauty of the intelligence you just saw right now with Insights and Ansible. Fantastic. So we're enabling systems administrators to manage more Red Hat Enterprise Linux at a greater scale than ever before. I know there's a lot more we could show you, but we're totally out of time at this point, and we went a little bit sideways here at moments, but we need to get off the stage. There's one thing I want you guys to think about, all right? Do come check it out in the booth, like Tim just said, and in our labs get hands-on with RHEL 8 as well. But really, I want you to think about this: one human and a multitude of servers. And if you remember that one thing I asked you up front: do you feel like you got a new superpower, and Red Hat is your force multiplier? All right, well, thank you so much, Josh and Lars, Tim and Brent. Thank you. And let's get Paul back on stage. >> That went brilliantly. No, it's just, as always, amazing. I mean, as you can tell from last night, we're really, really proud of RHEL 8 coming out here at the Summit, and what a great way to showcase it. Thanks so much to you, Burr; thanks, Brent, Tim, Lars and Josh. Thanks again. So you've just seen this team demonstrate how impactful RHEL can be in your data center, and hopefully many of you, if not all of you, have experienced that as well. But what about supercomputers? We hear about those all the time. As I just told you a few minutes ago, Linux isn't just the foundation for enterprise and cloud computing. It's also the foundation for the fastest supercomputers in the world, and our next guest is here to tell us a lot more about that. >> Please welcome Lawrence Livermore National Laboratory HPC Solution Architect Robin Goldstone. >> Thank you so much, Robin. So welcome, welcome to the Summit, welcome to Boston, and thank you so much for joining us. Can you tell us a bit about the goals of Lawrence Livermore National Lab and how high performance computing really works at this level? >> Sure. So Lawrence Livermore National Lab was established during the Cold War to address urgent national security needs by advancing the state of nuclear weapons science and technology, and high performance computing has always been one of our core capabilities. In fact, our very first supercomputer, a Univac 1, was ordered by Edward Teller before our lab even opened, back in 1952. Our mission has evolved since then to cover a broad range of national security challenges, but first and foremost our job is to ensure the safety, security and reliability of the nation's nuclear weapons stockpile. Since the US no longer performs underground nuclear testing, our ability to certify the stockpile depends heavily on science-based methods.
We rely on HPC to simulate the behavior of complex weapons systems to ensure that they can function as expected well beyond their intended life spans. >> That's actually great. So are you really still running on that Univac? >> No, actually, we've moved on since then. So Sierra is Lawrence Livermore's latest and greatest supercomputer, and it's currently the second fastest supercomputer in the world. For the geeks in the audience, and I think there are a few of them out there, we put up some of the specs of Sierra on the screen behind me. A couple of things worth highlighting are Sierra's peak performance and its power utilization. 125 petaflops of performance is equivalent to about 20,000 of those Xbox One Xs that you mentioned earlier, and the 11.6 megawatts of power required to operate Sierra is enough to power around 11,000 homes. Sierra is a very large and complex system, but underneath it all it starts out as a collection of servers running Linux and, more specifically, RHEL. >> So did Lawrence Livermore National Lab use RHEL before Sierra? >> Oh yeah, most definitely. We've been running RHEL for a very long time on what I'll call our mid-range HPC systems. These clusters, built from commodity components, are sort of the bread and butter of our computing center, and running RHEL on these systems provides us with continuity of operations and a common user environment across multiple generations of hardware, and also between Lawrence Livermore and our sister labs, Los Alamos and Sandia. Alongside these commodity clusters, though, we've always had one sort of world-class supercomputer like Sierra. Historically, these systems have been built from sort of exotic, proprietary hardware running entirely closed-source operating systems. Anytime something broke, which was often, the vendor would be on the hook to fix it. And you know, that sounds like a good model, except that what we found over time is that most of the issues we have on these systems were due either to the extreme scale or to the complexity of our workloads. Vendors seldom had a system anywhere near the size of ours, and we couldn't give them our classified codes, so their ability to reproduce our problems was pretty limited. In some cases they even sent an engineer on site to try to reproduce our problems, but even then, sometimes we wouldn't get a fix for months, or else they would just tell us they weren't going to fix the problem because we were the only ones having it. >> So for many of us, that challenge is one of the driving reasons for open source even existing. How did Sierra change things around open source for you? >> Sure. So when we developed our technical requirements for Sierra, we had an explicit requirement that we wanted to run an open source operating system, and a strong preference for RHEL. At the time, IBM was working with Red Hat to add support to RHEL for their new little-endian POWER architecture, so it was really just natural for them to bid a RHEL-based system for Sierra. Running RHEL on Sierra allows us to leverage the model that's worked so well for us all this time on our commodity clusters: any packages that we build for x86, we can now build for POWER, as well as our ARM architecture, using our internal build infrastructure.
And while we have a formal support relationship with IBM, we can also tap our in-house kernel developers to help debug complex problems. Our sysadmins can now work on any of our systems, including Sierra, without having to pull out their cheat sheet of obscure proprietary commands. Our users get a consistent software environment across all our systems, and if a security vulnerability comes out, we don't have to chase around getting fixes from multiple OS vendors. >> You know, you've been able to extend your foundation all the way from x86 out to extreme-scale supercomputing. We talk about giving customers, we talk about it all the time, a standard operational foundation to build upon. This is exactly what we've envisioned. So what's next for you guys? >> Right, so what's next? Sierra is just now going into production, but even so, we're already working on the contract for our next supercomputer, called El Capitan. That's scheduled to be delivered to Lawrence Livermore in the 2022 to 2023 timeframe. El Capitan is expected to be about ten times the performance of Sierra. I can't share any more details about that system right now, but we are hoping that we're going to be able to continue to build on the solid foundation that RHEL has provided us for well over a decade. >> Well, thank you so much for your support of RHEL over the years, Robin, and thank you so much for coming and telling us about it today. We can't wait to hear more about El Capitan. Thank you very much. So now you know why we're so proud of RHEL, and why you saw confetti cannons and T-shirt cannons last night. So, you know, as Burr and the team talked about in the demo, RHEL is the force multiplier for servers. We've made Linux one of the most powerful platforms in the history of platforms. But just as Linux has become a viable platform with access for everyone, and RHEL has become more viable every day in the enterprise, open source projects began to flourish around the operating system, and we needed to bring those projects to our enterprise customers in the form of products with the same trust models as we did with RHEL. Seeing the incredible progress of software development occurring around Linux led us to the next goal that we set for ourselves. That goal was to make hybrid cloud the default enterprise architecture. How many of you out here in the audience are sysadmins or architects? How many out there? A lot. A lot. You are the people that are building the next generation of computing, the hybrid cloud. You know, again, just like our goals around Linux, this goal might seem a little daunting in the beginning, but as a community we've proved it time and time again: we are unstoppable. Let's talk a bit about what got us to the point we're at right now, and the work that, as always, we still have in front of us. We've been on a decade-long mission on this. Believe it or not, this mission was to build the capabilities needed around the Linux operating system to really build and make the hybrid cloud. When we saw RHEL first taking hold in the enterprise, we knew that was just the first step, because for a platform to really succeed, you need applications running on it, and to get those applications on your platform, you have to enable developers with the tools and runtimes for them to build upon.
Over the years we've closed a few, if not a lot, of those gaps, starting with the acquisition of JBoss many years ago, all the way to the new Kubernetes-native CodeReady Workspaces we launched just a few months back. We realized very early on that building a developer-friendly platform was critical to the success of Linux and open source in the enterprise. Shortly after this, the public cloud stormed onto the scene. While our first focus as a company was on-premise, in customer data centers, the public cloud was really beginning to take hold. RHEL very quickly became the standard across public clouds, just as it was in the enterprise, giving customers that common operating platform to build their applications upon and ensuring that those applications could move between locations without ever having to change their code or operating model. With this new model of the data center spread across so many environments, management had to be completely rethought and re-architected, and given the fact that environments spanned multiple locations, solid management became even more important. Customers deploying in hybrid architectures had to understand where their applications were running and how they were running, regardless of which infrastructure provider they were running on. We invested over the years in management right alongside the platform, from Satellite in the early days, to CloudForms, to Insights, and now Ansible. We focused on having management support the platform wherever it lives. Next came data, which is very tightly linked to applications. Enterprise-class applications tend to create tons of data, and to have a common operating platform for your applications you need a storage solution that's just as flexible as that platform, able to run on-premise just as well as in the cloud, even across multiple clouds. This led us to acquisitions like Gluster, Ceph, Permabit and NooBaa, complementing our platform with Red Hat Storage. For us, even though this sounds very condensed, this was a decade's worth of investment, all in preparation for building the hybrid cloud: expanding the portfolio to cover the areas that a customer would depend on to deploy real hybrid cloud architectures, finding and amplifying the right open source projects and technologies, or filling the gaps with some of these acquisitions when that wasn't readily available. By 2014 our foundation had expanded, but one big challenge remained: workload portability. Virtual machine formats were fragmented across the various deployments, and higher-level frameworks such as Java EE still very much depended on a significant amount of operating system configuration. And then containers happened. Containers, despite having been in existence for a very long time, exploded on the scene as a technology in 2014. Kubernetes followed shortly after in 2015, allowing containers to span multiple locations, and in one fell swoop containers became the killer technology to really enable the hybrid cloud. And here we are: hybrid is really the only practical reality and way forward for customers, and at Red Hat we've been investing in all aspects of this over the last eight-plus years to make our customers and partners successful in this model. We've worked with you, both our customers and our partners, building critical RHEL and OpenShift deployments.
We've been constantly learning about what has caused problems and what has worked well in many cases. And while we've amassed a pretty big amount of expertise to solve most any challenge in any area of that stack, it takes more than just our own learnings to build the next-generation platform. Today we're also introducing OpenShift 4, which is the culmination of those learnings. This is the next generation of the application platform. This is truly a platform that has been built with our customers, and not simply just with our customers in mind. This is something that could only be possible in an open source development model. And just like RHEL is the force multiplier for servers, OpenShift is the force multiplier for data centers across the hybrid cloud, allowing customers to build thousands of containers and operate them at scale. And we've also announced Azure Red Hat OpenShift; last night Satya, on this stage, talked about that in depth. This is all about extending our goal of a common operating platform enabling applications across the hybrid cloud, regardless of whether you run it yourself or just consume it as a service. And with this flagship release we are also introducing Operators, which is the central feature here. We talked about this work last year with the Operator Framework, and today we're not going to just show you OpenShift 4; we're going to show you Operators running at scale, Operators that will do updates and patches for you, letting you focus more of your time on running your infrastructure and running your business. We want to make all this easier and intuitive, so let's have a quick look at how we're doing just that. >> I know all of you have heard we're talking to potential new customers about the rollout. So, new plan: just open it up as a service, to be launched by this summer. Look, I know this is a big ask for a not very big team. I'm open to any and all ideas. >> Please welcome back to the stage Red Hat global director of developer experience Burr Sutter with Jessica Forrester and Daniel McPherson. >> All right, we're ready to do some more now. Earlier we showed you Red Hat Enterprise Linux 8 running on lots of different hardware, like the hardware you see right now, and we're also running across multiple cloud providers. But now we're going to move to another world, of Linux containers. This is where you see OpenShift 4 and how you can manage large clusters of applications built from Linux containers across the hybrid cloud. We're going to see how software operators fundamentally empower human operators, and especially make ops and dev work more efficiently and effectively together than ever before. Right, we have two folks on the stage right now; they represent ops and dev, and we're going to see how they run an application together. Okay, so let me introduce you to Dan. Dan is totally representing all our ops folks in the audience here today, and he's kind of my ops comfort person, so let's just call him Mr. Ops. So Dan? >> Thanks, Burr. With OpenShift 4 we have a much easier time setting up and maintaining our clusters. In large part that's because OpenShift 4 has extended management of the clusters down to the infrastructure, across diverse kinds of infrastructure.
When you take a look at the OpenShift console, you can now see the machines that make up the cluster, where a machine represents the infrastructure underneath that Kubernetes node. OpenShift 4 now handles provisioning and deprovisioning of those machines. From there, you can dig into an OpenShift node and see how it's configured and monitor how it's behaving. >> I'm curious, though: does this work on bare metal infrastructure as well as virtualized infrastructure? >> Yeah, that's right, Burr. Bare metal nodes, virtual machines, OpenShift 4 can now manage it all. Something else we found extremely useful about OpenShift 4 is that it now has the ability to update itself. We can see this cluster has an update available, and at the press of a button, Operators are responsible for updating the entire platform, including the nodes, the control plane and even the operating system, Red Hat Enterprise Linux CoreOS. All of this is possible because the infrastructure components and their configuration are now controlled by a technology called Operators. These software operators are responsible for aligning the cluster to a desired state, and all of this makes operational management of an OpenShift cluster much simpler than ever before. >> I love the fact that it's all in one console now; you can see the full stack, all the way down to the bare metal, right there in that one console. Fantastic. I want to switch gears for a moment, though, and now let's talk to dev, right? So Jessica here represents all our developers in the room. As a matter of fact, she manages a large team of developers here at Red Hat. But more importantly, she represents our vice president of development and has a large team that she has to worry about on a regular basis. So Jessica, what can you show us? >> Well Burr, my team has hundreds of developers, and we're constantly under pressure to deliver value to our business, and frankly we can't really wait for Dan and his ops team to provision the infrastructure and the services that we need to do our job. So we've chosen OpenShift as our platform to run our applications on. But until recently we really struggled to find a reliable source of Kubernetes technologies that have the operational characteristics that Dan's actually going to let us install in the cluster. Now, with OperatorHub.io, we're really seeing that ecosystem be unlocked, and the technology's there: things that my team needs, like databases and message queues, tracing and monitoring. And these operators are actually responsible for complex applications like Prometheus here, and they're written in a variety of languages, including Ansible. >> That is awesome. So I do see a number of options there already, and Prometheus is a great example. But how do you know that one of these operators really is mature enough and robust enough for Dan and the ops side of the house? >> Well Burr, here we have the operator maturity model, and this is going to tell me and my team whether this particular operator is going to do a basic install, whether it's going to upgrade that application over time through different versions, or go all the way out to full auto-pilot, where it's automatically scaling and tuning the application based on the current environment. And it's very cool. So coming over to the OpenShift console, we can actually see Dan has made the SQL Server operator available to me and my team. That's the database that we're using, SQL Server. >> That's a great example.
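Backing up to the Machine API view Dan walked through at the top of this exchange, each Machine custom resource in OpenShift 4 backs one node. The sketch below lists those objects with the Kubernetes Python client; cluster credentials and the openshift-machine-api namespace are assumed, and the script is only an illustration of the idea, not part of the demo itself.

```python
#!/usr/bin/env python3
"""Sketch: list the Machine API objects that back an OpenShift 4 cluster's
nodes, along with their lifecycle phase."""
from kubernetes import client, config


def list_machines() -> None:
    config.load_kube_config()            # or load_incluster_config() in a pod
    api = client.CustomObjectsApi()
    machines = api.list_namespaced_custom_object(
        group="machine.openshift.io",
        version="v1beta1",
        namespace="openshift-machine-api",
        plural="machines",
    )
    for machine in machines.get("items", []):
        name = machine["metadata"]["name"]
        phase = machine.get("status", {}).get("phase", "Unknown")
        print(f"{name}: {phase}")


if __name__ == "__main__":
    list_machines()
```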
So SQL Server is running here in the cluster. But this is a great example for a developer: what if I want to create a new SQL Server instance? >> Sure. It's as easy as provisioning any other service from the developer catalog. We come in and I can type "sql server," and what this is actually creating is a native resource called SqlServer, and you can think of that like a promise that a SQL Server will get created. The operator is going to see that resource, install the application, and then manage it over its life cycle. And from this installed-operators view, I can see the operators running in my project and which resources they're managing. >> Okay, but I'm kind of missing something here. I see this custom resource here, the SqlServer, but where are the Kubernetes resources, like pods? >> Yeah, I think it's cool that we get this native resource now called SqlServer, but if I need to, I can still come in and see the native Kubernetes resources, like the StatefulSet and Service here. >> Okay, that is fantastic. Now, we did say earlier on that, like many of our customers in the audience right now, you have a large team of engineers, a large team of developers you've got to handle. You've got to have more than one SQL Server, right? >> We do, one for every team as we're developing, and we use a lot of other technologies running on OpenShift as well, including Tomcat, our Jenkins pipelines, and our Node.js app that is going to actually talk to that SQL Server database. >> Okay, so at this point we can kind of provision some of these? >> Yes. Since all of this is self-service for me and my teams, I'm actually going to go and create one of all of those things I just said, on all of our projects, right now, if you just give me a minute. >> Okay, right. So basically you're going to knock out Node.js, Jenkins, SQL Server; that's like hundreds of bits of application-level infrastructure, live, right now. So Dan, are you not terrified? >> Well, I guess I should have done a little bit better job of managing Jess's quota. And historically, Jess and I might have had some conflict here, because creating all these new applications would mean my team now had a massive backlog of tickets to work on. But now, because of software operators, my human operators are able to run our infrastructure at scale. So since I'm logged into the cluster here as the cluster admin, I get this view of pods across all projects, and I get an idea of what's happening across the entire cluster. I can see now we have 494 pods already running, and there are a few more still starting up. And if I scroll through the list, we can see the different workloads Jessica just mentioned: the Tomcats, the Node.js apps, the Jenkins instances, and the SQL Servers down here too. >> I see it continues creating, and you have close to five hundred pods running there. >> So, yeah, I'll filter this list down by SQL Server so we can just see those. >> Okay. But aren't you going to run up against cluster capacity at some point? >> Actually, yeah, we definitely have a limited capacity in this cluster. Luckily, though, we already set up autoscalers, and because the additional workload was launching, we see now those autoscalers have kicked in and some new machines are being created that don't yet have nodes, because they're still starting up.
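The "promise" Jessica describes is simply a custom resource that the operator reconciles into a running database. The sketch below creates one with the Kubernetes Python client; the group, version, kind, and spec fields shown are hypothetical placeholders for illustration, not the actual CRD shipped by any particular SQL Server operator.

```python
#!/usr/bin/env python3
"""Sketch: request a database by creating a custom resource and letting the
operator do the rest (StatefulSet, Service, secrets, lifecycle)."""
from kubernetes import client, config


def request_sql_server(name: str, namespace: str = "team-a") -> None:
    config.load_kube_config()
    api = client.CustomObjectsApi()
    body = {
        "apiVersion": "database.example.com/v1alpha1",  # hypothetical group/version
        "kind": "SqlServer",                            # hypothetical kind
        "metadata": {"name": name},
        "spec": {"replicas": 3, "version": "2.2"},      # illustrative spec only
    }
    # The operator watches for objects of this kind and drives the cluster
    # toward the requested state.
    api.create_namespaced_custom_object(
        group="database.example.com",
        version="v1alpha1",
        namespace=namespace,
        plural="sqlservers",
        body=body,
    )


if __name__ == "__main__":
    request_sql_server("products-db")
```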
And there's another good view of this as well: you can see the machine sets. We have one machine set per availability zone, and you can see each one is now scaling from ten to twelve machines. The way those autoscalers work is, for each availability zone, if capacity is needed they will add additional machines to that availability zone, and then later, after that capacity is no longer needed, they will automatically take those machines away. >> That is incredible. So right now we're autoscaling across multiple availability zones based on load. Okay, so it looks like capacity planning and automation are fully handled at this point. But I do have another question: you're logged in as the cluster admin right now in the console. Can you show us your view of operators, the software operators? >> Actually, there are a couple of unique views here for operators for cluster admins. The first of those is OperatorHub. This is where a cluster admin gets the ability to curate what operators are available to users of the cluster, and obviously we already have the SQL Server operator installed, which we've been using. The other unique view is operator management. This gives a cluster admin the ability to maintain the operators they've already installed. If we dig in and see the SQL Server operator, we'll see we have it set up for manual approval. What that means is, if a new update comes in for SQL Server, then a cluster admin has the ability to approve or disapprove that update before it installs into the cluster. And actually, there is an upgrade available. I should probably wait to install this, though; we're in the middle of scaling out this cluster, and I really don't want to disturb Jessica's application workflow.
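The manual-versus-automatic choice Dan is describing maps to the installPlanApproval field on an Operator Lifecycle Manager Subscription. The sketch below flips that field programmatically; the subscription name and namespace are assumptions for illustration.

```python
#!/usr/bin/env python3
"""Sketch: toggle an OLM Subscription between Manual and Automatic update
approval, the same knob Dan flips in the console."""
from kubernetes import client, config


def set_approval(subscription: str, namespace: str, automatic: bool) -> None:
    config.load_kube_config()
    api = client.CustomObjectsApi()
    patch = {"spec": {"installPlanApproval": "Automatic" if automatic else "Manual"}}
    api.patch_namespaced_custom_object(
        group="operators.coreos.com",
        version="v1alpha1",
        namespace=namespace,
        plural="subscriptions",
        name=subscription,
        body=patch,
    )


if __name__ == "__main__":
    # Hypothetical subscription name; the namespace is a common default for
    # cluster-wide operators.
    set_approval("sqlserver-operator", "openshift-operators", automatic=True)
```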
>> Yeah, actually, Dan, it's fine. My app is already up and running. Let me show it to you over here. So this is our products application that's talking to that SQL Server instance, and for debugging purposes we can see which version of SQL Server we're currently talking to, which is 2.2 right now, and then which pod, since this is a cluster and there's more than one SQL Server pod we could be connected to. >> Okay, I can see right there at the bottom of the screen it says 2.2; that's the version we have right now. But, you know, this is kind of the point of software operators, so everyone in this room wants to see you hit that upgrade button. Let's do it, live, here on stage. >> All right, all right, I can see where this is going. So whenever you update an operator, it's just like any other resource on Kubernetes, and the first thing that happens is the operator pod itself gets updated. So we actually see a new version of the operator is currently being created, and once that gets created, the old version will be terminated. At that point the new software operator will notice it's now responsible for managing lots of existing SQL Servers already in the environment, and it's then going to update each of those SQL Servers to match the new version of the SQL Server operator, and we can see it's running. If we switch now to the all-projects view and filter that list down by SQL Server, then we should be able to see that lots of these SQL Servers are now being created and the old ones are being terminated. >> So this is the rolling update across the cluster? >> Exactly. The SQL Server operator deploys SQL Server in an HA configuration, and it only updates a single instance of SQL Server at a time, which means SQL Server is always left in an HA configuration, and Jessica doesn't really have to worry about downtime with her applications. >> Yeah, that's awesome, Dan. So glad the team doesn't have to worry about that anymore. And Jess, I think enough of these might have run by now; if you try your app again it might be updated. >> Let's see Jessica's application up here. All right, on laptop three. Here we go. Fantastic. And look, we were on 2.2 before, and now we're on 2.3. Excellent. >> You know, that actually works so well, I don't even see a reason for us to leave this on manual approval. So I'm going to switch this to automatic approval, and then in the future, if a new SQL Server version comes in, we don't have to do anything and it'll all be automatically updated on the cluster. >> That is absolutely fantastic, and I'm so glad you guys got a chance to see that rolling update across the cluster. That is so cool, the SQL Server database being automated and fully updated. That is fantastic. All right, so I can see how a software operator enables you to manage hundreds, if not thousands, of applications. I know a lot of folks are interested in the backing infrastructure. Could you give us an example of the infrastructure behind this console? >> Yeah, absolutely. We all know that OpenShift is designed to run in lots of different environments, but our teams think that Azure Red Hat OpenShift provides one of the best experiences, by deeply integrating the OpenShift resources into the Azure console. It's even integrated into the Azure command-line tool with its OpenShift commands, and, as was announced yesterday, it's now available for everyone to try out. And there's actually one more thing we wanted to show everyone related to OpenShift 4, which is also new with OpenShift 4: we now have multi-cluster management. This gives you the ability to keep track of all your OpenShift environments, regardless of where they're running, as well as to create new clusters from here. And I'll dig into the Azure cluster that we were just taking a look at. >> Okay, but is this user interface something I have to install on one of my existing clusters? >> No, actually, this is a hosted service provided by Red Hat as part of cloud.redhat.com, so all you have to do is log in with your Red Hat credentials to get access. >> That is incredible. So one console, one user experience, to see across the entire hybrid cloud. We saw it earlier with RHEL and Red Hat Insights, and now we see it for multi-cluster management of OpenShift. So you can fundamentally see now that software operators do finally change the game when it comes to making human operators vastly more productive and, more importantly, making dev and ops work more efficiently together than ever before. We saw the rich ecosystem of those software operators, and we can manage them across the hybrid cloud with any OpenShift instance. And more importantly, I want to thank Dan and Jessica for helping us with this demonstration. Okay, fantastic stuff, guys. Thank you so much. Let's get Paul back out here. >> Once again, thanks so much to Burr and his team, Jessica and Dan. So you've just seen how OpenShift operators can help you manage hundreds, even thousands, of applications.
Install, upgrade, remove nodes, control everything about your application environment, virtual or physical, all the way out to the cloud, making things happen when the business demands it, even at scale, because that's where it's headed. Our next guest has lots of experience with demand at scale, and they're using open source container management to do it. They've been building a successful cloud-first platform, and they're the 2019 Innovation Award winner. >> Please welcome 2019 Innovation Award winner, Kohl's senior vice president of technology, Rich Hodak. >> How you doing? Thanks. >> Thanks so much for coming out. We really appreciate it. So I guess you guys set some big goals, too. Can you maybe tell us about the bold goal you personally helped set for Kohl's, and what inspired you to take that on? >> Yes. So it was 2017 and life was pretty good. I had no gray hair and our business was, well, our tech was working well, but we knew we'd have to do better into the future if we wanted to compete. Retail's being disrupted, our customers are asking for new experiences, so we set out on a goal to become an open hybrid cloud platform, and we chose Red Hat to partner with us on a lot of that. We set off on a three-year journey; we're currently in year two, and so far all KPIs are on track, so it's been a great journey thus far. >> That's awesome. So obviously you think open source is the way to do cloud computing, and we absolutely agree with you on that point. So what is it that's convinced you even more along the way? >> Yeah, so I think first and foremost, we do have a lot of traditional ISVs, but we found that the open source partners actually are outpacing them with innovation, so I think that's where it starts for us. Secondly, we think there's maybe some financial upside to going more open source; we think we can maybe take some cost out and unwind from these big ELAs we're in. And thirdly, as we go to universities, we started hearing as we interviewed, "Hey, what is Kohl's doing with open source?" and we wanted to use that as a lever to help recruit talent. So I'm kind of excited. You know, we partner with Red Hat on OpenShift and RHEL and Gluster and ActiveMQ and Ansible and lots of things, but we've also now launched our first open source projects, so it's really great to see this journey we've been on. >> That's awesome, Rich. So you're in a high-touch beta with OpenShift 4. What features, components or capabilities are you most excited about and looking forward to with the launch, and what are maybe some new goals that you might be able to accomplish with the new features? >> Yeah, so I will tell you we're off to a great start with OpenShift. We've been on the platform for over a year now. We won an Innovation Award. We have this great team of engineers out here that have done some outstanding work. But certainly there's room to continue to mature that platform at Kohl's, and we're excited about OpenShift 4. I think there are probably three things we're really looking forward to. One is a better upgrade process, and I think we saw some of that in the last demo; upgrades have been kind of painful up until now, so we think that will help us.
Number two, a lot of the workloads we run on OpenShift today are the stateless apps, right? And we're really looking forward to moving more of our stateful apps onto the platform. And then thirdly, I think we've done a great job of automating a lot of the day-one stuff, you know, the provisioning of things; there's a great opportunity out there to do more automation for day-two things, to integrate more with our messaging systems and our database systems and so forth. So we're excited to get on board with version 4 as well. >> So, you know, I hope we can help you get to those next goals, and we're going to continue to do that. Thank you so much, Rich. You know, all the way from RHEL to OpenShift, it's really exciting for us, frankly, to see our products helping you solve real-world problems, which is really why we do this, and it gets to both of our goals. So thank you, thank you very much, and thanks for your support. We really appreciate it. It has all been amazing so far, and we're not done. A critical part of being successful in the hybrid cloud is being successful in your data center, with your own infrastructure. We've been helping our customers do that in these environments for almost twenty years now; we've been running the most complex workloads in the world. But, you know, while the public cloud has opened up tremendous possibilities, it also brings in another layer of infrastructure complexity. So what's our next goal? Extend your data center all the way to the edge while being as effective as you have been over the last twenty years, when it's all at your own fingertips. First, from a practical sense, enterprises are going to have to have their own data centers, in their own environments, for a very long time. But there are advantages to being able to manage your own infrastructure that expand even beyond the public cloud, all the way out to the edge. In fact, we talked about that very early on: how technology advances in compute, networking and storage are changing the physical boundaries of the data center every single day. The need to process data at the source is becoming more and more critical. New use cases are coming up every day. Self-driving cars need to make decisions on the fly, in the car. Factory processes using AI need to adapt in real time; the factory floor has become the new edge of the data center, working with things like video analysis of a car's paint job as it comes off the line, where a massive amount of data is only needed for seconds in order to make critical decisions in real time. If we had to wait for the video to go up to the cloud and back, it would be too late; the damage would have already been done. The enterprise is being stretched to be able to process on site, whether it's in a car, a factory, a store, or somewhere else out at the edge, usually involving massive amounts of data that just can't easily be moved. Just as these use cases couldn't be solved in private cloud alone, because of things like latency on data movement to address real-time requirements, they also can't be solved in public cloud alone. This is why open hybrid is really the model that's needed, and the only model forward. So how do you address this class of workload that requires all of the above, running at the edge, with the latest technology, all at scale? Let me give you a bit of a preview of what we're working on.
We are taking our open hybrid cloud technologies to the edge, integrated with our OEM hardware partners. This is a preview of a solution that will contain Red Hat OpenShift, Ceph storage and KVM virtualization, with Red Hat Enterprise Linux at the core, all running on pre-configured hardware. The first hardware out of the gate will be with our long-time OEM partner, Dell Technologies. So let's bring back Burr and the team to see what's right around the corner. >> Please welcome back to the stage Red Hat global director of developer experience Burr Sutter with Karima Sharma. >> Okay, we just showed you how OpenShift 4 and operators have redefined the capabilities and usability of the open hybrid cloud, and now we're going to show you a few more things, so just be ready for that. I know many of our customers in this audience right now, as well as the customers who aren't even here today, are running tens of thousands of applications on OpenShift clusters. We know that's happening right now, but we also know that you're not actually in the business of running Kubernetes clusters. You're in the business of oil and gas, you're in the business of retail, you're in the business of transportation, you're in some other business, and you don't really want to manage those things at all. We also know, though, that you have low-latency requirements like Paul was talking about, and you also have data gravity concerns where you need to keep that data on your premises. So what you're about to see right now in this demonstration is where we've taken OpenShift 4 and made a bare metal cluster right here on this stage. This is a fully automated platform. There is no underlying hypervisor below this platform; it's OpenShift running on bare metal. And this is your Kubernetes-native infrastructure, where we've brought together VMs, containers, networking and storage. With me right now is Karima Sharma. She's one of our engineering leaders responsible for infrastructure technologies. Please welcome to the stage, Karima. >> Thank you. My pleasure to be here at Red Hat Summit. So let's start at cloud.redhat.com, and here we can see the cluster Dan and Jessica were working on just a few moments ago. From here we have a bird's-eye view of all of our OpenShift clusters across the hybrid cloud, from multiple cloud providers to on-premises. And notice the bare metal cluster; well, that's the one that my team built right here on this stage. So let's go ahead and open the admin console for that cluster. Now, in this demo we'll take a look at three things: first, a multi-cluster inventory for the open hybrid cloud at cloud.redhat.com; second, OpenShift Container Storage providing converged storage for virtual machines and containers, with the same functionality for cloud and bare metal; and third, everything we see here is Kubernetes-native, so by plugging directly into Kubernetes orchestration we get common storage, networking and monitoring facilities. Now, last year we saw how container-native virtualization and KubeVirt allow you to run virtual machines on Kubernetes and OpenShift, allowing for a single converged platform to manage both containers and virtual machines. So here I have this darknet project. From last year, we had a Windows virtual machine running the ASP darknet application, and we had started to modernize and containerize it by moving parts of the application from the Windows VM to Linux containers. So let's take a look at it. Here I have it again.
>> Oh, look at that, Windows. Earlier on I was playing this game backstage, so it's just playing a little Solitaire. Sorry about that. >> So we don't really have time for that right now, Burr. But as I was saying, over here I have Visual Studio. Now, the Windows virtual machine is just another container in OpenShift, and the RDP service for the virtual machine is just another service in OpenShift. OpenShift running both containers and virtual machines together opens a whole new world of possibilities. But why stop there? So this is where we broaden it to Kubernetes-native infrastructure. It is our vision to redefine the operations of on-premises infrastructure, and this applies to all manner of workloads, using OpenShift on metal running all the way from the data center to the edge, maybe right by your desk. There are two main benefits: one, to help reduce the operational costs, and two, to help bring advanced Kubernetes orchestration concepts to your infrastructure. So next, let's take a look at storage. OpenShift Container Storage is software-defined storage, providing the same functionality for both the public and the private clouds. By leveraging the Operator Framework, OpenShift Container Storage automatically detects the available hardware configuration to utilize the disks in the most optimal way. So when adding a node, you don't have to think about how to balance the storage; storage is just another service running on OpenShift. >> And I really love this dashboard, quite honestly, because I love seeing all the storage right here. So I'm kind of curious, though, Karima: what kind of applications would you use with this storage? >> Yeah, so this is persistent storage to be used by databases, your files, and any data from applications such as Apache Kafka. Now, the Apache Kafka operator uses Kubernetes for scheduling and high availability, and it uses OpenShift Container Storage to store the messages. Here, our on-premises system is running a Kafka workload streaming sensor data, and we want to store it and act on it locally, right, in an environment where maybe we need low latency, or maybe in a data-lake-like situation. So we don't want to send the data to the cloud; instead, we want to act on it locally. Let's look at the Grafana dashboard and see how our system is doing. With the incoming message rate of about four hundred messages per second, the system seems to be performing well, right? And I want to emphasize this is a fully integrated system. We're doing the testing and optimizations so that the system can auto-tune itself based on the applications.
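As a stand-in for the sensor workload feeding the on-stage Kafka cluster at roughly four hundred messages per second, here is a minimal producer sketch using the kafka-python library. The bootstrap address and topic name are assumptions; any Kafka endpoint reachable from the client would do.

```python
#!/usr/bin/env python3
"""Toy producer approximating the sensor stream described in the demo."""
import json
import random
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="my-cluster-kafka-bootstrap:9092",  # assumed service address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)


def emit_reading() -> None:
    # One synthetic sensor reading per call.
    reading = {"sensor": "line-3", "temp_c": round(random.uniform(20.0, 80.0), 2)}
    producer.send("sensor-readings", reading)


if __name__ == "__main__":
    while True:
        emit_reading()
        time.sleep(1 / 400)  # roughly the message rate quoted on stage
```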
>> Okay, I love the automated operations. Now I am curious, because I know other folks in the audience want to know this too: can you tell us more about how this is truly integrated with Kubernetes? Can you give us an example of that? >> Yes. Again, I want to emphasize that everything here is managed purely by Kubernetes on OpenShift, so you can really use the latest and coolest tools to manage it all. Next, let's take a look at how easy it is to use Knative with Azure Functions to script a live reaction to a live migration event. >> Okay, Knative is a great example. If you were part of my breakout session yesterday, you saw me demonstrate Knative, and if you want to get hands-on with it tonight, you can come to our guru night at 5 PM and actually get hands-on with Knative. So I have really enjoyed using Knative myself as a software developer, but I am curious about the Azure Functions component. >> Yeah, so Azure Functions is a functions-as-a-service engine developed by Microsoft, fully open source, and it runs on top of Kubernetes, so it works really well with our on-premises OpenShift here. Right now I have a simple Azure function here, and this Azure function will send out a tweet every time we live-migrate a Windows virtual machine. So I have it integrated with OpenShift. Let's move a node to maintenance to see what happens. >> So basically, as that VM moves, we're going to see the event triggered, and that triggers the function. >> Yeah. An important point I want to make again here: Windows virtual machines are equal citizens inside of OpenShift. We're investing heavily in automation through the use of the Operator Framework and also providing integration with the hardware. Right, so now let's move that node to maintenance. >> But let's be very clear here. I want to make sure you understand one thing, and that is there is no underlying virtualization software here. This is OpenShift running on bare metal, with these bare metal hosts. >> That is absolutely right. The system can automatically discover the bare metal hosts. All right, so here, let's move this node to maintenance. I start the maintenance now. What will happen at this point is storage will heal itself, and Kubernetes will bring back the same level of service for the Kafka application by launching a pod on another node, and the virtual machine will live-migrate. And this will create Kubernetes events, so we can see the events in the event stream; changes have started to happen. And as a result of this migration, the Knative function will send out a tweet to confirm that Kubernetes-native infrastructure has indeed done the migration for the live VM. Right? >> See the events rolling through right there? >> Yeah. All right. And if we go to Twitter? >> All right, we got tweets. Fantastic. >> And here we can see the source node reports the migration has succeeded. That's pretty cool stuff right here, no? So we want to bring you a cloud-like experience, and what this means is we're making operational ease of use a top goal. We're investing heavily in encapsulating management knowledge and working to pre-certify hardware configurations, working with our partners such as Dell and their Ready Node program, so that we can provide you guidance on specific benchmarks for specific workloads on our auto-tuning system. >> All right, well, I know right now you're all thinking you want to jump on the stage and check out this bare metal cluster, but you should not, right? Wait until after the keynote, then come on and check it out. But also, I want you to go out there and think about visiting our partner Dell and their booth, where they have one of these clusters also. Okay, so this is where VMs, networking, containers and storage all come together in the Kubernetes-native infrastructure you've seen right here on this stage. But Karima, you have a bit more. >> Yes. So this is literally the cloud coming down from the heavens to us. >> Okay? Right here, right now. >> Right here, right now.
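The live-migration reaction Karima describes was implemented with a Knative-triggered Azure Function that posted a tweet. The sketch below approximates the same idea by watching the Kubernetes event stream and posting to a placeholder webhook; the namespace, the matched substring, and the webhook URL are all assumptions for illustration.

```python
#!/usr/bin/env python3
"""Sketch: watch Kubernetes events and fire a notification when a
live-migration related event appears."""
import requests
from kubernetes import client, config, watch

WEBHOOK_URL = "https://example.com/notify"  # placeholder endpoint, not Twitter


def watch_for_migrations(namespace: str = "darknet") -> None:
    config.load_kube_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Stream events from the namespace and react to anything mentioning migration.
    for item in w.stream(v1.list_namespaced_event, namespace=namespace):
        event = item["object"]
        message = event.message or ""
        if "migrat" in message.lower():
            requests.post(WEBHOOK_URL, json={"text": f"VM migration: {message}"})


if __name__ == "__main__":
    watch_for_migrations()
```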
So, to close the loop, you can have your cluster connected to cloud.redhat.com for our Insights and site reliability engineering services, so that we can proactively provide you with guidance through automated analyses of telemetry and logs, and help flag a problem even before you notice you have it, be it software, hardware, performance or security. And one more thing: I want to congratulate the engineers behind this cool technology. >> Absolutely. There are a lot of engineers here that worked on this cluster and worked on the stack. Absolutely, thank you. Really awesome stuff. And again, do go check out our partner Dell; they're just out that door, I can see them from here. They have one of these clusters, so get a chance to talk to them about how to run your OpenShift 4 on a bare metal cluster as well. Right, Karima, thank you so much. That was totally awesome. We're out of time, and we've got to turn this back over to Paul. >> Thank you. >> Okay, thanks again, Burr and Karima. Awesome. You know, even with all the exciting capabilities that you're seeing, I want to take a moment to go back to the first platform tenet that we learned with RHEL: that the platform has to be developer friendly. Our next guest knows something about connecting a technology like OpenShift to their developers as part of their company-wide transformation, and their ability to shift the business helped them take advantage of the innovation. They're an Innovation Award winner this year. Please, let's welcome Ed to the stage. >> Please welcome 2019 Innovation Award winner, BP vice president of digital transformation, Ed Alford. >> Thanks, Ed. How are you? >> Good. >> So let's get right into it. What are you guys trying to accomplish at BP, and how is the goal really important and mandatory within your organization? >> We're a global energy business, with operations in over seventy countries, and we've embraced what we call the dual challenge, which is meeting the increasing demand for energy that we have as individuals in the world, while producing that energy with fewer emissions. As part of that, one of our strategic priorities is to modernize the whole group, and that means simplifying our processes and enhancing productivity through digital solutions. So we're using cloud-based technologies and, more importantly, open source technologies, to create a community across the whole group that collaborates effectively and efficiently and uses our data and expertise to embrace the dual challenge and actually try and help solve that problem. >> That's great. So how did these new ways of working benefit your team, and really the entire organization, maybe even the company as a whole? >> So we've been given the Innovation Award for our digital conveyor, both in the way it was created and also in what it is delivering. A couple of the guys in the audience are on the team; their teams developed that conveyor using agile and DevOps and those things. We talk about this stuff a lot, but actually they did it in a truly agile and DevOps way, and that enabled them to experiment and work in different ways, and it highlighted the skill set that we as a group require in order to transform. Using these approaches, we can now move things from ideation to scale in weeks and days sometimes, rather than months.
And I think that if we can take what they've done and use more open source technology, we can take that technology and apply it across the whole group to tackle this dual challenge. And I think that, as technologists, it's really cool that we can now use technology, and open source technology, to solve some of these big challenges that we have and actually preserve the planet in a better way. >> So what's the next step for you guys at BP? >> So moving forward, we are embracing a cloud-first organization. We need to continue to deliver on our strategy, build out the technology across the entire group to address the dual challenge, and continue to make some of these bold changes and really use our technology, as I said, to address the dual challenge and make the future of our planet a better place for ourselves and our children and our children's children. >> That's a big goal. But thank you so much, Ed. Thanks for your support, and thanks for coming today. >> Thank you very much. Thank you. >> Now comes the part that, frankly, I think is the best part of this presentation. We're going to meet the type of person that makes all of these things a reality. This type of person typically works for one of our customers, or with one of our customers as a partner, to help them meet the kinds of bold goals you've heard about today and the ones you'll hear about more throughout the week. >> I think the thing I like most about it is that you feel that reward just helping people, and helping people with stuff you enjoy, right, with computers. My dad was the math and science teacher at the local high school, so in the early eighties that kind of made him the default computer person. He was always bringing in computer stuff, and I started at a pretty young age. >> What Jason's been able to do here is really evangelize a lot of the technologies between different teams. I think a lot of it comes from the training and the certifications that he's got. He's always concerned about their experience, how easy it is for them to get applications written, how easy it is for them to get them up and running at the end of the day. >> We're a loan company, you know, so we lean on a company like Red Hat; that's where we get our support from. That's why we decided to go with a product like OpenShift. I really, really like the product, so I went down the certification route and the training route to learn more about OpenShift itself. My daughter's teacher was doing a day of coding, and they asked me if I wanted to come and talk about what I do and then spend the day helping the kids do their coding class. >> The people that we have on our teams, like Jason, are what make us better than our competitors, right? Anybody can buy something off the shelf. It's people like him who are able to take that and mold it into something that is then a great offering for our partners and for customers. >> Please welcome Red Hat Certified Professional of the Year, Jason Hyatt. >> Jason, congratulations. Congratulations. What a big day, huh? What a really big day. You know, it's great to see such work that you've done here. But you know what's really great, and it shows in your video: it's really especially rewarding to us, and I'm sure to you as well, to see how skills can open doors, for one, for young women like your daughters who already love technology.
So I'd like to present this to you right now. >> Congratulations. >> Congratulations. >> And I know you're going to bring this passion, I know you bring this, in everything you do. So congratulations again. >> Thanks, Paul. It's been really exciting, and I was really excited to bring my family here to share the experience. >> It's really great. It's really great to see them all here as well. Maybe you guys could stand up. So before we leave the stage, you know, I just wanted to ask, what's the most important skill that you'll pass on from all your training to the future generations? >> So I think the most important thing is you have to be a continuous learner. You can't really settle for, ah, you can't be comfortable only learning what you already know. You have to really be a continuous learner. And of course, you've got to use Red Hat as well. >> I don't even have to ask you the question. Of course. Right. Of course. That's awesome. That's awesome. And thank you, thank you for everything that you're doing. So thanks again. Thank you. You know, what makes open source work is passion, and people who apply those considerable talents and that passion, like Jason here, to making it work and to contributing their ideas back. And believe me, it's really an impressive group of people. You know, your family, and especially Berkeley in the video, I hope you know that the Red Hat Certified Professional of the Year is the best of the best, the cream of the crop, and your dad is the best of the best of that. So you should be very, very happy for that. And I also can't wait to come back here on this stage ten years from now and present that same award to you, Berkeley. So great. You should be proud. You know, everything you've heard about today is just a small representation of what's ahead of us. We've had a set of goals, and realized some bold goals, over the last number of years that have gotten us to where we are today. Just to recap those bold goals: first, build a company based solely on open source software. It seems so logical now, but it had never been done before. Next, building the operating system of the future that's going to run and power the enterprise, making the standard platform in the enterprise a Linux-based operating system. And after that, making hybrid cloud the architecture of the future, making hybrid the new data center, all leading to the largest software acquisition in history. Think about it: around a company with one hundred percent open source DNA throughout. Despite all the fun we encountered over those last seventeen years, I have to ask, is there really any question that open source has won? Realizing our bold goals and changing the way software is developed in the commercial world was what we set out to do from the first day that Red Hat was born. But we only got to that goal because of you. Many of you contributors, many of you new to open source software and willing to take the risk alongside of us, and many of you partners on that journey, both inside and outside of Red Hat. Going forward, with the reach of IBM, Red Hat will accelerate even more. This will bring open source general innovation to the next generation hybrid data center, continuing on our original mission and goal to bring open source technology to every corner of the planet.
What I just went through in the last hour, while mind-boggling to many of us in the room who have had a front row seat to this over the last seventeen-plus years, has only been Red Hat's first step. Think about it. We have brought open source development from a niche player to the dominant development model in software and beyond. Open source is now the cornerstone of the multi-billion-dollar enterprise software world, and even the next generation hybrid architecture would not be possible without Linux at the core and the open innovation that it feeds to build around it. This is not just a step forward for software. It's a huge leap in the technology world, beyond even what the original pioneers of open source ever could have imagined. We have witnessed open source accomplish in the last seventeen years more than what most people will see in their career, or maybe even a lifetime. Open source has forever changed the boundaries of what will be possible in technology in the future. And the one last thing to say, to everybody in this room and beyond, everyone outside: continue the mission. Thanks, have a great Summit.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Adam Ball | PERSON | 0.99+ |
Jessica | PERSON | 0.99+ |
Josh Boyer | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
Timothy Kramer | PERSON | 0.99+ |
Dan | PERSON | 0.99+ |
Josh | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Tim | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Jason | PERSON | 0.99+ |
Lars Carl | PERSON | 0.99+ |
Kareema Sharma | PERSON | 0.99+ |
Wilbert | PERSON | 0.99+ |
Jason Hyatt | PERSON | 0.99+ |
Brent | PERSON | 0.99+ |
Lenox | ORGANIZATION | 0.99+ |
Rich Hodak | PERSON | 0.99+ |
Ed Alford | PERSON | 0.99+ |
ten | QUANTITY | 0.99+ |
Brent Midwood | PERSON | 0.99+ |
Daniel McPherson | PERSON | 0.99+ |
Jessica Forrester | PERSON | 0.99+ |
Lennox | ORGANIZATION | 0.99+ |
Lars | PERSON | 0.99+ |
Last year | DATE | 0.99+ |
Robin | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Karima | PERSON | 0.99+ |
hundreds | QUANTITY | 0.99+ |
seventy pounds | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
John F. Kennedy | PERSON | 0.99+ |
Ansel | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
Edward Teller | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Teo | PERSON | 0.99+ |
Kareema | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Python | TITLE | 0.99+ |
seven individuals | QUANTITY | 0.99+ |
BP | ORGANIZATION | 0.99+ |
ten ten thousand times | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
Chris | PERSON | 0.99+ |
Del Technologies | ORGANIZATION | 0.99+ |
python | TITLE | 0.99+ |
Today | DATE | 0.99+ |
thousands | QUANTITY | 0.99+ |
Robin Goldstone | PERSON | 0.99+ |
theCUBE Insights | Red Hat Summit 2019
>> Announcer: Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2019. Brought to you by Red Hat. >> Welcome back here on theCUBE, joined by Stu Miniman, I'm John Walls, as we wrap up our coverage here of the Red Hat Summit here in 2019. We've been here in Boston all week, three days, Stu, of really fascinating programming on one hand, the keynotes showing quite a diverse ecosystem that Red Hat has certainly built, and we've seen that array of guests reflected as well here, on theCUBE. And you leave with a pretty distinct impression about the vast reach, you might say, of Red Hat, and how they diversified their offerings and their services. >> Yeah, so, John, as we've talked about, this is the sixth year we've had theCUBE here. It's my fifth year doing it and I'll be honest, I've worked with Red Hat for 19 years, but the first year I came, it was like, all right, you know, I know lots of Linux people, I've worked with Linux people, but, you know, I'm not in there in the terminal and doing all this stuff, so it took me a little while to get used to. Today, I know not only a lot more people in Red Hat and the ecosystem, but where the ecosystem is matured and where the portfolio is grown. There's been some acquisitions on the Red Hat side. There's a certain pending acquisition that is kind of a big deal that we talked about this week. But Red Hat's position in this IT marketplace, especially in the hybrid and multi-cloud world, has been fun to watch and really enjoyed digging in it with you this week and, John Walls, I'll turn the camera to you because- >> I don't like this. (laughing) >> It was your first time on the program. Yeah, you know- >> I like asking you the questions. >> But we have to do this, you know, three days of Walls to Miniman coverage. So let's get the Walls perspective. >> John: All right. >> On your take. You've been to many shows. >> John: Yeah, no, I think that what's interesting about what I've seen here at Red Hat is this willingness to adapt to the marketplace, at least that's the impression I got, is that there are a lot of command and control models about this is the way it's going to be, and this is what we're going to give you, and you're gonna have to take it and like it. And Red Hat's just on the other end of that spectrum, right? It's very much a company that's built on an open source philosophy. And it's been more of what has the marketplace wanted? What have you needed? And now how can we work with you to build it and make it functional? And now we're gonna just offer it to a lot of people, and we're gonna make a lot of money doing that. And so, I think to me, that's at least what I got talking to Jim Whitehurst, you know about his philosophy and where he's taken this company, and has made it obviously a very attractive entity, IBM certainly thinks so to the tune of 34 billion. But you see that. >> Yeah, it's, you know, some companies say, oh well, you know, it's the leadership from the top. Well, Jim's philosophy though, it is The Open Organization. Highly recommend the book, it was a great read. We've talked to him about the program, but very much it's 12, 13 thousand people at the company. They're very much opinionated, they go in there, they have discussions. It's not like, well okay, one person pass this down. It's we're gonna debate and argue and fight. Doesn't mean we come to a full consensus, but open source at the core is what they do, and therefore, the community drives a lot of it. 
They contribute it all back up-stream, but, you know, we know what Red Hat's doing. It's fascinating to talk to Jim about, yeah you know, on the days where I'm thinking glass half empty, it's, you know, wow, they're not yet quite a four billion dollar company, and look what an impact they've had. They did a study with IDC and said, ten trillion dollars of the economy that they touch through RHEL, but on the half full days, they're having a huge impact outside. He said the 34 billion dollars that IBM's paying is actually a bargain- >> It's a great deal! (laughing) >> for where they're going. But big announcements. RHEL 8, which had been almost five years in the works there. Some good advancements there. But the highlight for me this week really was OpenShift. We've been watching OpenShift since the early days, really pre-Kubernetes. It had a good vision and gained adoption in the marketplace, and was the open source choice for what we called PaaS back then. But, when Kubernetes came around, it really helped solidify where OpenShift was going. It is the delivery mechanism for containerization and that container cluster management, and Red Hat has a leadership position in that space. I think that almost every customer that we talked to this week, John, OpenShift was the underpinning. >> John: Absolutely. >> You would expect that RHEL's underneath there, but OpenShift as the lever for digital transformation. And that was something that I really enjoyed talking about with DBS Bank from Singapore, and Delta, and UPS. We talked about their actual transformation journeys, from both the technology and the organizational standpoint, and OpenShift really was the lever to give them that push.
They have their three pillars that we spent a lot of time on, from the infrastructure layer to cloud native to automation and management. Lots of shows I go to, Ansible's all over the place. We talked about OpenShift 4, something that seems to be resonating. Red Hat takes a leadership position, not just in the communities and the foundations, but working with their customers to be a more trusted and deeper partner in what they're doing with digital transformation. There might have been little changes, but, you know, this is not the Red Hat that people would think of two years or five years ago, because a large percentage of Red Hat has changed. One last nugget from Chris Wright there, is, you know, he spent a lot of time talking about AI. And some of these can be buzzwords in these environments, but, you know, he hit a nice cogent message, with the punchline that machines enhance human intelligence, because these are really complex systems, distributed architectures, and we know that the people just can't keep up with all of the change, and the scope, and the scale that they need to handle. So software should be able to be helping me get my arms around it, as well as where it can automate and even take actions, as long as we're careful about how we do it. >> John: Sure. There's another point, at least, I want to pick your brain about, and that's really the power of presence. The fact that we have the Microsoft CEO on the stage. Everybody thought, well (mumbles) But we heard it from guest after guest after guest this week, saying how cool was that? How impressive was that? How monumental was that? And, you know, it's great to have that kind of opportunity, but the power of Nadella's presence here, it's unmistakable, the message that it sent to this community. >> Yeah, you know, John, you could probably do a case study talking about culture and the power of culture because, I talked about Red Hat's not the Red Hat that you know. Well, the Satya Nadella led Microsoft is a very different Microsoft than before he was on board. Not only are they making great strides in, you know, we talk about SaaS and public cloud and the like, but from a partnership standpoint, Microsoft of old, you know, Linux and Red Hat were the enemy and you know, Windows was the solution and they were gonna bake everything into it. Well, Microsoft partners with many more companies. Partnerships and ecosystem, a key message this week. We talked about Microsoft with Red Hat, but, you know, the announcement today surprised me a little bit, but when we think about it, not too much. OpenShift supported on VMware environments, so, you know, VMware is in that family of Dell, there's competitive solutions against OpenShift and, you know, virtualization. You know, Red Hat has, you know, RHV, the Red Hat Virtualization. >> John: Right, right, right. >> The old days of the lines and the swim lanes, as one of our guests talked about, really aren't there anymore. Customers are living in a heterogeneous, multi-cloud world and the customers are gonna go and say, "You need to work together, or you're not gonna be there." >> Azure. Right, also we have Azure compatibility going on here. >> Stu: Yeah, deep, not just some tested, but deep integration. I can go to Azure and buy OpenShift. I mean, to say it's in the, you know, not just in the marketplace, but a deep integration. And yeah, there was a little poke, if our audience caught it, from Paul Cormier. And said, you know, Microsoft really understands enterprise.
That's why they're working tightly with us. Uh, there's a certain other large cloud provider that created Kubernetes, that has their own solution, that maybe doesn't understand enterprise as much and aren't working as closely with Red Hat as they might. So we'll see what response there is from them out there. Always, you know, we always love on theCUBE to, you know, the horse is on the track and where they're racing, but, you know, more and more all of our worlds are cross-pollinating. You know, the AI and AI Ops stuff. The software ecosystems because software does have this unifying factor that the API economy, and having all these things work together, more and more. If you don't, customers will go look for solutions that do provide the full end to end solution stuff they're looking for. >> All right, so we're, I've got a couple in mind as far as guests we've had on the show. And we saw them in action on the keynotes stage too. Anybody that jumps out at you, just like, wow, that was cool, that was, not that we, we love all of our children, right? (laughing) But every once in awhile, there's a story or two that does stand out. >> Yeah, so, it is so tough, you know. I loved, you know, the stories. John, I'm sure I'm going to ask you, you know, Mr. B and what he's doing with the children. >> John: Right, Franklin Middle School. >> And the hospitals with Dr. Ellen and the end of the brains. You know, those tech for good are phenomenal. For me, you know, the CIOs that we had on our first day of program. Delta was great and going through transformation, but, you know, our first guest that we had on, was DBS Bank in Singapore and- >> John: David Gledhill. >> He was so articulate and has such a good story about, I took outsourced environments. I didn't just bring it into my environment, say okay, IT can do it a little bit better, and I'll respond to business. No, no, we're going to total restructure the company. Not we're a software company. We're a technology company, and we're gonna learn from the Googles of the world and the like. And he said, We want to be considered there, you know, what was his term there? It was like, you know, bank less, uh, live more and bank less. I mean, what- >> Joyful banking, that was another of his. >> Joyful banking. You don't think of a financial institution as, you know, we want you to think less of the bank. You know, that's just a powerful statement. Total reorganization and, as we mentioned, of course, OpenShift, one of those levers underneath helping them to do that. >> Yeah, you mentioned Dr. Ellen Grant, Boston Children's Hospital, I think about that. She's in fetal neuroimaging and a Professor of Radiology at Harvard Medical School. The work they're doing in terms of diagnostics through imaging is spectacular. I thought about Robin Goldstone at the Livermore Laboratory, about our nuclear weapon monitoring and efficacy of our monitoring. >> Lawrence Livermore. So good. And John, talk about the diversity of our guests. We had expats from four different countries, phenomenal accents. A wonderful slate of brilliant women on the program. From the customer side, some of the award winners that you interviewed. The executives on the program. You know, Stefanie Chiras, always great, and Denise who were up on the keynotes stage. Denise with her 3D printed, new Red Hat logo earrings. Yeah, it was an, um- >> And a couple of old Yanks (laughing). Well, I enjoyed it, Stu. As always, great working with you, and we thank you for being with us as well. 
For now, we're gonna say so long. We're gonna see you at the next Red Hat Summit, I'm sure, 2020 in San Francisco. Might be, I guess, a slightly different company, but it might be the same old Red Hat too, but they're going to have 34 billion dollars behind them at that point and probably riding pretty high. That will do it for our CUBE coverage here from Boston. Thanks so much for joining us. For Stu Miniman, and our entire crew, have a good day. (funky music)
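The wrap-up above keeps coming back to OpenShift as Kubernetes-based container cluster management. As a rough, hypothetical illustration of what interacting with such a cluster looks like from code, here is a minimal sketch using the official Kubernetes Python client; it assumes a reachable cluster and a local kubeconfig, and the namespace name is purely illustrative rather than anything demonstrated at the show.

```python
# Minimal sketch: listing the workloads in one namespace of a Kubernetes/OpenShift cluster.
# Assumes the `kubernetes` Python client is installed and ~/.kube/config points at a
# cluster you can reach; "demo-apps" is an invented namespace name.
from kubernetes import client, config

def list_pods(namespace: str = "demo-apps") -> None:
    config.load_kube_config()      # read credentials from the local kubeconfig
    core = client.CoreV1Api()      # core API group: pods, services, config maps, ...
    pods = core.list_namespaced_pod(namespace)
    for pod in pods.items:
        # Print each pod's name, phase (Pending/Running/Succeeded/...), and node placement.
        print(pod.metadata.name, pod.status.phase, pod.spec.node_name)

if __name__ == "__main__":
    list_pods()
```

Because OpenShift is built on the standard Kubernetes API surface, client code along these lines generally works against an OpenShift cluster as well, with OpenShift layering developer workflow, builds, and security on top.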
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Stefanie Chiras | PERSON | 0.99+ |
David Gledhill | PERSON | 0.99+ |
UPS | ORGANIZATION | 0.99+ |
Delta | ORGANIZATION | 0.99+ |
Chris Wright | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Jim Whitehurst | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
Denise | PERSON | 0.99+ |
Robin Goldstone | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Paul Cormier | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
75% | QUANTITY | 0.99+ |
DBS Bank | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
19 years | QUANTITY | 0.99+ |
Lawrence Livermore | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
95% | QUANTITY | 0.99+ |
fifth year | QUANTITY | 0.99+ |
Nadella | PERSON | 0.99+ |
Singapore | LOCATION | 0.99+ |
34 billion dollars | QUANTITY | 0.99+ |
Ellen Grant | PERSON | 0.99+ |
ten trillion dollars | QUANTITY | 0.99+ |
San Francisco | LOCATION | 0.99+ |
34 billion | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
IDC | ORGANIZATION | 0.99+ |
Satya Nadella | PERSON | 0.99+ |
Boston Children's Hospital | ORGANIZATION | 0.99+ |
three days | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
RHEL 8 | TITLE | 0.99+ |
Ellen | PERSON | 0.99+ |
sixth year | QUANTITY | 0.99+ |
Harvard Medical School | ORGANIZATION | 0.99+ |
Walls | PERSON | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
Red Hat | TITLE | 0.99+ |
first day | QUANTITY | 0.99+ |
this week | DATE | 0.99+ |
four billion dollars | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
six years ago | DATE | 0.98+ |
2020 | DATE | 0.98+ |
first time | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
five years ago | DATE | 0.98+ |
OpenShift | ORGANIZATION | 0.98+ |
RHEL | TITLE | 0.98+ |
OpenShift | TITLE | 0.98+ |
Red Hat Summit | EVENT | 0.98+ |
Stu | PERSON | 0.98+ |
today | DATE | 0.98+ |
Franklin Middle School | ORGANIZATION | 0.98+ |
Jamie Thomas, IBM | IBM Think 2019
>> Live from San Francisco. It's theCUBE covering IBM Think 2019. Brought to you by IBM. >> Welcome back to Moscone Center everybody. The new, improved Moscone Center. We're at Moscone North, stop by and see us. I'm Dave Vellante, he's Stu Miniman and Lisa Martin is here as well, John Furrier will be up tomorrow. You're watching theCUBE, the leader in live tech coverage. This is day zero essentially, Stu, of IBM Think. Day one, the big keynotes, start tomorrow. Chairman's keynote in the afternoon. Jamie Thomas is here. She's the general manager of Systems Strategy and Development at IBM. Great to see you again Jamie, thanks for coming on. >> Great to see you guys as usual and thanks for coming back to Think this year. >> You're very welcome. So, I love your new role. You get to put on the binoculars, sometimes the telescope. Look at the road map. You have your fingers in a lot of different areas and you get some advanced visibility on some of the things that are coming down the road. So we're really excited about that. But give us the update from a year ago. You guys have been busy. >> We have been busy, and it was a phenomenal year, Dave and Stu. Last year, I guess one of the pinnacles we reached is that our technology received the number one and number two supercomputer ratings in the world, and this was a significant accomplishment. Rolling out the number one supercomputer at Oak Ridge National Laboratory and the number two supercomputer at Lawrence Livermore National Laboratory. And Summit, as it's called at Oak Ridge, is really a cool system. Over 9000 CPUs, about 27,000 GPUs. It does 200 petaflops at peak capacity. It has about 250 petabytes of storage attached to it at scale, and to cool this guy, Summit, I guess it's a guy, I'm not sure of the denomination actually, it takes about 4,000 gallons of water per minute to cool the supercomputer. So we're really pleased with the engineering that we worked on for so many years and achieving these world records, if you will, for both Summit and Sierra. >> Well it's not just bragging rights either, right, Jamie? I mean, it underscores the technical competency and the challenge that you guys face I mean, you're number one and number two, that's not easy. Not easy to sustain of course, you got to do it again. >> Right, right, it's not easy. But the good thing is the design point of these systems is that we're able to take what we created here from a technology perspective around POWER9, and of course the partnership we did with NVIDIA in this case, and the software and storage. And we're able to downsize that significantly for commercial clients. So this is the world's largest artificial intelligence supercomputer, and basically we are able to take that technology that we invented in this case, 'cause they ended up being one of our first clients albeit a very large client, and use that across industries to serve the needs of artificial intelligence workloads. So I think that was one of the most significant elements of what we actually did here. >> And IBM has maintained, despite you guys selling off your microelectronics division years ago, you've maintained a lot of IP in the core processing and the design. You've also reached out certainly with OpenPOWER, for example, to folks. You mentioned NVIDIA. But having that, sort of embracing that alternative processor mode as opposed to trying to jam everything in the die. Different philosophy that IBM is taking.
>> Yeah we think that the workload specific processing is still very much in demand. Workloads are going to have different dimensions and that's what we really have focused on here. I don't think that this has really changed over the last decades of computing and so we're really focused on specialized computing purpose-built computing, if you will. Obviously using that on premise and also using that in our hybrid cloud strategies for clients that want to do that as well. >> What are some of the other cool things that you guys are working on that you can talk about. >> Well I would say last year was quite an interesting year in that from a mainframe perspective we delivered our first 19 inch form factor which allows us to fit nicely on a floor tile. Obviously allows clients to scale more effectively from a data center planning perspective. Allows us to have a cloud footprint, but with all the characteristics of security that you would normally expect in a mainframe system. But really tailored toward new workloads once again. So Linux form factor and going after the new workloads that a lot of these cloud data centers really need. One of our first and foremost focus areas continues to be security around that system and tomorrow there will be some announcements that will happen around Z security. I can't say what they are right now but you'll see that we are extending security in new ways to support more of these hybrid cloud scenarios. >> It's so funny. We were talking in one of our earlier segments talking about how the path of virtualization and trying to get lots of workloads into something and goes back to the device that could manage all workloads which was the Mainframe. So we've watched for many years system Z lots of Linux on there if you want to do some cool container, you know global Z that's an option, so it's interesting to watch while the pendulum swings in IT have happened the Z system has kept up with a lot of these innovations that have been going on in the industry. >> And you're right, one of our big focuses for the platform for Z and power of course is a container-based strategy. So we've created, you know last year we talked about secure container technology and we continue to evolve secure container technology but the idea is we want to eliminate any kind of friction from a developer's perspective. So if you want to design in a container-based environment then you're more easily able to port that technology or your applications, if you will to a Z mainframe environment if that's really what your target environment is. So that's been a huge focus. The other of course major invention that we announced at the Consumer Electronics show is our Quantum System One. And this represented an evolution of our Quantum system over the last year where we now have the world's really first self-contained universal quantum computer in a single form factor where we were able to combine the Quantum processor which is living in the dilution refrigerator. You guys remember the beautiful chandelier from last year. I think it's back this year. But this is all self-contained with it's electronics in a single form factor. And that really represents the evolution of the electronics in particular over the last year where we were able to miniaturize those electronics and get them into this differentiated form factor. >> What should people know about Quantum? 
When you see the demos, they explain it's not a binary one or zero, it could be either, a virtually infinite set of possibilities, but what should the lay person know about Quantum and try to understand? >> Well I think really the fundamental aspect of it is in today's world with traditional computers they're very powerful but they cannot solve certain problems. So when you look at areas like material science, areas like chemistry, even some financial trading scenarios, the problems can either not be solved at all or they cannot be completed in the right amount of time. Particularly in the world of financial services. But in the area of chemistry, for instance, molecular modeling. Today we can model simple molecules but we cannot model something even as complex as caffeine. We simply don't have the traditional compute capacity to do that. A quantum computer, once it comes to maturity, will allow us to solve these problems that are not solvable today, and you can think about all the things that we could do if we were able to have more sophisticated molecular modeling. All the kinds of problems we could solve probably in the world of pharmacology, material science, which affects many, many industries right? People that are developing automobiles, people that are exploring for oil. All kinds of opportunities here in this space. The technology is a little bit spooky, I guess, that's what Einstein said when he first described some of this, right? But it really represents the state of the universe, right? How the universe behaves today. It really is happening around us, but that's what quantum mechanics helps us capture, and when combined with IT technology the quantum computer can bring this to life over time. >> So one of the things that people point to is potentially a new security paradigm because Quantum can flip the way in which we do security on its head, so you got to be thinking around that as well. I know security is something that is very important to IBM's Systems division. >> Right, absolutely. So the first thing that happens when someone hears about quantum computing is they ask about quantum security. And as you can imagine there's a lot of clients here that are concerned about security. So in IBM Research we're also working on quantum-safe encryption. So you got one team working on a quantum computer, you got another team ensuring that the data will be protected from the quantum computer. So we do believe we can construct quantum-safe encryption algorithms based on lattice-based technology that will allow us to encrypt data today, and in the future when the quantum computer does reach that kind of capacity the data will be protected. So the idea is that we would start using these new algorithms far earlier than the computer could actually achieve this result, but it would mean that data created today would be quantum safe in the future. >> You're kind of in your own arms race internally. >> But it's very important. Both aspects are very important. To be able to solve these problems that we can't solve today, which is really amazing, right? And to also be able to protect our data should it be used in inappropriate ways, right? >> Now we had Ed Walsh on earlier today. Used to run the storage division. What's going on in that world? I know you've got your hands in that pie as well. What can you tell us about what's going on there?
Well I believe that Ed and the team have made some phenomenal innovations in the past year around flash NVMe technology and infusing that across product lines, state-of-the-art. The other area that I think is particularly interesting of course is their data management strategy around things like Spectrum Discover. So, today we all know that many of our clients have just huge amounts of data. I visited a client last year that, interestingly enough, had 1 million tapes, and of course we sell tapes so that's a good thing, but then how do you deal with and manage all the data that is on 1 million tapes. So one of the inventions that the team has worked on is a metadata tagging capability that they've now shipped in a product called Spectrum Discover. And that allows a client to have a better way to have a profile of their data, data governance, and understand for different use cases like data governance or compliance how do they pull back the right data and what does this data really mean to them. So have a better lexicon of their data, if you will, than what they can do in today's world. So I think that's very important technology. >> That's interesting. I would imagine that metadata could sit in Flash somewhere and then inform the serial technology to maybe find stuff faster. I mean, everybody thinks tape is slow because it's sequential. But actually if you do some interesting things with metadata you can-- >> There's all kinds of things you can do I mean it's one thing to have a data ocean if you will, but then how do you really get value out of that data over a long period of time and I think we're just the tip of the spear in understanding the use cases that we can use this technology for. >> Jamie, how does IBM manage that pipeline of innovation? I think we heard very specific examples of how the supercomputers drive HPC architectures which everybody is going to use for their AI infrastructure. Something like quantum computing is a little bit more out there. So how do you balance kind of the research through the product and what's going to be more useful to users today. >> Yeah, well, that's an interesting question. So IBM is one of the few organizations in the world really that have an applied research organization still. And Dario Gil is here this week, he manages our research organization now under Arvind Krishna. An organization like IBM Systems has a great relationship with research. Research are the folks that had people working on Quantum for decades, right? And they're the reason that we are in a position now to be able to apply this in the way that we are. The great news is that along the way we're always working on a pipeline of this next generation set of technologies and innovations. Some of them succeed and some of them don't. But without doing that we would not have things like Quantum. We would not have advanced encryption capability that we pushed all the way down into our chips. We would not have quantum-safe encryption. Things like the metadata tagging that I talked about came out of IBM research. So it's working with them on problems that we see coming down the pipe, if you will, that will affect our clients, and then working with them to make sure we get those into the product lines at the right amount of time. I would say that Quantum is the ultimate partnership between IBM Systems and IBM research. We have one team in this case that are working jointly on this product.
Bringing the skills to bear that each of us have on this case with them having the quantum physics experts and us having the electronics experts and of course the software stacks spanning both organizations is really a great partnership. >> Is there anything you could tell us about what's going on at the edge. The edge computing you hear a lot about that today. IBM's got some activities going on there? You haven't made huge splashes there but anything going on in research that you can share with us, or any directions. >> Well I believe the edge is going to be a practical endeavor for us and what I mean by that is there are certain use cases that I think we can serve very well. So if we look at the edge as perhaps a factory environment, we are seeing opportunities for our storaging compute solutions around the data management out in some of these areas. If you look at the self-driving automobile for instance, just design something like that can easily take over a hundred petabytes of data. So being able to manage the data at the edge, being able to then to provide insight appropriately using AI technologies is something we think we can do and we see that. I own factories based on what I do and I'm starting to use AI technology. I use Power AI technology in my factories for visual inspection. Think about a lot of the challenges around provenance of parts as well as making sure that they're finally put together in the right way. Using these kind of technologies in factories is just really an easy use case that we can see. And so what we anticipate is we will work with the other parts of IBM that are focused on edge as well and understand which areas we think our technology can best serve. >> That's interesting you mention visual inspection. That's an analog use case which now you're transforming into digital. >> Yeah well Power AI vision has been very successful in the last year . So we had this power AI package of open source software that we pulled together but we drastically simplified the use of this software, if you will the ability to use it deploy it and we've added vision capability to it in the last year. And there's many use cases for this vision capability. If you think about even the case where you have a patient that is in an MRI. If you're able to decrease the amount of time they stay in the MRI in some cases by less fidelity of the picture but then you've got to be able to interpret it. So this kind of AI and then extensions of AI to vision is really important. Another example for Power AI vision is we're actually seeing use cases in advertising so the use case of maybe you're at a sporting event or even a busy place like this where you're able to use visual inspection techniques to understand the use of certain products. In the case of a sporting event it's how many times did my logo show up in this sporting event, right? Particularly our favorite one is Formula One which we usually feature the Formula One folks here a little bit at the events. So you can see how that kind of technology can be used to help advertisers understand the benefits in these cases. >> Got it. Well Jamie we always love having you on because you have visibility into so many different areas. Really thank you for coming and sharing a little taste of what's to come. Appreciate it. >> Well thank you. It's always good to see you and I know it will be an exciting week here. >> Yeah, we're very excited. Day zero here, day one and we're kicking off four days of coverage with theCube. Jamie Thomas of IBM. 
I'm Dave Vellante, he's Stu Miniman. We'll be right back right after this short break from IBM Think in Moscone. (upbeat music)
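Jamie's description of a qubit holding more than a binary one or zero is easier to see with a few lines of code. The sketch below is a minimal, illustrative example using IBM's open source Qiskit library, not something shown in the interview; the imports match the Qiskit releases of roughly this era, and newer releases have reorganized them. It prepares two entangled qubits and samples them on a local simulator, so roughly half the shots come back 00 and half 11.

```python
# Illustrative sketch only: a two-qubit entangled (Bell) state in Qiskit,
# run on a local simulator rather than real quantum hardware.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into an equal superposition of 0 and 1
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits out into classical bits

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                # expect roughly {'00': ~512, '11': ~512}
```

The point is not the specific gates but that the program manipulates amplitudes over both qubits at once rather than definite bit values, which is the "not a binary one or zero" idea from the conversation.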
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Jamie Thomas | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Jamie | PERSON | 0.99+ |
Einstein | PERSON | 0.99+ |
Dario Gil | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Last year | DATE | 0.99+ |
last year | DATE | 0.99+ |
Today | DATE | 0.99+ |
Stu | PERSON | 0.99+ |
200 petaflops | QUANTITY | 0.99+ |
IBM Systems | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Invidia | ORGANIZATION | 0.99+ |
1 million tapes | QUANTITY | 0.99+ |
Moscone | LOCATION | 0.99+ |
Oakridge | LOCATION | 0.99+ |
tomorrow | DATE | 0.99+ |
each | QUANTITY | 0.99+ |
one team | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
a year ago | DATE | 0.99+ |
Both aspects | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
Over 9000 CPUs | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
day one | QUANTITY | 0.98+ |
both organizations | QUANTITY | 0.98+ |
about 27,000 GPUs | QUANTITY | 0.98+ |
first 19 inch | QUANTITY | 0.98+ |
Summit | ORGANIZATION | 0.98+ |
Linux | TITLE | 0.97+ |
about 250 petabytes | QUANTITY | 0.97+ |
past year | DATE | 0.97+ |
Day zero | QUANTITY | 0.96+ |
over a hundred petabytes | QUANTITY | 0.96+ |
Moscone North | LOCATION | 0.95+ |
Sierra | ORGANIZATION | 0.95+ |
single form factor | QUANTITY | 0.95+ |
Moscone Center | LOCATION | 0.94+ |
first clients | QUANTITY | 0.93+ |
decades | QUANTITY | 0.93+ |
this week | DATE | 0.93+ |
One | QUANTITY | 0.93+ |
Ed Bausch | PERSON | 0.93+ |
Phillip Adams, National Ignition Facility | Splunk .conf18
>> Narrator: Live from Orlando, Florida, it's theCUBE covering .conf18. Brought to you by Splunk. >> Welcome back to Orlando, everybody, of course home of Disney World. I'm Dave Vellante with Stu Miniman. We're here covering Splunk's Conf18, #conf, sorry, #splunkconf18, I've been fumbling that all week, Stu. Maybe by day two I'll have it down. But this is theCUBE, the leader in live tech coverage. Phillip Adams is here, he's the CTO and lead architect for the National Ignition Facility. Thanks for coming on. >> Thanks for having me. >> Super-interesting off-camera conversation. You guys are basically responsible for keeping the country's nuclear arsenal functional and secure. Is that right? >> Phillip: And effective. >> And effective. So talk about your mission and your role. >> So the mission of the National Ignition Facility is to provide data to scientists of how matter behaves under high pressures and high temperatures. And so what we do is basically take 192 laser beams of the world's largest laser in a facility about the size of three football fields and run that through into a target the size of a B.B. that's filled with deuterium and tritium. And that implosion that we get, we have diagnostics around that facility that collect what's going on for that experiment and that data goes off to the scientists. >> Wow, okay. And what do they do with it? They model it? I mean that's real data, but then they use it to model real-world nuclear stores? >> Some time back if you actually look on Google Earth and you look over Nevada you'll see a lot of craters in the desert. And we aren't able to do underground nuclear testing anymore, so this replaces that. And it allows us to be able to capture, by having a small burning plasma in a lab you can either simulate what happens when you detonate a nuclear warhead, you can find out what happens, if you're an astrophysicist, understand what happens from the birth of a star to full supernova. You can understand what happens to materials as they get subjected to, you know, 100 million degrees. (laughs) >> Dave: For real? >> Phillip: For real. >> Well, so now some countries, North Korea in particular, up until recently were still doing underground testing. >> Correct. >> Are you able to, I don't know, in some way, shape or form, monitor that? Or maybe there's intelligence that you can't talk about, but do you learn from those? Or do you already know what's going on there because you've been through it decades ago? >> There are groups at the lab that know things about things but I'm not at liberty to talk about that. (laughs) >> Dave: (chuckles) I love that answer. >> Stu: Okay. >> Go ahead, Stu. >> Maybe you could talk a little bit about the importance of data. Your group's part of Lawrence Livermore Labs. I've loved geeking out in my career to talk to your team, really smart people, you know, some sizeable budgets and, you know, build, you know, supercomputers and the like. So, you know, how important is data and, you know, how's the role of data been changing the last few years? >> So, data's very critical to what we do. That whole facility is designed about getting data out. And there are two aspects of data for us. There's data that goes to the scientists and there's data about the facility itself. And it's just amazing the tremendous amount of information that we collect about the facility in trying to keep that facility running. And we have a whole just a line out the door and around the corner of scientists trying to get time on the laser. 
And so the last thing IT wants to be is the reason why they can't get their experiment off. Some of these experimentalists are waiting up to like three, four years to get their chance to run their experiment, which could be the basis of the scientific career that they're studying for. And so, with a facility that large, 66 thousand control points, you can consider it 66 thousand IOT points, that's a lot of data. And it's amazing some days that it all works. So, you know, by being able to collect all that information into a central place we can figure out which devices are starting to misbehave, which need servicing, and make sure that the environment is functional as well as reproducible for the next experiment. >> Yeah, well, you're a case in point. When you talk about 66 thousand devices, I can't have somebody going manually checking everything. Just the power of IOT, are there predictive things that let you know if something's going to break? How do you do things like break-fix? >> So we collect a lot of data about those end-point devices. We have been collecting and looking at that data in Splunk and plotting it over time, all the way from, like, capacitors to motor movements and robot behavior that is going on in the facility. So you can then start getting trends for what average looks like and when things start deviating from norm, and send a crew of technicians that'll go in there on our maintenance days to be able to replace components. >> Phillip, what are you architecting? Is it the data model, kind of the ingest, the analyze, the dissemination, the infrastructure, the collaboration platform, all of the above? Maybe you could take us inside. >> I am the infrastructure architect, the lead infrastructure architect, so I have other architects that work with me, for database, network, sys admin, et cetera. >> Okay, and then so the data, presumably, informs what the infrastructure needs to look like, right, i.e. where the data is, is it centralized, de-centralized, how much is it, et cetera. Is that a fair assertion? >> I would say the machine defines what the architecture needs to look like. The business processes change for that, you know, in terms of like, well how do you protect and secure a SCADA environment, for example. And then for the nuances of trying to keep a machine like that continually running and separated and segregated as need be. >> Is what? >> As need be. >> Yeah, what are the technical challenges of doing that? >> Definitely, you know, one challenge is that the Department of Energy never really shares data with the public. And for, you know, it's not like NASA where you take a picture and you say, here you go, right. And so when you get sensitive information it's a way of being able to dissect that out and say, okay, well now we've got to use our community of folks that now want to come in remotely, take their data and go. So we want to make sure we do that in a secure manner and also in a way that protects scientists that are working on a particular experiment from another scientist working on their experiment. You know, we want to be able to keep swim lanes, you know, very separated and segregated. Then you get into just, you know, all of these different components, IT, the general IT environment likes to age out things every five years. But our project is, you know, looking at things on a scale of 30 years. So, you know, the challenges we deal with on a regular basis, for example, are protocols getting decommissioned.
And not all the time because, you know, the protocol change doesn't mean that you want to spend that money to redesign that IOT device anymore, especially when you might have a warehouse full of them and then back-up, yeah. >> So obviously you're trying to provide access to those who have the right to see it, like you say, swim lanes get data to the scientists. But you also have a lot of bad guys who would love to get their hands on that data. >> Phillip: That's right. >> So how do you use, I presume you use Splunk at least in part in a security context, is that right? >> Yeah, we have a pretty sharp cyber security team that's always looking at the perimeter and, you know, making sure that we're doing the right things because, you know, there are those of us that are builders and there are those that want to destroy that house of cards. So, you know, we're doing everything we can to make sure that we're keeping the nation's information safe and secure. >> So what's the culture like there? I mean, do you have to be like a PhD to work there? Do you have to have like 15 degrees, CS expert? I mean, what's it like? Is it a diverse environment? Describe it to us. >> It is a very diverse environment. You've got PhDs working with engineers, working with, you know, IT people, working with software developers. I mean, it takes an army to make a machine like this work and, you know, it takes a rigid schedule, a lot of discipline, but also, you know, I mean everybody's involved in making the mission happen. They believe in it strongly. You know, for myself I've been there 15 years. Some folks have been there working at the lab 35 years plus, so. >> All right, so you're a Splunk customer, but what brings you to .conf? You know, what do you look to get out of this? Have you been to these before? >> Ah yes, you know, so at .conf, you know, I really enjoy the interactions with other folks that have similar issues and missions to ours. And learning what they have been doing in order to address those challenges. In addition, staying very close to technology, figuring out how we can leverage the latest and greatest items in our environment, is what's going to make us not only successful but also a great payoff for the American taxpayer. >> So we heard from Doug Merritt this morning that data is messy and that what you want to be able to do is be able to organize the data when you need to. Is that how you guys are looking at this? Is your data messy? You know, this idea of schema on read. And what was life like, and you may or may not know this, kind of before Splunk and after Splunk? >> Before Splunk, you know, we spent a lot of time in traditional data warehousing. You know, we spent a lot of time trying to figure out what content we wanted to go after, ETL, and put those data sets into rows and tables, and that took a lot of time. If there was a change that needed to happen or data that wasn't on-boarded, you couldn't get the answer that you needed. And so it took a long time to actually deliver an answer about what's going on in the environment. And today, you know, one of the things that resonated with me is that we are putting data in now, throwing it in, getting it into an index and, you know, almost at the speed of thought, then being able to say, okay, even though I didn't properly on-board that data item I can do that now, I can grab that, and now I can deliver the answer.
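Phillip's before-and-after is essentially the schema-on-read argument: raw events are kept verbatim at ingest, and the schema is whatever field extraction you bring to the query later. A rough, hypothetical illustration of the idea (not Splunk's actual implementation; the event text and field names are invented):

```python
import re

# Raw events are indexed as-is; nothing is forced through an up-front ETL schema.
raw_events = [
    "2018-10-02T14:03:11Z host=shot-ctrl-07 temp_c=41.7 status=OK",
    "2018-10-02T14:03:12Z host=amp-bay-12 voltage=18.94 status=WARN",
]

def search(events, pattern):
    """Extract fields at read time: the schema lives in the query, not the ingest pipeline."""
    for event in events:
        match = re.search(pattern, event)
        if match:
            yield match.groupdict()

# A field nobody planned for at ingest time can still be pulled out later.
for row in search(raw_events, r"host=(?P<host>\S+).*status=(?P<status>\S+)"):
    print(row)
```

The point of the sketch is only that on-boarding a new data item later, as Phillip describes, costs a new extraction at query time rather than a new warehouse schema.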
>> Am I correct that, I mean we talk to a lot of practitioners, they'll tell you that when you go back a few years, their EDW, they would say, was like a snake swallowing a basketball. They were trying to get it to do things that it really just wasn't designed to do, so they would chase Intel every time Intel came up with a new chip, hey, we need that because we're starved for horsepower. At the same time big data practitioners would tell you, we didn't throw out our EDW, you know, it has its uses. But it's the right tool for the right job, horses for courses, as they say. >> Phillip: Correct. >> Is that a fair assessment? >> That is exactly where we are. We're in very much a hybrid mode where we're doing both. One thing I wanted to bring up is that the message before was always that, you know, the log data was unstructured content. And I think, you know, Splunk turned that idea on its head and basically said there is structure in log data. There is no such thing as unstructured content. And because we're able to raise that information up from all these devices in our facility and take relational data and marry that together through, like, DB Connect for example, it really changed the game for us and really allowed us to gain a lot more information and insight from our systems. >> When they talked about the enhancements coming out in 7.2 they talked about scale, performance and manageability. You've got quite a bit of scale and, you know, I'm sure performance is pretty important. How's Splunk doing? What are you looking for them to enhance down the road, maybe with some of the things they talked about in Splunk Next, that would make your job easier? >> One of the things I was really looking forward to, that I see the signs are there for, is being able to roll off buckets into the cloud. So, you know, the concept of being able to use S3 is great, you know, great news for us. You know, another thing we'd like to be able to do is store longer-lived data sets in our environment, longer time series data sets. And also annotate a little bit more, so that, you know, a scientist that sees a certain feature in there can annotate what that feature meant, so that when you have to go through the process of actually doing a machine-learning, you know, algorithm, or trying to train a data set, you know what data set you're trying to look for or what that pattern looks like. >> Why S3? Because you need a simple object store, with the GET/PUT kind of model, and S3 is sort of a de facto standard, is that right? >> Pretty much, yeah, that and also, you know, if there was a path to, let's say, Glacier, so all the frozen buckets have a place to go. Because, again, you never know how deep, how far back you'll have to go for a data set to really start looking for a trend, and that would be key. >> So are you using Glacier? >> Phillip: Not very much right now. >> Yeah, okay. >> There are certain areas where my counterparts are using AWS quite a bit. So Lawrence Livermore has a pretty big Splunk implementation out on AWS right now. >> Yeah, okay, cool. All right, well, Phillip, thank you so much for coming on theCUBE and sharing your knowledge. And last thoughts on .conf18, things you're learning, things you're excited about, anything you can talk about. >> (laughs) No, this is a great place to meet folks, to network, to also learn different techniques in order to do, you know, data analysis and, you know, it's been great to just be in this community.
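Phillip's wish-list item a moment ago, rolling aged buckets off to S3 with Glacier as a resting place for frozen data, can be approximated with a small archiving script. The sketch below is a hypothetical illustration using boto3, not Splunk's own S3 support; the paths and bucket name are invented.

```python
import os
import boto3

FROZEN_DIR = "/opt/splunk_frozen"     # hypothetical: where frozen buckets land locally
S3_BUCKET = "example-splunk-archive"  # hypothetical destination bucket

s3 = boto3.client("s3")

def archive_frozen_buckets(frozen_dir: str, bucket: str) -> None:
    """Upload everything under frozen_dir to S3, preserving the relative path as the object key."""
    for dirpath, _dirnames, filenames in os.walk(frozen_dir):
        for name in filenames:
            local_path = os.path.join(dirpath, name)
            key = os.path.relpath(local_path, frozen_dir)
            # STANDARD_IA keeps it cheap but retrievable; a bucket lifecycle rule
            # could then transition these objects to Glacier for long-term retention.
            s3.upload_file(local_path, bucket, key,
                           ExtraArgs={"StorageClass": "STANDARD_IA"})
            print(f"archived {local_path} -> s3://{bucket}/{key}")

if __name__ == "__main__":
    archive_frozen_buckets(FROZEN_DIR, S3_BUCKET)
```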
>> Dave: Great, well thanks again for coming on. I appreciate it. >> Thank you. >> All right, keep it right there, everybody. Stu and I will be right back with our next guest. We're in Orlando, day 1 of Splunk's conf18. You're watching theCUBE.
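Stepping back to the monitoring approach Phillip described, plotting readings from tens of thousands of control points over time, learning what average looks like, and flagging devices that drift before a maintenance day, the core of that check is simple rolling statistics. A minimal sketch, assuming a hypothetical CSV export of device telemetry (column names and thresholds are invented; this is not NIF's actual pipeline):

```python
import pandas as pd

# Hypothetical telemetry export: one row per reading from a control point.
df = pd.read_csv("telemetry.csv", parse_dates=["timestamp"])
df = df.sort_values("timestamp").set_index("timestamp")

def flag_deviations(series: pd.Series, window: str = "7D", n_sigmas: float = 3.0) -> pd.DataFrame:
    """Flag readings that drift more than n_sigmas away from their rolling baseline."""
    rolling = series.rolling(window)
    out = pd.DataFrame({"value": series,
                        "baseline": rolling.mean(),
                        "spread": rolling.std()})
    out["deviates"] = (out["value"] - out["baseline"]).abs() > n_sigmas * out["spread"]
    return out

# Check each device independently and collect the ones trending away from normal.
suspects = []
for device_id, readings in df.groupby("device_id"):
    result = flag_deviations(readings["value"])
    if result["deviates"].tail(100).mean() > 0.2:   # >20% of recent readings out of band
        suspects.append(device_id)

print("Devices to look at on the next maintenance day:", suspects)
```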
Stefanie Chiras, IBM | IBM Think 2018
>> Narrator: Live, from Las Vegas, it's theCUBE. Covering IBM Think 2018. Brought to you by IBM. >> Hello everyone, welcome back to theCUBE, we are here on the floor at IBM Think 2018 in theCUBE studios, live coverage from IBM Think. I'm John Furrier, the host of theCUBE, and we're here with Stefanie Chiras, who is the Vice President of Offering Management, IBM Cognitive Systems, that's Power Systems, a variety of other great stuff, real technology performance happening with Power, it's been a good strategic bet for IBM. Stefanie, great to see you again, thanks for coming back on theCUBE. >> Absolutely, I love to be on, John, thank you for inviting me. >> When we had a brief (mumbles) Bob Picciano, who's heading up Power and that group, one of the things we learned is there's a lot of stuff going on that's really going to be impacting the performance of things. Just take a minute to explain what you guys are offering in this area. Where does it fit into the IBM portfolio? What are the customer use cases? Where does that offering fit in? >> Yeah, absolutely. So I think here at Think it's been a great chance for us to see how we have really transformed. You know, we have been known in the market for AIX and IBMI. We continue to drive value in that space. We just GA'd, yesterday, our new systems based on the Power9 processor chip for AIX and IBMI and Linux. So that remains a strong strategic push. Enterprise Linux. We transformed in 2014 to embrace Linux wholeheartedly, so we really are going after the Linux base now. SAP HANA has been an incredible workload, where over a thousand customers run SAP HANA. And boy, we are going after this cognitive and AI space with our performance and our acceleration capabilities, particularly around GPUs, so things like the unique differentiation in our NVLink are driving our capabilities, with some great announcements here that we've had in the last couple of days. >> Jamie Thomas was on earlier, and she and I were talking about some of the things around really the software stack and the hardware kind of coming together. Can you just break that out? Because I know Power, we've been covering it, Doug Balog's been on many times. A lot of great growth right out of the gate. Ecosystem formed right around it. What else has happened? And separate out where the hardware innovation is and technology, and what's software, and how the ecosystem and people are adopting it. Can you just take us through that? >> Yeah, absolutely. And actually I think it's an interesting question because the ecosystem actually has happened on both sides of the fence, with both the hardware side and the software side, so OpenPOWER has grown dramatically on the hardware side. We just released our Power9 processor chip, so here is our new baby. This is the Power9. >> Hold it up. >> So this is our Power9 here, 8 billion transistors, 14 miles of wiring and 17 layers of metal, I mean it's a technology wonder. >> The props are getting so small we can't even show them on the camera. (laughing) >> This is the Moore's Law piece that Jenny was talking about in her keynote. >> That's exactly it. But what we have really done strategically is changed what gets delivered from the CPU to more what gets delivered at a system level, and so our IO capabilities: first chip to market, delivering the first systems to market with PCIe Gen 4. So we're able to connect to other things much faster. We have NVLink 2.0, which provides nearly 10x the bandwidth to transport data between this chip and a GPU.
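Stefanie's NVLink point comes down to how quickly data can move between host memory and the GPU. A rough way to see that number on whatever link a machine has, PCIe or NVLink, is to time large pinned-memory copies. This is a generic PyTorch sketch, not an IBM tool, and it assumes a CUDA-capable GPU is present:

```python
import time
import torch

def host_to_device_bandwidth(size_mb: int = 1024, repeats: int = 10) -> float:
    """Time host-to-GPU copies of a pinned buffer and return the apparent bandwidth in GB/s."""
    assert torch.cuda.is_available(), "needs a CUDA-capable GPU"
    elems = size_mb * 1024 * 1024 // 4                      # float32 elements
    host_buf = torch.empty(elems, dtype=torch.float32, pin_memory=True)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        _ = host_buf.to("cuda", non_blocking=True)          # host -> device copy
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    return (size_mb * repeats / 1024) / elapsed

if __name__ == "__main__":
    print(f"host -> device: {host_to_device_bandwidth():.1f} GB/s")
```

On a typical PCIe Gen 3 x16 host this tends to land somewhere around 10 to 12 GB/s, which is the gap the NVLink numbers Stefanie cites are meant to close.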
So Jensen was onstage yesterday from NVIDIA. He held up his chip proudly as well. The capabilities that are coming out from being able to transport data between the Power CPU and the GPU are unbelievable. >> Talk about the relationship with NVIDIA for a second, 'cause that's also, NVIDIA's stock is up a lot off of (mumbles) the bitcoin mining graphics card, but this is, again, one use case, NVIDIA's been doing very well, they're doing really well in IOT, self-driving cars, where data performance is critical. How do you guys play in that? What's the relationship with NVIDIA? >> Yeah, so it has been a great partnership with NVIDIA. When we launched, right at the end of 2013, we launched OpenPOWER, and NVIDIA was one of the five founding members with us, Google, Mellanox, and Tyan. So they clearly wanted to change the game at the systems value level. We launched into that, and we went and jointly bid with NVIDIA and Mellanox for the Department of Energy program we co-named Coral. That came to a culmination at the end of last year when we delivered the Summit and Sierra supercomputers to Oak Ridge and Lawrence Livermore. We did that with innovation from both us and NVIDIA, and that's what's driving things like this capability. And now we bring in software that exploits it. So that NVLink connection between the CPU and the GPU, we deliver software called PowerAI, we've optimized the frameworks to take advantage of that data transport between the CPU and GPU, so it makes it consumable. With all of these things it's not just about the technology, it's about, is it easy to consume at the software level? So, great announcement yesterday with the capabilities to do logistic regression. Unbelievable, taking the ability to do advertising analytics, taking it from 70 minutes to a minute and a half. >> I mean, we're going to geek out here. But let's go under the hood for a second. This is really kind of a high-end systems product, at these kinds of performance levels. Where does that connect to the go to market? Who's the buyer of it? Is it OEMs? Is it integrators? Is it new hardware devices? How do I get involved and who's the target customer? And what kind of developers are you reaching? Can you just take us through who's buying this product? >> So this is no longer relegated to the elite set. What we did, and I think this is amazing, when we delivered the Summit and Sierra, right? Huge clusters of these nodes. We took that same node, we pulled it into our product line as the AC922, and we delivered a 4 GPU air-cooled version to market. On December 22nd of last year we GA'd. And we sold to over 40 independent clients by the end of 2017, so that's a short runway. And most of it, honestly, is all driven around AI. The AI adoption, and it's across enterprises. Our goal is really to make sure that the enterprises who are looking at AI now with their developers are ready to take it into production. We offer support for the frameworks on the system so they know that when they do development on this infrastructure, they can take it to production later. So it's very much driven toward taking AI to the enterprise, and it's all over. It's insurance, it's the financial services sector. It's those kinds of enterprises that are using AI. >> So IO sensitive, right? So IOT not a target, or maybe? >> So you know, when we talk out to the edge it's a little bit different, right?
So the IOT today for us is driving a lot of data, that's coming in, and then you know at different levels-- >> There's not a lot of (mumbles) power needed at the edge. >> There is not, there is not. And it kind of scales in. We are seeing, I would say, kind of a progression of that compute moving out closer. Whether or not it's on, it doesn't all come home necessarily anymore. >> Compute is being pushed to where the data is. >> Stefanie: Absolutely right. >> That's head room for you guys. Not a priority now because there's not an intense (mumbles) compute can solve that. >> Stefanie: That's right. >> All right, so where does the Cloud fit into it? You guys powering IBM's Cloud? >> So IBM Cloud has been a great announcement this year as well. So you've seen the focus here around AI and Cloud. So we announced that HANA will come on Power into the Cloud, specializing in large memory sets, so 24 terabyte memory sets. For clients that's huge to be able to exploit that-- >> Is IBM Cloud using Power or not? >> That will be in IBM Cloud. So go to IBM Cloud, be able to deploy an SAP certified HANA on Power deployment for large memory installs, which is great. We also announced PowerAI access on Power9 technology in IBM Cloud. So we definitely are partnering both with IBM Cloud as well as with the analytics pieces. Data Science Experience available on Power. And I think it's very important, what you said earlier, John, about you want to bring the capabilities to where the data is. So, things like, a lot of clients are doing AI on prem, where we can offer a solution. You can augment that with capabilities like Watson, right? Off prem. You can also do dev ops now with AI in the IBM Cloud. So it really becomes both a deployment model, but the client needs to be able to choose how they want to do it. >> And the data can come from multiple sources. There's always going to be latencies. So what about blockchain? I want to get to blockchain. Are you guys doing anything in the blockchain ecosystem? Obviously one complaint we've been hearing, obviously, is some of these cryptocurrency chains like Ethereum have performance issues, they got projects coming out. A lot of open source in there. Is Power even puttin' their toe in the water with blockchain? >> We have put our toe in the water. Blockchain runs on Power. From an IBM portfolio perspective-- >> Like Hyperledger. Like Hyperledger will run. So open source blockchain will run on Power, but if you look at the IBM portfolio, the security capabilities that Z14 brings, and pulling that into IBM Cloud, our focus is really to be able to deliver that level of security. So we lead with system Z in that space, and Z has been incredible with blockchain. >> Z is pretty expensive to purchase, though. >> But now you can purchase it in the Cloud through IBM Cloud, which is great. >> Awesome, this is the benefit of the Cloud. Sounds like SoftLayer is moving towards more of a Z mainframe, Power backend? >> I think the IBM Cloud is broadening the capabilities that it has, because the workloads demand different things. Blockchain demands security. Now you can get that in the Cloud through Z. AI demands incredible compute strength with GPU acceleration, Power is great for that. And now a client doesn't have to choose. They can use the Cloud and get the best infrastructure for the workload they want, and IBM Cloud runs it. >> You guys have been busy. >> We've been busy.
(laughing) >> Bob Picciano's been bunkered in. You guys have been crankin' out... love to do a deeper dive on this, Stefanie, and so we'd love to follow up with you guys, and we told Bob we would dig into that, too. Question I have for you now is, how do you talk about this group that you're building together? You know, the names are all internal IBM names, Power... Is it like a group? Do you guys call yourself like the modern infrastructure group? Is it like, what is it called, if you had to explain it to outside IBM, AIs easy, I know what AI team does. You're kind of doing AI. You're enabling AI. Are you a modern infrastructure? What is the pillar are you under? >> Yeah, so we sit under IBM systems, and we are definitely systems proud, right? Everything runs on infrastructure somewhere. And then within that three spaces you certainly have Z storage, and we empower, since we've set our sites on AI and cognitive workloads, internally we're called IBM Cognitive Systems. And I think that's really two things, both a focus on the workloads and differentiation we want to bring to clients, but also the fact that it's not just about the hardware, we're now doing software with things like PowerAI software, optimized for our hardware. There's magic that happens when the software and the hardware are co-optimized. >> Well if you look, I mean systems proud, I love that conversation because you look at the systems revolution that I grew up in, the computer science generation of the 80s, that was the open movement, BSD, pre-Linux, and then now everything about the Cloud and what's going on with AI and what I call the innovation sandwich with data in the middle and blockchain and AI as bread. >> Stefanie: Yep. >> You have all the perfect elements of automation, you know, Cloud. That's all going to be powered by a system. >> Absolutely. >> Especially operating systems skills are super imprtant. >> Super important. Super important. >> This is the foundational elements. >> Absolutely, and I think your point on open, that has really come in and changed how quickly this innovation is happening, but completely agree, right? And we'll see more fit for purpose types of things, as you mentioned. More fit for purpose. Where the infrastructure and the OS are driving huge value at a workload level, and that's what the client needs. >> You know, what dev ops proved with the Cloud movement was you can have programmable infrastructure. And what we're seeing with blockchain and decentralized web and AI, is that the real value, intellectual property, is going to be the business logic. That is going to be dealing with now a whole 'nother layer of programmability. It used to be the other way around. The technology determined >> That's right. >> the core decision, so the risk was technology purchase. Now that this risk is business model decision, how do you code your business? >> And it's very challenging for any business because the efficiency happens when those decisions get made jointly together. That's when real business efficiency. If you make one decision on one side of the line or the other side of the line only, you're losing efficiency that can be driven. >> And open is big because you have consensus algorithms, you got regulatory issues, the more data you're exposed to, and more horsepower that you have, this is the future, perfect storm. >> Perfect storm. >> Stefanie, thanks for coming on theCUBE, >> It's exciting. >> Great to see you. >> Oh my pleasure John, great to see you. >> You're awesome. 
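Earlier in the conversation Stefanie cited a logistic-regression workload dropping from 70 minutes to about a minute and a half once the training moved onto the GPU. PowerAI ships its own optimized libraries for that; the sketch below is not that software, just a generic PyTorch illustration of GPU-resident logistic regression, with synthetic data standing in for a real advertising-analytics set:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Synthetic stand-in for a wide advertising-analytics data set.
n_samples, n_features = 200_000, 200
X = torch.randn(n_samples, n_features, device=device)
true_w = torch.randn(n_features, device=device)
y = (X @ true_w + 0.1 * torch.randn(n_samples, device=device) > 0).float()

# Logistic regression is just a single linear layer trained with a sigmoid cross-entropy loss.
model = torch.nn.Linear(n_features, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.BCEWithLogitsLoss()

for _ in range(50):
    optimizer.zero_grad()
    logits = model(X).squeeze(1)
    loss = loss_fn(logits, y)
    loss.backward()        # gradients are computed on the GPU when one is present
    optimizer.step()

print(f"final training loss on {device}: {loss.item():.4f}")
```

The speedup behind the numbers Stefanie quotes comes largely from keeping the data and the gradient math resident on the accelerator instead of shuttling batches back and forth over a slow link.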
Systems proud here in theCUBE, we're sharing all the systems data here at IBM Think. I'm John Furrier, more live coverage after this short break. All right.