Search Results for Los Alamos:

Mohan Rokkam & Greg Gibby | 4th Gen AMD EPYC on Dell PowerEdge: Virtualization


 

(cheerful music) >> Welcome to theCUBE's continuing coverage of AMD's 4th Generation EPYC launch. I'm Dave Nicholson, and I'm here in our Palo Alto studios talking to Greg Gibby, senior product manager, data center products from AMD, and Mohan Rokkam, technical marketing engineer at Dell. Welcome, gentlemen. >> Mohan: Hello, hello. >> Greg: Thank you. Glad to be here. >> Good to see each of you. Just really quickly, I want to start out. Let us know a little bit about yourselves. Mohan, let's start with you. What do you do at Dell exactly? >> So I'm a technical marketing engineer at Dell. I've been with Dell for around 15 years now and my goal is to really look at the Dell powered servers and see how do customers take advantage of some of the features we have, especially with the AMD EPYC processors that have just come out. >> Greg, and what do you do at AMD? >> Yeah, so I manage our software-defined infrastructure solutions team, and really it's a cradle to grave where we work with the ISVs in the market, so VMware, Nutanix, Microsoft, et cetera, to integrate the features that we're putting into our processors and make sure they're ready to go and enabled. And then we work with our valued partners like Dell on putting those into actual solutions that customers can buy and then we work with them to sell those solutions into the market. >> Before we get into the details on the 4th Generation EPYC launch and what that means and why people should care. Mohan, maybe you can tell us a little about the relationship between Dell and AMD, how that works, and then Greg, if you've got commentary on that afterwards, that'd be great. Yeah, Mohan. >> Absolutely. Dell and AMD have a long standing partnership, right? Especially now with EPYC series. We have had products since EPYC first generation. We have been doing solutions across the whole range of Dell ecosystem. We have integrated AMD quite thoroughly and effectively and we really love how performant these systems are. So, yeah. >> Dave: Greg, what are your thoughts? >> Yeah, I would say the other thing too is, is that we need to point out is that we both have really strong relationships across the entire ecosystem. So memory vendors, the software providers, et cetera, we have technical relationships. We're working with them to optimize solutions so that ultimately when the customer buys that, they get a great user experience right out of the box. >> So, Mohan, I know that you and your team do a lot of performance validation testing as time goes by. I suspect that you had early releases of the 4th Gen EPYC processor technology. What have you been seeing so far? What can you tell us? >> AMD has definitely knocked it out of the park. Time and again, in the past four generations, in the past five years alone, we have done some database work where in five years, we have seen five exit performance. And across the board, AMD is the leader in benchmarks. We have done virtualization where we would consolidate from five into one system. We have world records in AI, we have world records in databases, we have world records in virtualization. The AMD EPYC solutions has been absolutely performant. I'll leave you with one number here. When we went from top of Stack Milan to top of Stack Genoa, we saw a performance bump of 120%. And that number just blew my mind. >> So that prompts a question for Greg. Often we, in industry insiders, think in terms of performance gains over the last generation or the current generation. 
A lot of customers in the real world, however, are N - 2. They're a ways back, so I guess two points on that. First of all, the kinds of increases the average person is going to see when they move to this architecture, correct me if I'm wrong, but it's even more significant than a lot of the headline numbers because they're moving two generations, number one. Correct me if I'm wrong on that, but then the other thing is the question to you, Greg. I like very long complicated questions, as you can tell. The question is, is it okay for people to skip generations, or make the case for upgrades, I guess, is the problem? >> Well, yeah, so a couple thoughts on that first too. Mohan talked about that 5x improvement over the generations that we've seen. The other key point with that too is that we've made significant process improvements along the way, moving from seven nanometer to now five nanometer, and that's really reducing the total amount of power, or improving the performance per watt, that customers can realize as well. And when we look at why would a customer want to upgrade, right? I want to rephrase that as, why aren't you? And there is a real cost of not upgrading. And so when you look at infrastructure, the average age of a server in the data center is over five years old. And if you look at the most popular processors that were sold in that timeframe, it's 8, 10, 12 cores. So now you've got a bunch of servers that you need in order to deliver the applications and meet your SLAs to your end users, and all those servers pull power. They require maintenance. They have the opportunity to go down, et cetera. You've got to pay licensing and service and support costs and all those. And when you look at all the costs that roll up, even though the hardware is paid for, just to keep the lights on, and not even talking about the soft costs of unplanned downtime, and, "I'm not meeting your SLAs," et cetera, it's very expensive to keep those servers running. Now, if you refresh, and now you have processors that have 32, 64, 96 cores, now you can consolidate that infrastructure and reduce your total power bill. You can reduce your CapEx, you reduce your ongoing OpEx, you improve your performance, and you improve your security profile. So it really is more cost effective to refresh than not to refresh. >> So, Mohan, what has your experience been, double clicking on this topic of consolidation? I know that we're going to talk about virtualization in some of the results that you've seen. What have you seen in that regard? Does this favor better consolidation in virtualized environments? And are you both assuring us that the ROI and TCO pencil out on these new big, bad machines? >> Greg definitely hit the nail on the head, right? We are seeing tremendous savings, really, if you're consolidating from two generations old. We went from, as I said, five to one. You're going from five full servers, probably paid off, down to one single server. That itself, if you look at licensing costs, which again, with things like VMware, does get pretty expensive. If you move to a single system, yes, we are at 32, 64, 96 cores, but if you compare to the licensing costs of 10 cores, two sockets, that's still pretty significant, right? That's one huge thing. Another thing which really drives this is security, and in today's environment, security becomes a major driving factor for upgrades.
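To put rough numbers behind the consolidation and licensing argument above, here is a small illustrative sketch. The server, socket, and core counts come from the conversation (five older two-socket, 10-core servers consolidating into a single two-socket, 96-core-per-socket system); the per-socket license price, wattage, and electricity rate are hypothetical placeholders, not Dell, AMD, or VMware figures.

```python
# Illustrative consolidation math only -- the dollar and wattage figures below
# are hypothetical placeholders, not vendor pricing.

OLD_SERVERS = 5              # older-generation fleet, per the conversation
OLD_SOCKETS = 2
OLD_CORES_PER_SOCKET = 10
NEW_SERVERS = 1              # consolidated target
NEW_SOCKETS = 2
NEW_CORES_PER_SOCKET = 96

LICENSE_PER_SOCKET = 4_000   # hypothetical per-socket virtualization license, USD/year
WATTS_PER_SERVER = 500       # hypothetical average draw per server
KWH_PRICE = 0.12             # hypothetical USD per kWh

def yearly_cost(servers: int, sockets: int) -> tuple[float, float]:
    """Return (license_cost, power_cost) per year for a fleet."""
    license_cost = servers * sockets * LICENSE_PER_SOCKET
    power_cost = servers * WATTS_PER_SERVER / 1000 * 24 * 365 * KWH_PRICE
    return license_cost, power_cost

old_lic, old_pwr = yearly_cost(OLD_SERVERS, OLD_SOCKETS)
new_lic, new_pwr = yearly_cost(NEW_SERVERS, NEW_SOCKETS)

print(f"old fleet: {OLD_SERVERS} servers, {OLD_SERVERS * OLD_SOCKETS * OLD_CORES_PER_SOCKET} cores, "
      f"${old_lic:,.0f} licenses + ${old_pwr:,.0f} power per year")
print(f"new fleet: {NEW_SERVERS} server,  {NEW_SERVERS * NEW_SOCKETS * NEW_CORES_PER_SOCKET} cores, "
      f"${new_lic:,.0f} licenses + ${new_pwr:,.0f} power per year")
```

With these placeholder inputs the per-socket license count alone drops from ten to two; the point of the sketch is the shape of the comparison, not the specific dollar amounts.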
Dell has its own setups, cyber-resilient architecture, as we call it, and that really is integrated from processor all the way up into the OS. And those are some of the features which customers really can take advantage of and help protect their ecosystems. >> So what kinds of virtualized environments did you test? >> We have done virtualization across primary codes with VMware, but the Azure Stack, we have looked at Nutanix. PowerFlex is another one within Dell. We have vSAN Ready Nodes. All of these, OpenShift, we have a broad variety of solutions from Dell and AMD really fits into almost every one of them very well. >> So where does hyper-converged infrastructure fit into this puzzle? We can think of a server as something that contains not only AMD's latest architecture but also latest PCIe bus technology and all of the faster memory, faster storage cards, faster nicks, all of that comes together. But how does that play out in Dell's hyper-converged infrastructure or HCI strategy? >> Dell is a leader in hyper-converged infrastructure. We have the very popular VxRail line, we have the PowerFlex, which is now going into the AWS ecosystem as well, Nutanix, and of course, Azure Stack. With all these, when you look at AMD, we have up to 96 cores coming in. We have PCIe Gen 5 which means you can now connect dual port, 100 and 200 gig nicks and get line rate on those so you can connect to your ecosystem. And I don't know if you've seen the news, 200, 400 gig routers and switchers are selling out. That's not slowing down. The network infrastructure is booming. If you want to look at the AI/ML side of things, the VDI side of things, accelerator cards are becoming more and more powerful, more and more popular. And of course they need that higher end data path that PCIe Gen 5 brings to the table. GDDR5 is another huge improvement in terms of performance and latencies. So when we take all this together, you talk about hyper-converged, all of them add into making sure that A, with hyper-converged, you get ease of management, but B, just 'cause you have ease of management doesn't mean you need to compromise on anything. And the AMD servers effectively are a no compromise offering that we at Dell are able to offer to our customers. >> So Greg, I've got a question a little bit from left field for you. We covered Supercompute Conference 2022. We were in Dallas a couple of weeks ago, and there was a lot of discussion of the current processor manufacturer battles, and a lot of buzz around 4th Gen EPYC being launched and what's coming over the next year. Do you have any thoughts on what this architecture can deliver for us in terms of things like AI? We talk about virtualization, but if you look out over the next year, do you see this kind of architecture driving significant change in the world? >> Yeah, yeah, yeah, yeah. It has the real potential to do that from just the building blocks. So we have our chiplet architecture we call it. So you have an IO die and then you have your core complexes that go around that. And we integrate it all with our infinity fabric. That architecture allows you, if we wanted to, replace some of those CCDs with specific accelerators. And so when we look two, three, four years down the road, that architecture and that capability already built into what we're delivering and can easily be moved in. 
We just need to make sure that when you look at doing that, that the power that's required to do that and the software, et cetera, and those accelerators actually deliver better performance as a dedicated engine versus just using standard CPUs. The other things that I would say too is if you look at emerging workloads. So data center modernization is one of the buzzwords in cloud native, right? And these container environments, well, AMD'S architecture really just screams support for those type of environments, right? Where when you get into these larger core accounts and the consolidation that Mohan talked about. Now when I'm in a container environment, that blast radius so a lot of customers have concerns around, "Hey, having a single point of failure and having more than X number of cores concerns me." If I'm in containers, that becomes less of a concern. And so when you look at cloud native, containerized applications, data center modernization, AMD's extremely well positioned to take advantage of those use cases as well. >> Yeah, Mohan, and when we talk about virtualization, I think sometimes we have to remind everyone that yeah, we're talking about not only virtualization that has a full-blown operating system in the bucket, but also virtualization where the containers have microservices and things like that. I think you had something to add, Mohan. >> I did, and I think going back to the accelerator side of business, right? When we are looking at the current technology and looking at accelerators, AMD has done a fantastic job of adding in features like AVX-512, we have the bfloat16 and eight features. And some of what these do is they're effectively built-in accelerators for certain workloads especially in the AI and media spaces. And in some of these use cases we look at, for example, are inference. Traditionally we have used external accelerator cards, but for some of the entry level and mid-level use cases, CPU is going to work just fine especially with the newer CPUs that we are seeing this fantastic performance from. The accelerators just help get us to the point where if I'm at the edge, if I'm in certain use cases, I don't need to have an accelerator in there. I can run most of my inference workloads right on the CPU. >> Yeah, yeah. You know the game. It's an endless chase to find the bottleneck. And once we've solved the puzzle, we've created a bottleneck somewhere else. Back to the supercompute conversations we had, specifically about some of the AMD EPYC processor technology and the way that Dell is packaging it up and leveraging things like connectivity. That was one of the things that was also highlighted. This idea that increasingly connectivity is critically important, not just for supercomputing, but for high-performance computing that's finding its way out of the realms of Los Alamos and down to the enterprise level. Gentlemen, any more thoughts about the partnership or maybe a hint at what's coming in the future? I know that the original AMD announcement was announcing and previewing some things that are rolling out over the next several months. So let me just toss it to Greg. What are we going to see in 2023 in terms of rollouts that you can share with us? >> That I can share with you? Yeah, so I think look forward to see more advancements in the technology at the core level. I think we've already announced our product code name Bergamo, where we'll have up to 128 cores per socket. 
And then as we look in, how do we continually address this demand for data, this demand for, I need actionable insights immediately, look for us to continue to drive performance leadership in our products that are coming out and address specific workloads and accelerators where appropriate and where we see a growing market. >> Mohan, final thoughts. >> On the Dell side, of course, we have four very rich and configurable options with AMD EPYC servers. But beyond that, you'll see a lot more solutions. Some of what Greg has been talking about around the next generation of processors or the next updated processors, you'll start seeing some of those. and you'll definitely see more use cases from us and how customers can implement them and take advantage of the features that. It's just exciting stuff. >> Exciting stuff indeed. Gentlemen, we have a great year ahead of us. As we approach possibly the holiday seasons, I wish both of you well. Thank you for joining us. From here in the Palo Alto studios, again, Dave Nicholson here. Stay tuned for our continuing coverage of AMD's 4th Generation EPYC launch. Thanks for joining us. (cheerful music)
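A practical footnote on the CPU-side inference point Mohan raises: whether a host can use features like AVX-512, VNNI, or bfloat16 shows up in the CPU flags the kernel exposes. The sketch below is a rough check that reads /proc/cpuinfo on Linux; the exact flag names (avx512f, avx512_vnni, avx512_bf16) vary by CPU generation and kernel, so treat it as a quick look rather than an authoritative capability test.

```python
# Rough check for CPU-side inference features on a Linux host.
# Flag names vary by kernel and CPU generation; adjust the list as needed.

def cpu_flags(path: str = "/proc/cpuinfo") -> set[str]:
    """Return the instruction-set flags reported for the first CPU."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx2", "avx512f", "avx512_vnni", "avx512_bf16"):
    print(f"{feature:12s} {'yes' if feature in flags else 'no'}")
```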

Published Date : Dec 14 2022


Armando Acosta, Dell Technologies and Matt Leininger, Lawrence Livermore National Laboratory


 

(upbeat music) >> We are back, approaching the finish line here at Supercomputing 22, our last interview of the day, our last interview of the show. And I have to say Dave Nicholson, my co-host, My name is Paul Gillin. I've been attending trade shows for 40 years Dave, I've never been to one like this. The type of people who are here, the type of problems they're solving, what they talk about, the trade shows are typically, they're so speeds and feeds. They're so financial, they're so ROI, they all sound the same after a while. This is truly a different event. Do you get that sense? >> A hundred percent. Now, I've been attending trade shows for 10 years since I was 19, in other words, so I don't have necessarily your depth. No, but seriously, Paul, totally, completely, completely different than any other conference. First of all, there's the absolute allure of looking at the latest and greatest, coolest stuff. I mean, when you have NASA lecturing on things when you have Lawrence Livermore Labs that we're going to be talking to here in a second it's a completely different story. You have all of the academics you have students who are in competition and also interviewing with organizations. It's phenomenal. I've had chills a lot this week. >> And I guess our last two guests sort of represent that cross section. Armando Acosta, director of HPC Solutions, High Performance Solutions at Dell. And Matt Leininger, who is the HPC Strategist at Lawrence Livermore National Laboratory. Now, there is perhaps, I don't know you can correct me on this, but perhaps no institution in the world that uses more computing cycles than Lawrence Livermore National Laboratory and is always on the leading edge of what's going on in Supercomputing. And so we want to talk to both of you about that. Thank you. Thank you for joining us today. >> Sure, glad to be here. >> For having us. >> Let's start with you, Armando. Well, let's talk about the juxtaposition of the two of you. I would not have thought of LLNL as being a Dell reference account in the past. Tell us about the background of your relationship and what you're providing to the laboratory. >> Yeah, so we're really excited to be working with Lawrence Livermore, working with Matt. But actually this process started about two years ago. So we started looking at essentially what was coming down the pipeline. You know, what were the customer requirements. What did we need in order to make Matt successful. And so the beauty of this project is that we've been talking about this for two years, and now it's finally coming to fruition. And now we're actually delivering systems and delivering racks of systems. But what I really appreciate is Matt coming to us, us working together for two years and really trying to understand what are the requirements, what's the schedule, what do we need to hit in order to make them successful >> At Lawrence Livermore, what drives your computing requirements I guess? You're working on some very, very big problems but a lot of very complex problems. How do you decide what you need to procure to address them? >> Well, that's a difficult challenge. I mean, our mission is a national security mission dealing with making sure that we do our part to provide the high performance computing capabilities to the US Department of Energy's National Nuclear Security Administration. We do that through the Advanced Simulation computing program. 
Its goal is to provide that computing power to make sure that the US nuclear rep of the stockpile is safe, secure, and effective. So how we go about doing that? There's a lot of work involved. We have multiple platform lines that we accomplish that goal with. One of them is the advanced technology systems. Those are the ones you've heard about a lot, they're pushing towards exit scale, the GPU technologies incorporated into those. We also have a second line, a platform line, called the Commodity Technology Systems. That's where right now we're partnering with Dell on the latest generation of those. Those systems are a little more conservative, they're right now CPU only driven but they're also intended to be the everyday work horses. So those are the first systems our users get on. It's very easy for them to get their applications up and running. They're the first things they use usually on a day to day basis. They run a lot of small to medium size jobs that you need to do to figure out how to most effectively use what workloads you need to move to the even larger systems to accomplish our mission goals. >> The workhorses. >> Yeah. >> What have you seen here these last few days of the show, what excites you? What are the most interesting things you've seen? >> There's all kinds of things that are interesting. Probably most interesting ones I can't talk about in public, unfortunately, 'cause of NDA agreements, of course. But it's always exciting to be here at Supercomputing. It's always exciting to see the products that we've been working with industry and co-designing with them on for, you know, several years before the public actually sees them. That's always an exciting part of the conference as well specifically with CTS-2, it's exciting. As was mentioned before, I've been working with Dell for nearly two years on this, but the systems first started being delivered this past August. And so we're just taking the initial deliveries of those. We've deployed, you know, roughly about 1600 nodes now but that'll ramp up to over 6,000 nodes over the next three or four months. >> So how does this work intersect with Sandia and Los Alamos? Explain to us the relationship there. >> Right, so those three laboratories are the laboratories under the National Nuclear Security Administration. We partner together on CTS. So the architectures, as you were asking, how do we define these things, it's the labs coming together. Those three laboratories we define what we need for that architecture. We have a joint procurement that is run out of Livermore but then the systems are deployed at all three laboratories. And then they serve the programs that I mentioned for each laboratory as well. >> I've worked in this space for a very long time you know I've worked with agencies where the closest I got to anything they were actually doing was the sort of guest suite outside the secure area. And sometimes there are challenges when you're communicating, it's like you have a partner like Dell who has all of these things to offer, all of these ideas. You have requirements, but maybe you can't share 100% of what you need to do. How do you navigate that? Who makes the decision about what can be revealed in these conversations? You talk about NDA in terms of what's been shared with you, you may be limited in terms of what you can share with vendors. Does that cause inefficiency? >> To some degree. 
I mean, we do a good job within the NSA of understanding what our applications need and then mapping that to technical requirements that we can talk about with vendors. We also have kind of in between that we've done this for many years. A recent example is of course with the exit scale computing program and some things it's doing creating proxy apps or mini apps that are smaller versions of some of the things that we are important to us. Some application areas are important to us, hydrodynamics, material science, things like that. And so we can collaborate with vendors on those proxy apps to co-design systems and tweak the architectures. In fact, we've done a little bit that with CTS-2, not as much in CTS as maybe in the ATS platforms but that kind of general idea of how we collaborate through these proxy applications is something we've used across platforms. >> Now is Dell one of your co-design partners? >> In CTS-2 absolutely, yep. >> And how, what aspects of CTS-2 are you working on with Dell? >> Well, the architecture itself was the first, you know thing we worked with them on, we had a procurement come out, you know they bid an architecture on that. We had worked with them, you know but previously on our requirements, understanding what our requirements are. But that architecture today is based on the fourth generation Intel Xeon that you've heard a lot about at the conference. We are one of the first customers to get those systems in. All the systems are interconnected together with the Cornell Network's Omni-Path Network that we've used before and are very excited about as well. And we build up from there. The systems get integrated in by the operations teams at the laboratory. They get integrated into our production computing environment. Dell is really responsible, you know for designing these systems and delivering to the laboratories. The laboratories then work with Dell. We have a software stack that we provide on top of that called TOSS, for Tri-Lab Operating System. It's based on Redhead Enterprise Linux. But the goal there is that it allows us, a common user environment, a common simulation environment across not only CTS-2, but maybe older systems we have and even the larger systems that we'll be deploying as well. So from a user perspective they see a common user interface, a common environment across all the different platforms that they use at Livermore and the other laboratories. >> And Armando, what does Dell get out of the co-design arrangement with the lab? >> Well, we get to make sure that they're successful. But the other big thing that we want to do, is typically when you think about Dell and HPC, a lot of people don't make that connection together. And so what we're trying to do is make sure that, you know they know that, hey, whether you're a work group customer at the smallest end or a super computer customer at the highest end, Dell wants to make sure that we have the right setup portfolio to match any needs across this. But what we were really excited about this, this is kind of our, you know big CTS-2 first thing we've done together. And so, you know, hopefully this has been successful. We've made Matt happy and we look forward to the future what we can do with bigger and bigger things. >> So will the labs be okay with Dell coming up with a marketing campaign that said something like, "We can't confirm that alien technology is being reverse engineered." >> Yeah, that would fly. >> I mean that would be right, right? 
And I have to ask you the question directly, and the way you can answer it is by smiling like you're thinking, what a stupid question. Are you reverse engineering alien technology at the labs? >> Yeah, you'd have to ask the PR office. >> Okay, okay. (all laughing) >> Good answer. >> No, but it is fascinating, because to a degree it's like you could say, yeah, we're working together, but if you really want to dig into it, it's like, "Well, I kind of can't tell you exactly how some of this stuff is." Do you consider anything that you do from a technology perspective, not what you're doing with it, but the actual stack, do you try to design proprietary things into the stack, or do you say, "No, no, no, we're going to go with standards, and then what we do with it is proprietary and secret."? >> Yeah, it's more the latter.
And we started doing that but we needed a way to have a version of Linux that we could partner with someone on that would do, you know, the support, you know, just like you get from an EoS vendor, right? Security support and other things. But then layer on top of that, all the HPC stuff you need either to run the system, to set up the system, to support our user base. And that evolved into to TOSS which is the Tri-Lab Operating System. Now it's based on the latest version of Redhead Enterprise Linux, as I mentioned before, with all the other HPC magic, so to speak and all that HPC magic is open source things. It's not stuff, it may be things that we develop but it's nothing closed source. So all that's there we run it across all these different environments as I mentioned before. And it really originated back in the early days of, you know, Beowulf clusters, Linux clusters, as just needing something that we can use to run on multiple systems and start creating that common environment at Livermore and then eventually the other laboratories. >> How is a company like Dell, able to benefit from the open source work that's coming out of the labs? >> Well, when you look at the open source, I mean open source is good for everybody, right? Because if you make a open source tool available then people start essentially using that tool. And so if we can make that open source tool more robust and get more people using it, it gets more enterprise ready. And so with that, you know, we're all about open source we're all about standards and really about raising all boats 'cause that's what open source is all about. >> And with that, we are out of time. This is our 28th interview of SC22 and you're taking us out on a high note. Armando Acosta, director of HPC Solutions at Dell. Matt Leininger, HPC Strategist, Lawrence Livermore National Laboratories. Great discussion. Hopefully it was a good show for you. Fascinating show for us and thanks for being with us today. >> Thank you very much. >> Thank you for having us >> Dave it's been a pleasure. >> Absolutely. >> Hope we'll be back next year. >> Can't believe, went by fast. Absolutely at SC23. >> We hope you'll be back next year. This is Paul Gillin. That's a wrap, with Dave Nicholson for theCUBE. See here in next time. (soft upbear music)
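One way to picture the proxy apps, or mini apps, Matt describes is as small, shareable kernels that stand in for sensitive production codes when co-designing with vendors. The sketch below is only a toy illustration of that idea, a one-dimensional heat-diffusion stencil in plain Python; it is not one of the actual NNSA or ECP proxy applications, which are separate, published, and far more involved.

```python
# Toy stand-in for the "proxy app" idea: a 1-D explicit heat-diffusion stencil.
# Real proxy apps (hydrodynamics, material science, etc.) are much larger codes.

def diffuse(u, alpha=0.1, steps=1000):
    """Advance a 1-D temperature field with a simple 3-point stencil."""
    u = list(u)
    for _ in range(steps):
        nxt = u[:]  # copy so boundary values stay fixed
        for i in range(1, len(u) - 1):
            nxt[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        u = nxt
    return u

# Hot spot in the middle of a cold bar; the peak spreads out over time.
field = [0.0] * 64
field[32] = 100.0
print(max(diffuse(field, steps=200)))  # peak temperature after diffusion
```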

Published Date : Nov 17 2022


theCUBE Previews Supercomputing 22


 

(inspirational music) >> The history of high performance computing is unique and storied. You know, it's generally accepted that the first true supercomputer was shipped in the mid 1960s by Controlled Data Corporations, CDC, designed by an engineering team led by Seymour Cray, the father of Supercomputing. He left CDC in the 70's to start his own company, of course, carrying his own name. Now that company Cray, became the market leader in the 70's and the 80's, and then the decade of the 80's saw attempts to bring new designs, such as massively parallel systems, to reach new heights of performance and efficiency. Supercomputing design was one of the most challenging fields, and a number of really brilliant engineers became kind of quasi-famous in their little industry. In addition to Cray himself, Steve Chen, who worked for Cray, then went out to start his own companies. Danny Hillis, of Thinking Machines. Steve Frank of Kendall Square Research. Steve Wallach tried to build a mini supercomputer at Convex. These new entrants, they all failed, for the most part because the market at the time just wasn't really large enough and the economics of these systems really weren't that attractive. Now, the late 80's and the 90's saw big Japanese companies like NEC and Fujitsu entering the fray and governments around the world began to invest heavily in these systems to solve societal problems and make their nations more competitive. And as we entered the 21st century, we saw the coming of petascale computing, with China actually cracking the top 100 list of high performance computing. And today, we're now entering the exascale era, with systems that can complete a billion, billion calculations per second, or 10 to the 18th power. Astounding. And today, the high performance computing market generates north of $30 billion annually and is growing in the high single digits. Supercomputers solve the world's hardest problems in things like simulation, life sciences, weather, energy exploration, aerospace, astronomy, automotive industries, and many other high value examples. And supercomputers are expensive. You know, the highest performing supercomputers used to cost tens of millions of dollars, maybe $30 million. And we've seen that steadily rise to over $200 million. And today we're even seeing systems that cost more than half a billion dollars, even into the low billions when you include all the surrounding data center infrastructure and cooling required. The US, China, Japan, and EU countries, as well as the UK, are all investing heavily to keep their countries competitive, and no price seems to be too high. Now, there are five mega trends going on in HPC today, in addition to this massive rising cost that we just talked about. One, systems are becoming more distributed and less monolithic. The second is the power of these systems is increasing dramatically, both in terms of processor performance and energy consumption. The x86 today dominates processor shipments, it's going to probably continue to do so. Power has some presence, but ARM is growing very rapidly. Nvidia with GPUs is becoming a major player with AI coming in, we'll talk about that in a minute. And both the EU and China are developing their own processors. We're seeing massive densities with hundreds of thousands of cores that are being liquid-cooled with novel phase change technology. 
The third big trend is AI, which of course is still in the early stages, but it's being combined with ever larger and massive, massive data sets to attack new problems and accelerate research in dozens of industries. Now, the fourth big trend, HPC in the cloud reached critical mass at the end of the last decade. And all of the major hyperscalers are providing HPE, HPC as a service capability. Now finally, quantum computing is often talked about and predicted to become more stable by the end of the decade and crack new dimensions in computing. The EU has even announced a hybrid QC, with the goal of having a stable system in the second half of this decade, most likely around 2027, 2028. Welcome to theCUBE's preview of SC22, the big supercomputing show which takes place the week of November 13th in Dallas. theCUBE is going to be there. Dave Nicholson will be one of the co-hosts and joins me now to talk about trends in HPC and what to look for at the show. Dave, welcome, good to see you. >> Hey, good to see you too, Dave. >> Oh, you heard my narrative up front Dave. You got a technical background, CTO chops, what did I miss? What are the major trends that you're seeing? >> I don't think you really- You didn't miss anything, I think it's just a question of double-clicking on some of the things that you brought up. You know, if you look back historically, supercomputing was sort of relegated to things like weather prediction and nuclear weapons modeling. And these systems would live in places like Lawrence Livermore Labs or Los Alamos. Today, that requirement for cutting edge, leading edge, highest performing supercompute technology is bleeding into the enterprise, driven by AI and ML, artificial intelligence and machine learning. So when we think about the conversations we're going to have and the coverage we're going to do of the SC22 event, a lot of it is going to be looking under the covers and seeing what kind of architectural things contribute to these capabilities moving forward, and asking a whole bunch of questions. >> Yeah, so there's this sort of theory that the world is moving toward this connectivity beyond compute-centricity to connectivity-centric. We've talked about that, you and I, in the past. Is that a factor in the HPC world? How is it impacting, you know, supercomputing design? >> Well, so if you're designing an island that is, you know, tip of this spear, doesn't have to offer any level of interoperability or compatibility with anything else in the compute world, then connectivity is important simply from a speeds and feeds perspective. You know, lowest latency connectivity between nodes and things like that. But as we sort of democratize supercomputing, to a degree, as it moves from solely the purview of academia into truly ubiquitous architecture leverage by enterprises, you start asking the question, "Hey, wouldn't it be kind of cool if we could have this hooked up into our ethernet networks?" And so, that's a whole interesting subject to explore because with things like RDMA over converged ethernet, you now have the ability to have these supercomputing capabilities directly accessible by enterprise computing. So that level of detail, opening up the box of looking at the Nix, or the storage cards that are in the box, is actually critically important. And as an old-school hardware knuckle-dragger myself, I am super excited to see what the cutting edge holds right now. >> Yeah, when you look at the SC22 website, I mean, they're covering all kinds of different areas. 
They got, you know, parallel clustered systems, AI, storage, you know, servers, system software, application software, security. I mean, wireless HPC is no longer this niche. It really touches virtually every industry, and most industries anyway, and is really driving new advancements in society and research, solving some of the world's hardest problems. So what are some of the topics that you want to cover at SC22? >> Well, I kind of, I touched on some of them. I really want to ask people questions about this idea of HPC moving from just academia into the enterprise. And the question of, does that mean that there are architectural concerns that people have that might not be the same as the concerns that someone in academia or in a lab environment would have? And by the way, just like, little historical context, I can't help it. I just went through the upgrade from iPhone 12 to iPhone 14. This has got one terabyte of storage in it. One terabyte of storage. In 1997, I helped build a one terabyte NAS system that a government defense contractor purchased for almost $2 million. $2 million! This was, I don't even know, it was $9.99 a month extra on my cell phone bill. We had a team of seven people who were going to manage that one terabyte of storage. So, similarly, when we talk about just where are we from a supercompute resource perspective, if you consider it historically, it's absolutely insane. I'm going to be asking people about, of course, what's going on today, but also the near future. You know, what can we expect? What is the sort of singularity that needs to occur where natural language processing across all of the world's languages exists in a perfect way? You know, do we have the compute power now? What's the interface between software and hardware? But really, this is going to be an opportunity that is a little bit unique in terms of the things that we typically cover, because this is a lot about cracking open the box, the server box, and looking at what's inside and carefully considering all of the components. >> You know, Dave, I'm looking at the exhibitor floor. It's like, everybody is here. NASA, Microsoft, IBM, Dell, Intel, HPE, AWS, all the hyperscale guys, Weka IO, Pure Storage, companies I've never heard of. It's just, hundreds and hundreds of exhibitors, Nvidia, Oracle, Penguin Solutions, I mean, just on and on and on. Google, of course, has a presence there, theCUBE has a major presence. We got a 20 x 20 booth. So, it's really, as I say, to your point, HPC is going mainstream. You know, I think a lot of times, we think of HPC supercomputing as this just sort of, off in the eclectic, far off corner, but it really, when you think about big data, when you think about AI, a lot of the advancements that occur in HPC will trickle through and go mainstream in commercial environments. And I suspect that's why there are so many companies here that are really relevant to the commercial market as well. >> Yeah, this is like the Formula 1 of computing. So if you're a Motorsports nerd, you know that F1 is the pinnacle of the sport. SC22, this is where everybody wants to be. Another little historical reference that comes to mind, there was a time in, I think, the early 2000's when Unisys partnered with Intel and Microsoft to come up with, I think it was the ES7000, which was supposed to be the mainframe, the sort of Intel mainframe. It was an early attempt to use... And I don't say this in a derogatory way, commodity resources to create something really, really powerful. 
Here we are 20 years later, and we are absolutely smack in the middle of that. You mentioned the focus on x86 architecture, but all of the other components that the silicon manufacturers bring to bear, companies like Broadcom, Nvidia, et al, they're all contributing components to this mix in addition to, of course, the microprocessor folks like AMD and Intel and others. So yeah, this is big-time nerd fest. Lots of academics will still be there. The supercomputing.org, this loose affiliation that's been running these SC events for years. They have a major focus, major hooks into academia. They're bringing in legit computer scientists to this event. This is all cutting edge stuff. >> Yeah. So like you said, it's going to be kind of, a lot of techies there, very technical computing, of course, audience. At the same time, we expect that there's going to be a fair amount, as they say, of crossover. And so, I'm excited to see what the coverage looks like. Yourself, John Furrier, Savannah, I think even Paul Gillin is going to attend the show, because I believe we're going to be there three days. So, you know, we're doing a lot of editorial. Dell is an anchor sponsor, so we really appreciate them providing funding so we can have this community event and bring people on. So, if you are interested- >> Dave, Dave, I just have- Just something on that point. I think that's indicative of where this world is moving when you have Dell so directly involved in something like this, it's an indication that this is moving out of just the realm of academia and moving in the direction of enterprise. Because as we know, they tend to ruthlessly drive down the cost of things. And so I think that's an interesting indication right there. >> Yeah, as do the cloud guys. So again, this is mainstream. So if you're interested, if you got something interesting to talk about, if you have market research, you're an analyst, you're an influencer in this community, you've got technical chops, maybe you've got an interesting startup, you can contact David, david.nicholson@siliconangle.com. John Furrier is john@siliconangle.com. david.vellante@siliconangle.com. I'd be happy to listen to your pitch and see if we can fit you onto the program. So, really excited. It's the week of November 13th. I think November 13th is a Sunday, so I believe David will be broadcasting Tuesday, Wednesday, Thursday. Really excited. Give you the last word here, Dave. >> No, I just, I'm not embarrassed to admit that I'm really, really excited about this. It's cutting edge stuff and I'm really going to be exploring this question of where does it fit in the world of AI and ML? I think that's really going to be the center of what I'm really seeking to understand when I'm there. >> All right, Dave Nicholson. Thanks for your time. theCUBE at SC22. Don't miss it. Go to thecube.net, go to siliconangle.com for all the news. This is Dave Vellante for theCUBE and for Dave Nicholson. Thanks for watching. And we'll see you in Dallas. (inquisitive music)

Published Date : Oct 25 2022


Keynote | Red Hat Summit 2019 | DAY 2 Morning


 

>> Ladies and gentlemen, please welcome Red Hat President of Products and Technologies, Paul Cormier. >> Welcome back to Boston. Welcome back, and welcome back after a great night last night of our opening with Jim, and talking to Satya and Ginni, and especially our customers. It was so great last night to hear our customers talk about how they set their goals and how they met their goals, all possible certainly with a little help from Red Hat, but all possible because of open source. And, you know, sometimes we all have to do that: set goals. And I'm going to talk this morning about what we as a company, and with the community, have set for our goals along the way. And sometimes you have to set audacious goals. It can really change the perception of what's even possible. And, you know, if I look back, I can't think of anything, at least in my lifetime, that's more important, or such a big goal, as John F. Kennedy setting the goal for the American people to go to the moon. Believe it or not, I was really only three years old when he said that, honestly. But as I grew up, I remember the passion around the whole country and the energy to make that goal a reality. So let's talk about, and compare and contrast, a little bit of where we were technically at that time, you know, to win, and even to get into, the space race. There were some really big technical challenges along the way. I mean, believe it or not, not that long ago, mathematical calculations were being shifted from brilliant people, who we trusted and could look in the eye, to a computer that was programmed, with the results mostly printed out. This is a time when the potential of computers was just really coming on the scene, and at the time, the space race revolved around an IBM 7090, which was one of the first transistor-based computers. It could perform mathematical calculations faster than even the most brilliant mathematicians. But just like today, this also came with many, many challenges. And while we had the goal and the technology to accomplish it, we needed people so dedicated to that goal that they would risk everything. And while it may seem commonplace to us today to put our trust in machines, that wasn't the case. Back in 1969, the seven individuals that made up the Mercury space crew were putting their lives in the hands of those first computers. But on Sunday, July 20th, 1969, these things all came together, the goal, the technology, and the team, and a human being walked on the moon. You know, if this was possible fifty years ago, just think about what can be accomplished today, where technology is part of our everyday lives. And with technology advancing at an ever-increasing rate, it's hard to comprehend the potential sitting right at our fingertips every single day. Everything you know about computing is continuing to change. Let's look back a bit at computing. In 1969, the IBM 7090 could process one hundred thousand floating point operations per second. Today's Xbox One, sitting in most of your living rooms, can probably process six trillion flops. That's sixty million times more powerful than the original 7090 that helped put a human being on the moon.
And at the same time that computing has drastically changed, so have the boundaries of where that computing sits and where it lives. At the time of the Apollo launch, the computing power was often a single machine. Then it moved to a single data center, and over time that grew to multiple data centers. Then, with cloud, it extended all the way out to data centers that you didn't even own or have control of. But computing now reaches far beyond any data center. This is also referred to as the edge, and you hear a lot about that. Apollo's version of the edge was the guidance system, a two-megahertz computer that weighed seventy pounds, embedded in the capsule. Today the edge is right here on my wrist. This Apple Watch weighs just a couple of ounces, and it's ten thousand times more powerful than that 7090 back in 1969. But even more impactful than computing advances, combined with the pervasive availability of it, are the changes in who and what controls them, similar to the social changes that have happened along the way. Just as we shifted from mathematicians to computers, we're now facing the same type of change with regard to operational control of our computing power. In its first forms, operational control was your team, within your control. In some cases, a single person managed everything. But as complexity grew, our teams expanded; just like the computing boundaries, system integrators and public cloud providers have become an extension of our team. But at the end of the day, it's still people that are making all the decisions. Going forward, with the progress of things like AI and software-defined everything, it's quite likely that machines will be managing machines, and in many cases that's already happening today. But while the technology at our fingertips today is so impressive, the pace of change and the complexity of the problems we aspire to solve are equally hard to comprehend, and they are all intertwined with one another, learning from each other, growing together faster and faster. We are tackling problems today on a global scale, with unthinkable complexity, beyond what any one single company or even one single country can solve alone. This is why open source is so important. This is why open source is so needed today in software. This is why open source is so needed today, even in the world, to solve other types of complex problems. And this is why open source has become the dominant development model which is driving the technology direction today: to bring together the best innovation from every corner of the planet, to fundamentally change how we solve problems. This approach and access to innovation is what has enabled open source to tackle big challenges, like building a truly open hybrid cloud. But even today it's really difficult to bridge the gap between the innovation available at all of our fingertips through open source development and the production-level capabilities that are needed to really deploy it in the enterprise and solve real-world business problems. Red Hat has been committed to open source from the very, very beginning, and to bringing it to solve enterprise-class problems, for the last seventeen-plus years.
But when we built that model to bring open source to the enterprise, we absolutely knew we couldn't do it halfway. To harness the innovation, we had to fully embrace the model. We made a decision very early on: give everything back, and we live by that every single day. We didn't do the things you hear so many do out there, where this piece is open core, or everything below the line is open and everything above the line is closed. We didn't do that. We gave everything back, everything we learned in the process of becoming an enterprise-class technology company. We gave all of that back to the community to make better and better software. This is how it works, and we've all seen the results of that, and it could only have been possible with an open source development model. We've been building on the foundation of open source's most successful project, Linux, and on the architecture of the future, hybrid cloud, and bringing them to the enterprise. This is what made Red Hat the company that we are today. Along Red Hat's journey we also had to set goals, and many of them seemed insurmountable at the time, the first of which was making Linux the enterprise standard. And while this is so accepted today, let's take a look at what it took to get there. Our first launch into the enterprise was RHEL 2.1. Yes, I know, 2.1, but we knew we couldn't release a 1.0 product, and we didn't, because we didn't want to allow any reason why any customer should look past RHEL as an option to solve their problems. Back then, we had to fight every single flavor of Unix in every single account, but we were lucky to have a few initial partners and big ISV partners that supported RHEL out of the gate. But while we had the determination, we knew we also had gaps in order to deliver on our priorities. In the early days of RHEL, I remember going to ask one of our engineers for a past RHEL build because we were having a customer issue on an older release, and I watched in horror as he rifled through a mess of CDs in his desk, magically came up with one, and said, "I found it, here it is," and told me not to worry, he thought this was the right build. At that point I knew that, despite the promise of Linux, we had a lot of work ahead of us: not only to convince the world that Linux was secure, stable, and enterprise ready, but also to make that a reality. But we did, and today this is our reality. It's all of our reality. From the enterprise data center standard to the fastest computers on the planet, Red Hat Enterprise Linux has continually risen to the challenge and has become the core foundation that many mission-critical customers run and bet their business on. And even bigger, today Linux is the foundation upon which practically every single technology initiative is built. Linux is not only the standard to build on today, it's the standard for the innovation that builds around it. That's the innovation that's driving the future as well. We started our story with RHEL 2.1, and here we are today, seventeen years later, announcing RHEL 8, as we did last night. It's specifically designed for applications to run across the open hybrid cloud.
RHEL 8 has become the best operating system from on premise all the way out to the cloud, providing that common operating model and workload foundation on which to build hybrid applications. Let's take a look at how far we've come and see this in action. >> Please welcome Red Hat global director of developer experience, Burr Sutter, with Josh Boyer, Timothy Kramer, Lars Karlitski and Brent Midwood. >> All right, we have some amazing things to show you. In just a few short moments, we actually have a lot of things to show you, and Tim and Brent will be with us momentarily; they're working out a few things in the back, because a lot of this is going to be a live demonstration of some incredible capabilities. Now, you're going to see clear innovation inside the operating system, where we worked incredibly hard to make it vastly easier for you to manage many, many machines. I want you thinking about that as we go through this process. Also, keep in mind that this is the basis, our core platform, for everything we do here at Red Hat, so it is an honor for me to be able to show it to you live on stage today. I recognize that many of you in the audience right now are hands-on systems administrators, systems architects, and systems engineers, and we know that you're under ever-growing pressure to deliver needed infrastructure resources ever faster; that is a key element of what you're thinking about every day. Well, this has been a core theme in our design decisions behind Red Hat Enterprise Linux 8, an intelligent operating system which is making it fundamentally easier for you to manage machines at scale. So we hope what you're about to see next feels like a new superpower, and that Red Hat is your force multiplier. So first, let me introduce you to Lars. He's totally my Linux guru. >> I wouldn't call myself a guru, but I guess you could say that I want to bring Linux enlightenment to more people. >> Okay, well, let's dive in and learn about RHEL 8. >> Sure, let me log in. >> Wait a second. There's Windows. >> Yeah, we built the web console into RHEL. That means that for the first time, you can log in from any device, including your phone or this standard Windows laptop. So I just go ahead and type my admin credentials here. >> Okay, so now you're putting your Linux password in over the web. >> Yeah, that might sound a bit scary at first, but of course we're using the latest security tech, TLS and CSP, and because it's the standard Linux auth on the other side, you can use everything that you're used to, like SSH keys, OTP tokens, and things like that. >> Okay, so now I see the console right here. I love the dashboard overview of the system, but what else can you tell us about this console? >> Right here you see the load of the system and some of its properties, but you can also dive into logs, everything that you're used to from the command line, or look at services. These are all the services I have running; you can start and stop them and enable them. >> Okay, I love that feature right there. So what about if I have to add a whole new application to this environment? >> Good that you're bringing that up. We built a new feature into RHEL called Application Streams, which is a way for you to install different supported versions of your app stack. I'll show you with yum on a command line.
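The web console Lars is logging into is not named on stage, but the RHEL 8 web console is based on the Cockpit project. A minimal, hedged sketch of enabling it on a RHEL 8 host, assuming a default firewalld setup, before continuing with the Application Streams walkthrough below:

    $ sudo yum install -y cockpit                     # web console packages
    $ sudo systemctl enable --now cockpit.socket      # start it and enable it at boot
    $ sudo firewall-cmd --permanent --add-service=cockpit && sudo firewall-cmd --reload
    # Then browse to https://<host>:9090 and log in with a local account,
    # just as Lars does from the Windows laptop.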
But since Windows doesn't have a proper terminal, I'll just do it in the terminal that we built into the web console. Since it's in the browser, I can even make this a bit bigger. For example, to see the application streams that we have for Postgres, I just do a module list, and I see we have 10 and 9.6, both supported. Ten is the default, and if I enable 9.6, the next time I install Postgres it will pull all the related tools from the 9.6 stream. >> Okay, so this is very cool. I see two versions of Postgres right here, with ten as the default. That is fantastic, and Application Streams is making that happen. But I'm really kind of curious: I love using Node.js and Java, so what about multiple versions of those? >> Yeah, that's exactly the idea. We want to keep up with the fast-moving ecosystems of programming languages and databases. >> Okay, but I have another key question, and I know some people are thinking it right now: what about Python? >> Yeah. In fact, on a minimal install like this, typing python gives you "command not found." You just have to type it correctly: you can install whichever one you want, two or three, whichever your application needs. >> Okay, well, I've been burned on that one before. Okay, so now I actually have a confession for all you guys right here. You guys keep this amongst yourselves; don't let Paul know. I'm actually not a Linux systems administrator. I'm an application developer, an application architect, and I recently had to go figure out how to extend a file system. This is for real. And I'm going to the Red Hat knowledge base and looking up things like pvcreate, vgextend, resize2fs, and I have to admit, that's hard. >> Right. I've opened the storage page for you right here, where you see an overview of your storage. And the console is made for people like you as well, not only for people like us Linux folks, right? Because if you're running some of these commands only some of the time, you don't remember them. So, for example, I have a file system here that's a little bit too small. Let me just grow it, like this, by dragging this slider. It calls all the commands in the background for you. >> Oh, that is incredible. Is it that simple, just drag and drop? That is fantastic. Well, I actually have another question for you. It looks like Linux systems administration is no longer a dark art involving arcane commands typed into a black terminal, like using one of those funky ergonomic keyboards, you know the ones I'm talking about, right? >> You know, a lot of people, including me and people in the audience, like that dark art, and this is not taking any of that away. It's an additional tool to bring Linux to more people. >> Okay, well, that is absolutely fantastic. Thank you so much for that, Lars. And I really love how installing everything is so much easier, including PostgreSQL and, of course, the Python that we saw right there. So now I want to change gears for a second, because I actually have another situation that I'm always dealing with, and that is that every time I want to build a new Linux system, I don't want to have to install all those things again and again; it feels like I'm doing it over and over. So, Josh, how would I create a golden image, one VM image that I can use with everything pre-baked in? >> Yeah, absolutely. We get that question all the time.
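The exact commands aren't shown on screen in the transcript; a hedged sketch of what the Application Streams and file-system steps Lars and Burr describe typically look like on RHEL 8 (the device and volume names are made up for illustration):

    # List and pick a PostgreSQL application stream
    $ sudo yum module list postgresql            # shows streams 10 (default) and 9.6
    $ sudo yum module enable postgresql:9.6      # future installs come from the 9.6 stream
    $ sudo yum install postgresql-server

    # Python is installed explicitly by version on RHEL 8
    $ sudo yum install python3

    # The "hard way" to grow a file system that the console slider automates
    $ sudo pvcreate /dev/sdb                     # hypothetical new disk
    $ sudo vgextend rhel /dev/sdb                # hypothetical volume group name
    $ sudo lvextend -r -L +10G /dev/rhel/root    # -r also resizes the file system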
So RHEL 8 includes image builder technology. Image builder is actually all of our hybrid cloud operating system image tooling, the same tools we use to build our own images, rolled up into a nice, easy-to-use system. So if I come here in the web console and I go to the image builder tab, it brings us to blueprints. Blueprints are what we use to control what actually goes into our golden image. And I heard you and Lars talking about Postgres and Python, so I went and started typing here, and it brings us to this page. If you go to the selected components, you can see I've created a blueprint that has all the Python and Postgres packages in it. The interesting thing about this is that it builds on our existing kickstart technology, but you can use it to deploy to whatever cloud you want, and it's saved so that you don't actually have to know all the various incantations for Amazon, or Azure, or Google, whatever; it's all baked in. And when you do this, you can actually see the dependencies that get brought in as well. >> Okay. Should we create one live? >> Yes, please. >> All right, cool. So if we go back to the blueprints page and we click create blueprint, let's make a developer blueprint here. We click create, and you can see here on the left-hand side I've got all of my content served up by Red Hat Satellite. We have a lot of great stuff, but we can go ahead and search. So we'll look for Postgres, and since it's a developer image, we'll add the client for some local testing. We'll come in here and add the Python bits, probably the development package. We need a compiler if we're going to actually build anything, so we'll look for GCC here. And hey, what's your favorite editor? >> Emacs, of course. >> Emacs, all right. Hey, Lars, how about you? >> I'm more of a vi person. >> Emacs and vi, all right. Well, if you want to prevent a holy war in your systems, you can actually use Satellite to filter that out, but we're going to go ahead and add them both; we're not going to fight on stage. So we just point and click in the graphical tool, and then when we're all done, we just commit our changes, and our image is ready to build. >> Okay. So this VM image we just created from that blueprint, I can now actually go out there and easily deploy it across multiple cloud providers, as well as on the hardware we have on stage right now. >> Yeah, absolutely. We can deploy on Amazon, Azure, Google, any infrastructure you're looking for, so you can really build your hybrid cloud operating system images. >> Okay. All right. >> So we just go on and click create image, and we can select our different image types here. I'm going to go ahead and create a local VM image, so maybe we can pass it around or whatever, and I just need a few moments for it to build. >> Okay. So while that's taking a few moments, I know there's another key question in the minds of the audience right now. You're probably thinking, I love what I see with Red Hat Enterprise Linux 8, but what does it take to upgrade from seven to eight? So Lars, can you show us and walk us through an upgrade? >> Sure. This is my little blog that I set up. It's powered by a small web stack, but it's still running on 7.6. So let's upgrade that. I'll jump over to my view on Satellite, and you see all my RHEL machines here, including the one I showed you the web console on before.
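Image builder also has a command-line client alongside the web console flow Josh walks through above. This is a hedged sketch, not the exact on-stage steps; the blueprint name and output type are made up:

    $ sudo yum install -y cockpit-composer composer-cli   # console plugin and CLI client
    $ sudo composer-cli blueprints push developer.toml    # upload a blueprint definition
    $ sudo composer-cli compose start developer qcow2     # build a local VM image from it
    $ sudo composer-cli compose status                    # watch the build progress
    $ sudo composer-cli compose image <build-uuid>        # download the finished image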
And there is the one with my blog, and there are a couple of others. Let me select those as well, this one and that one. I just go up here, schedule a remote job, choose the upgrade, and hit submit. I made it so that it takes a snapshot beforehand, so if anything goes wrong, we can roll back. >> Okay, okay, so now it's progressing here. >> It's progressing. Looks like it's running. >> Doing a live upgrade on stage. >> Hmm, seems like one is failing. What's going on here? Okay, let's check the pre-upgrade check. Oh yeah, that's the one I was playing around with btrfs on backstage. It detected that, and it doesn't run the upgrade, because we don't support upgrading that. >> Okay, so what I'm hearing is that the good news is we were protected from a possible failed upgrade there. So it sounds like these upgrades are perfectly safe; I can basically schedule this during a maintenance window and still get some sleep. >> Totally. That's the idea. >> Okay, fantastic. All right, so it looks like upgrades are easy and perfectly safe, and I really love what you showed us there: a point-and-click operation right from Satellite. Okay, so while we were checking out upgrades, I want to know, Josh, how are those VMs coming along? >> They went really well. You were away for so long, I got a little bored and I took some liberties. >> What do you mean? >> Well, the image built, and I decided I'm going to go ahead and deploy it here to this Intel machine on stage, so I have that up and running in the web console. I built another one on the Arm box, which is actually pretty fast, and that's up and running on that machine. And that went so well that I decided to spin up some in Amazon, so I've got a few instances here running in Amazon with the web console accessible there as well, and even more of our pre-built images up and running in Azure with the web console there. So the really cool thing about this, Burr, is that all of these images were built with image builder in a single location, controlling all the content that you want in your golden images, deployed across the hybrid cloud. >> Wow, that is fantastic. And you might think that's it, but we actually have more to show you. So thank you so much for that, Lars and Josh. That is fantastic. It looks like provisioning Red Hat Enterprise Linux 8 systems is easier than ever before, but we have more to talk to you about. And there's one thing that many of the operations professionals in this room right now know: provisioning VMs is easy, but it's really day two, day three, down the road, where those VMs require day-to-day maintenance. As a matter of fact, several of you folks right now in this audience have to manage hundreds, if not thousands, of virtual machines. I recently spoke to a gentleman who has to manage thirteen hundred servers. So how do you manage those machines at that kind of scale? It looks like Tim and Brent have worked things out in the back and have now joined us, so now I'm curious, Tim: how would we manage hundreds, if not thousands, of computers? >> Well, Burr, one human managing hundreds or even thousands of VMs is no problem, because we have Ansible automation. And by leveraging Ansible's integration into Satellite, not only can we spin up those VMs really quickly, like Josh was just doing, but we can also make ongoing maintenance of them really simple. Come on up here.
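The upgrade mechanism isn't named on stage, but RHEL 7-to-8 in-place upgrades of this kind are normally driven by the Leapp utility, which Satellite can invoke as a remote job. A hedged sketch of the same flow run by hand on a single host:

    # On the RHEL 7.6 host (exact package names depend on the attached repositories)
    $ sudo yum install leapp
    $ sudo leapp preupgrade        # dry run: reports blockers such as unsupported file systems
    $ less /var/log/leapp/leapp-report.txt
    $ sudo leapp upgrade           # stages the RHEL 8 packages
    $ sudo reboot                  # the actual upgrade happens during this reboot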
I'm going to show you here a Satellite inventory, and as Red Hat publishes patches, with that Ansible integration we can easily apply those patches across our entire fleet of machines. >> Okay, that is fantastic. So all the machines can get updated in one fell swoop. >> They sure can. And there's one thing that I want to bring to your attention today, because it's brand new, and that's cloud.redhat.com. Here at cloud.redhat.com you can view and manage your entire inventory of Red Hat Enterprise Linux, no matter where it sits: on prem, on stage, private cloud or public cloud. It's true hybrid cloud management. >> Okay, but one thing, one thing that I know is in the minds of the audience right now, and if you have to manage a large number of servers this comes up again and again: what happens when you have those critical vulnerabilities? That next zero-day CVE could be tomorrow. >> Exactly. I've actually been waiting patiently for a while for you to get to the really good stuff. So there's one more thing that I wanted to let folks know about Red Hat Enterprise Linux 8 and some features that we have there. >> Oh yeah? What is that? >> So actually, one of the key design principles of RHEL has been working with our customers over the last twenty years to integrate all the knowledge that we've gained and turn that into insights that we can use to keep our Red Hat Enterprise Linux servers running securely and efficiently. And so what we actually have here are a few things that we can take a look at to show folks what that is. >> Okay, so we basically have this new feature we're going to show people right now, and one thing I want to make sure of: is it absolutely included within the Red Hat Enterprise Linux 8 subscription? >> Yes. That's an announcement that we're making this week: this is a brand new feature that's integrated with Red Hat Enterprise Linux, and it's available to everybody that has a Red Hat Enterprise Linux subscription. >> So I believe everyone in this room right now has a RHEL subscription, so it's available to all of them. >> Absolutely, absolutely. So let's take a quick look and try this out. What we have here is a list of about six hundred rules. They're configuration, security, and performance rules, and this list is growing every single day, so customers can actually opt in to the rules that are most applicable to their enterprises. So what we're actually doing here is combining the experience and knowledge that we have with the data that our customers opt into sending us. Customers have opted in and are sending us more data every single night than they actually have in total over the last twenty years via any other mechanism. >> Now I see there are some critical findings. That's what I was talking about when it comes to CVEs and things of that nature. >> Yeah, I'm betting that those are probably some of the RHEL 7 boxes that we haven't actually upgraded quite yet, so we'll get back to that. What I'd really like to show everybody here, because everybody has access to this, is how easy it is to opt in and enable this feature for RHEL. Okay, let's do that real quick. So I've got to hop back over to Satellite here. This is the Satellite that we saw before, and I'll grab one of the hosts, and we can use the new web console feature that's part of RHEL 8, and via single sign-on I can jump right from Satellite over to the web console. So it's really, really easy.
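The patching step Tim describes is driven through Satellite's Ansible integration; the equivalent from a plain Ansible control node is a one-liner. A hedged sketch, where the inventory group name rhel_fleet is made up:

    # Ad hoc: bring every package on every host in the group up to date
    $ ansible rhel_fleet -b -m yum -a "name='*' state=latest"

    # Or the same thing as a minimal playbook, run against the fleet
    $ cat patch.yml
    - hosts: rhel_fleet
      become: true
      tasks:
        - name: Apply all available updates
          yum:
            name: '*'
            state: latest
    $ ansible-playbook -i inventory patch.yml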
And I'll grab a terminal here, and registering with Insights is really, really easy. It's one command, and what's happening right now is that the box is going to gather some data, send it up to the cloud, and within just a minute or two we're going to have some results that we can look at back on the web interface. >> I love it. So it's just a single command and you're ready to register this box right now. That is super easy. Well, that's fantastic, Brent. We started this whole series of demonstrations by telling the audience that Red Hat Enterprise Linux 8 was the easiest, most economical, and smartest operating system on the planet, period. >> And well, I think it's cute how you can go ahead and opt in on a single machine, but I'm going to show you one more thing. This is Ansible Tower. You can use Ansible Tower to manage and govern your Ansible Playbook usage across your entire organization, and with this, what I can do is, on every single VM that was spun up here today, opt in and register Insights with a single click of a button. >> Okay, I want to see that right now. I know everyone's waiting for it as well. But hey, your VM is ready, Josh. Lars? >> Yeah, mine is running a little behind. >> Yeah, Insights is a really cool feature of RHEL, and I've got it in all my images already. >> All right, I'm doing it. All right, and so as this playbook runs across the inventory, I can see the machines registering on cloud.redhat.com, ready to be managed. >> Okay, so all those on-stage VMs as well as the hybrid cloud VMs should be popping in, Postgres and all. Fantastic. >> That's awesome. Thanks, Tim. Nothing better than a Red Hat Summit speaker in the first live demo going off script. Let's go back and take a look at some of those critical issues affecting a few of our systems here. So you can see this is a particular dnsmasq issue. It's going to affect a couple of machines; we saw that in the overview, and I can actually go and get some more details about what this particular issue is. If you take a look at the right side of the screen, there's a likelihood and an impact associated with this particular issue, and what that really translates to is that there's a high level of risk to our organization from this issue, but also a low risk of change. And what that means is that it's really, really safe for us to go ahead and use Ansible to remediate it. So I'll grab the machines, we'll select those two, and we'll remediate with Ansible. I can create a new playbook. It's our maintenance window, but we'll do something along the lines of "stuff Tim broke," and that'll be our cause; we can name it whatever we want. So we'll create that playbook and take a look at it, and it's actually going to give us some details about the machines, you know, what type of reboots are going to be needed and what we need here. So we'll go ahead and execute the playbook, and what you're going to see is the output happening in real time. So this is happening from the cloud, and we're affecting machines no matter where they are. They could be on prem, they could be in a hybrid cloud, a public cloud, or a private cloud, and these things are going to be remediated very, very easily with Ansible. So it's really, really awesome. Everybody here with a Red Hat
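The single registration command Brent runs isn't legible in the transcript; on RHEL the Insights client is registered like this (a minimal sketch, assuming the system is already subscribed with subscription-manager):

    $ sudo yum install -y insights-client
    $ sudo insights-client --register     # gathers a profile and uploads it to cloud.redhat.com
    $ sudo insights-client --status       # confirm the host is registered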
Enterprise Linux subscription has access to this now, so I kind of want everybody to go try this; we really need to get this thing going and try it out right now. >> But don't go running out of the room just yet; you've got to stay here. >> Okay, Mr. Excitability. I think after this keynote, come back to the Red Hat booth, where there's an optimization section. You can come talk to our Insights engineers, and even though it's really easy to get going on your own, they can help you out and answer any questions you might have. >> So this is really the start of a new era, with an intelligent operating system and built-in intelligence. You just saw right now what Insights can do for you. Fantastic. So we're enabling systems administrators to manage Red Hat Enterprise Linux at a greater scale than ever before. I know there's a lot more we could show you, but we're totally out of time at this point, and we went a little bit sideways here at moments, but we need to get off the stage. There's one thing I want you guys to think about, all right? Do come check out the booth, like Tim just said, and also in our demos, get hands on with Red Hat Enterprise Linux 8 as well. But really, I want you to think about this: one human and a multitude of servers. And remember that one thing I asked you up front: do you feel like you got a new superpower, and Red Hat is your force multiplier? All right, well, thank you so much, Josh and Lars, Tim and Brent. Thank you, and let's get Paul back on stage. >> That went brilliantly. No, it's just, as always, amazing. I mean, as you can tell from last night, we're really, really proud of RHEL 8 coming out here at the summit, and what a great way to showcase it. Thanks so much to you, Burr. Thanks, Brent, Tim, Lars, and Josh. Just thanks again. So you've just seen this team demonstrate how impactful RHEL can be in your data center, and hopefully many of you, if not all of you, have experienced that as well. But what about supercomputers? We hear about them all the time. As I just told you a few minutes ago, Linux isn't just the foundation for enterprise and cloud computing; it's also the foundation for the fastest supercomputers in the world, and our next guest is here to tell us a lot more about that. >> Please welcome Lawrence Livermore National Laboratory HPC solution architect Robin Goldstone. >> Thank you so much, Robin. So welcome, welcome to the summit, welcome to Boston, and thank you so much for joining us. Can you tell us a bit about the goals of Lawrence Livermore National Lab and how high performance computing really works at this level? >> Sure. So Lawrence Livermore National Lab was established during the Cold War to address urgent national security needs by advancing the state of nuclear weapons science and technology, and high performance computing has always been one of our core capabilities. In fact, our very first supercomputer, a Univac 1, was ordered by Edward Teller before our lab even opened, back in 1952. Our mission has evolved since then to cover a broad range of national security challenges, but first and foremost, our job is to ensure the safety, security, and reliability of the nation's nuclear weapons stockpile. Since the US no longer performs underground nuclear testing, our ability to certify the stockpile depends heavily on science-based methods.
We rely on HPC to simulate the behavior of complex weapons systems to ensure that they can function as expected, well beyond their intended life spans. >> That's actually great. So are you really still running on that Univac? >> No, actually, we've moved on since then. So Sierra is Lawrence Livermore's latest and greatest supercomputer. It's currently the second fastest supercomputer in the world, and for the geeks in the audience, and I think there are a few of them out there, we put up some of the specs of Sierra on the screen behind me. A couple of things worth highlighting are Sierra's peak performance and its power utilization. One hundred twenty-five petaflops of peak performance is equivalent to about twenty thousand of those Xbox One Xs that you mentioned earlier, and the eleven point six megawatts of power required to operate Sierra is enough to power around eleven thousand homes. Sierra is a very large and complex system, but underneath it all, it starts out as a collection of servers running Linux, and more specifically, RHEL. >> So did Lawrence Livermore National Lab use RHEL before Sierra? >> Oh yeah, most definitely. We've been running RHEL for a very long time on what I'll call our mid-range HPC systems. These clusters, built from commodity components, are sort of the bread and butter of our computer center, and running RHEL on these systems provides us with continuity of operations and a common user environment across multiple generations of hardware, and also between Lawrence Livermore and our sister labs, Los Alamos and Sandia. Alongside these commodity clusters, though, we've always had one sort of world-class supercomputer like Sierra. Historically, these systems have been built from exotic proprietary hardware, running entirely closed source operating systems. Anytime something broke, which was often, the vendor would be on the hook to fix it. And you know, that sounds like a good model, except that what we found over time is that most of the issues we had on these systems were due either to the extreme scale or to the complexity of our workloads. Vendors seldom had a system anywhere near the size of ours, and we couldn't give them our classified codes, so their ability to reproduce our problems was pretty limited. In some cases they even sent an engineer on site to try to reproduce our problems, but even then, sometimes we wouldn't get a fix for months, or else they would just tell us they weren't going to fix the problem because we were the only ones having it. >> So for many of us, that challenge is one of the driving reasons for open source even existing. How did Sierra change things around open source for you? >> Sure. So when we developed our technical requirements for Sierra, we had an explicit requirement that we wanted to run an open source operating system, and a strong preference for RHEL. At the time, IBM was working with Red Hat to add support to RHEL for their new little-endian POWER architecture, so it was really just natural for them to bid a RHEL-based system for Sierra. Running RHEL on Sierra allows us to leverage the model that's worked so well for us for all this time on our commodity clusters: any packages that we build for x86, we can now build for POWER, as well as for our Arm architecture, using our internal build infrastructure.
And while we have a formal support relationship with IBM, we can also tap our in-house kernel developers to help debug complex problems. Our sysadmins can now work on any of our systems, including Sierra, without having to pull out their cheat sheet of obscure proprietary commands. Our users get a consistent software environment across all our systems, and if a security vulnerability comes out, we don't have to chase around getting fixes from multiple OS vendors. >> You know, you've been able to extend your foundation all the way from x86 to exascale supercomputing. We talk about giving customers, we talk about it all the time, a standard operational foundation to build upon. This is exactly what we've envisioned. So what's next for you guys? >> Right, so what's next? Sierra is just now going into production, but even so, we're already working on the contract for our next supercomputer, called El Capitan, which is scheduled to be delivered to Lawrence Livermore in the 2022 to 2023 timeframe. El Capitan is expected to be about ten times the performance of Sierra. I can't share any more details about that system right now, but we are hoping that we're going to be able to continue to build on the solid foundation that RHEL has provided us for well over a decade. >> Well, thank you so much for your support of RHEL over the years, Robin, and thank you so much for coming and telling us about it today. We can't wait to hear more about El Capitan. Thank you. Thank you very much. So now you know why we're so proud of RHEL, and why you saw confetti cannons and T-shirt cannons last night. As Burr and the team talked about in the demo, RHEL is the force multiplier for servers. We've made Linux one of the most powerful platforms in the history of platforms. But just as Linux became a viable platform with access for everyone, and RHEL became more viable every day in the enterprise, open source projects began to flourish around the operating system, and we needed to bring those projects to our enterprise customers in the form of products with the same trust models as we did with RHEL. Seeing the incredible progress of software development occurring around Linux led us to the next goal that we set for ourselves. That goal was to make hybrid cloud the default enterprise architecture. How many of you out here in the audience are SREs or sysadmins? How many out there? A lot, a lot. You are the people that are building the next generation of computing, the hybrid cloud. You know, again, just like our goals around Linux, this goal might have seemed a little daunting in the beginning, but as a community we've proved it time and time again: we are unstoppable. Let's talk a bit about what got us to the point we're at right now, and the work that, as always, we still have in front of us. We've been on a decade-long mission on this. Believe it or not, this mission was to build the capabilities needed around the Linux operating system to really make the hybrid cloud. When we saw RHEL first taking hold in the enterprise, we knew that was just the first step, because for a platform to really succeed, you need applications running on it, and to get those applications on your platform, you have to enable developers with the tools and runtimes for them to build upon.
Over the years we've closed a few, if not a lot, of those gaps, starting with the acquisition of JBoss many years ago, all the way to the new Kubernetes-native CodeReady Workspaces we launched just a few months back. We realized very early on that building a developer-friendly platform was critical to the success of Linux and open source in the enterprise. Shortly after this, the public cloud stormed onto the scene. While our first focus as a company was on premise, in customer data centers, the public cloud was really beginning to take hold. RHEL very quickly became the standard across public clouds, just as it was in the enterprise, giving customers that common operating platform to build their applications upon, and ensuring that those applications could move between locations without ever having to change their code or operating model. With this new model of the data center spread across so many environments, management had to be completely rethought and rearchitected, and given the fact that environments spanned multiple locations, really solid management became even more important. Customers deploying in hybrid architectures had to understand where their applications were running and how they were running, regardless of which infrastructure provider they were running on. We invested over the years in management right alongside the platform, from Satellite in the early days, to CloudForms, to Insights, and now Ansible. We focused on having management support the platform wherever it lives. Next came data, which is very tightly linked to applications. Enterprise-class applications tend to create tons of data, and to have a common operating platform for your applications, you need a storage solution that's just as flexible as that platform, able to run on premise just as well as in the cloud, even across multiple clouds. This led us to acquisitions like Gluster, Ceph, Permabit, and NooBaa, complementing our platform with Red Hat Storage. And even though this sounds very condensed, this was a decade's worth of investment, all in preparation for building the hybrid cloud: expanding the portfolio to cover the areas that a customer would depend on to deploy real hybrid cloud architectures, finding and amplifying the right open source projects and technologies, or filling the gaps with some of these acquisitions when that wasn't otherwise available. By 2014, our foundation had expanded, but one big challenge remained: workload portability. Virtual machine formats were fragmented across the various deployments, and higher-level frameworks such as Java EE still very much depended on a significant amount of operating system configuration. And then containers happened. Containers, despite having been in existence for a very long time, exploded onto the scene as a technology in 2014. Kubernetes followed shortly after, in 2015, allowing containers to span multiple locations, and in one fell swoop containers became the killer technology to really enable the hybrid cloud. And here we are. Hybrid is really the only practical reality and way forward for customers, and at Red Hat we've been investing in all aspects of this over the last eight-plus years to make our customers and partners successful in this model. We've worked with you, both our customers and our partners, building critical RHEL and OpenShift deployments.
We've been constantly learning about what has caused problems and what has worked well in many cases, and while we've amassed a pretty big amount of expertise to solve most any challenge in any area of that stack, it takes more than just our own learnings to build the next generation platform. Today we're also introducing OpenShift 4, which is the culmination of those learnings. This is the next generation of the application platform. This is truly a platform that has been built with our customers, and not simply just with our customers in mind. This is something that could only be possible in an open source development model, and just like RHEL is the force multiplier for servers, OpenShift is the force multiplier for data centers across the hybrid cloud, allowing customers to build thousands of containers and operate them at scale. And we've also announced Azure Red Hat OpenShift; last night Satya, on this stage, talked about that in depth. This is all about extending our goals of a common operating platform enabling applications across the hybrid cloud, regardless of whether you run it yourself or just consume it as a service. And with this flagship release, we are also introducing Operators, which are the central feature here. We talked about this work last year with the Operator Framework, and today we're not going to just show you OpenShift 4; we're going to show you Operators running at scale, Operators that will do updates and patches for you, letting you focus more of your time on running your infrastructure and running your business. We want to make all this easier and intuitive, so let's have a quick look at how we're doing just that. >> I know all of you have heard we're talking to plenty of new customers about the rollout. So, new plan: we just open it up as a service, to be launched by this summer. Look, I know this is a big ask for a not very big team. I'm open to any and all ideas. >> Please welcome back to the stage Red Hat global director of developer experience, Burr Sutter, with Jessica Forrester and Daniel McPherson. >> All right, we're ready to do some more now. Earlier we showed you Red Hat Enterprise Linux 8 running on lots of different hardware, like the hardware you see right now, and also running across multiple cloud providers. But now we're going to move to another world, of Linux containers. This is where you see OpenShift 4 and how you can manage large clusters of applications made from Linux containers across the hybrid cloud. We're going to see how software operators fundamentally empower human operators, and especially how ops and dev can work more efficiently and effectively together than ever before. Right, we have two folks on the stage right now; they represent ops and dev, and we're going to see how they wrangle an application together. Okay, so let me introduce you to Dan. Dan is totally representing all our ops folks in the audience here today; he's kind of my ops comfort person, so let's just call him Mr. Ops. So, Dan? >> Thanks, Burr. With OpenShift 4, we have a much easier time setting up and maintaining our clusters. In large part, that's because OpenShift 4 has extended management of the clusters down to the infrastructure the cluster is running on.
When you take a look at the OpenShift console, you can now see the machines that make up the cluster, where a machine represents the infrastructure underneath that Kubernetes node. OpenShift 4 now handles provisioning and deprovisioning of those machines. From there, you can dig into an OpenShift node, see how it's configured, and monitor how it's behaving. >> I'm curious, though: does this work on bare metal infrastructure as well as virtualized infrastructure? >> Yeah, that's right, Burr. Bare metal nodes, virtual machines, OpenShift 4 can now manage it all. Something else we found extremely useful about OpenShift 4 is that it now has the ability to update itself. We can see this cluster has an update available, and at the press of a button, Operators are responsible for updating the entire platform. That includes the nodes, the control plane, and even the operating system, Red Hat Enterprise Linux CoreOS. All of this is possible because the infrastructure components and their configuration are now controlled by a technology called Operators. These software operators are responsible for aligning the cluster to a desired state, and all of this makes operational management of an OpenShift cluster much simpler than ever before. >> All right, I love the fact that all of that's in one console now. You can see the full stack, right all the way down to the bare metal, right there in that one console. Fantastic. So I want to switch gears for a moment, though, and now let's talk to the dev side. Jessica here represents all our developers in the room; in fact, she manages a large team of developers here at Red Hat. More importantly, she represents our vice presidents of development, who have large teams that they have to worry about on a regular basis. So Jessica, what can you show us? >> Well, Burr, my team has hundreds of developers, and we're constantly under pressure to deliver value to our business, and frankly, we can't really wait for Dan and his ops team to provision the infrastructure and the services that we need to do our job. So we've chosen OpenShift as our platform to run our applications on, but until recently, we really struggled to find a reliable source of Kubernetes technologies that have the operational characteristics that Dan is going to actually let us install into the cluster. But now, with OperatorHub, we're really seeing that ecosystem be unlocked, and the technologies are there: the things that my team needs, like databases and message queues, tracing and monitoring. And these operators are actually responsible for complex applications like Prometheus here. They're written in a variety of languages, including Ansible. >> That is awesome. So I do see a number of options there already, and Prometheus is a great example. But how do you know that one of these operators really is mature enough and robust enough for Dan and the ops side of the house? >> Well, Burr, here we have the operator maturity model, and this is going to tell me and my team whether a particular operator is going to do a basic install, whether it's going to upgrade that application over time through different versions, or whether it goes all the way out to full auto-pilot, where it's automatically scaling and tuning the application based on the current environment. >> And that's very cool. >> So coming over to the OpenShift console, we can actually see that Dan has made the SQL Server operator available to me and my team. That's the database that we're using, SQL Server. >> That's a great example.
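The machine management and one-button cluster update Dan describes in the console also show up in the oc command line. A hedged sketch of inspecting the same things on an OpenShift 4 cluster, assuming cluster-admin access:

    $ oc get machines -n openshift-machine-api     # the Machine objects backing each node
    $ oc get nodes                                 # the Kubernetes view of the same hosts
    $ oc get clusterversion                        # current version and whether an update is available
    $ oc adm upgrade                               # list available updates; add --to=<version> to start one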
So SQL Server is running here in the cluster? >> It is, and this is a great example for a developer. What if I want to create a new SQL Server instance? It's as easy as provisioning any other service from the developer catalog. We come in and I can type in SQL Server, and what this is actually creating is a native resource called SqlServer, and you can think of that like a promise that a SQL Server will get created. The operator is going to see that resource, install the application, and then manage it over its life cycle. And from this installed operators view, I can see the operators running in my project and which resources each one is managing. >> Okay, but I'm kind of missing something here. I see this custom resource here, the SqlServer, but where are the Kubernetes resources, like pods? >> Yeah, I think it's cool that we get this native resource now called SqlServer, but if I need to, I can still come in and see the native Kubernetes resources, like the stateful set and service here. >> Okay, that is fantastic. Now, we did say earlier on, though, that like many of our customers in the audience right now, you have a large team of developers you've got to handle. You've got to have more than one SQL Server, right? >> We do, one for every team as we're developing, and we use a lot of other technologies running on OpenShift as well, including Tomcat and our Jenkins pipelines and our Node.js app that is actually going to talk to that SQL Server database. >> Okay, so at this point can we provision some of these? >> Yes. Since all of this is self-service for me and my teams, I'm actually going to go and create one of all of those things I just said, on all of our projects, right now, if you just give me a minute. >> Okay, right, so basically you're going to knock out Node.js, Jenkins, SQL Server, all right; that's like hundreds of bits of application-level infrastructure, right now, live. So, Dan, are you not terrified? >> Well, I guess I should have done a little bit better job of managing Jessica's quota, and historically Jessica and I might have had some conflict here, because creating all these new applications would have meant my team now had a massive backlog of tickets to work on. But now, because of software operators, my human operators are able to run our infrastructure at scale. So since I'm logged into the cluster here as the cluster admin, I get this view of pods across all projects, and so I get an idea of what's happening across the entire cluster. And I can see now that we have four hundred ninety-four pods already running, and there are a few more still starting up. And if I scroll through the list, we can see the different workloads Jessica just mentioned: Tomcats, and Node.js's, and Jenkinses, and SQL Servers down here too. >> You know, I see it continues creating, and you have close to five hundred pods running there. >> So, yeah, let me filter the list down by SQL Server, so we can just see those. >> Okay, but aren't you going to run into cluster capacity at some point? >> Actually, yeah, we definitely have a limited capacity in this cluster. Luckily, though, we already set up autoscalers, and so because the additional workload was launching, we see now that those autoscalers have kicked in and some new machines are being created. They don't yet have nodes on them, because they're still starting up, and there's another good view of this as well, so you can see machine sets.
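Machine sets and the autoscaling Dan mentions are ordinary API objects in OpenShift 4, so they can be inspected and configured from the CLI as well. A hedged sketch; the machine set name and replica counts here are made up, and a cluster-wide ClusterAutoscaler resource must also exist for scaling to actually occur:

    $ oc get machinesets -n openshift-machine-api          # typically one MachineSet per availability zone

    # Attach an autoscaler to one of them (names are illustrative)
    $ cat <<'EOF' | oc apply -f -
    apiVersion: autoscaling.openshift.io/v1beta1
    kind: MachineAutoscaler
    metadata:
      name: worker-us-east-1a
      namespace: openshift-machine-api
    spec:
      minReplicas: 10
      maxReplicas: 12
      scaleTargetRef:
        apiVersion: machine.openshift.io/v1beta1
        kind: MachineSet
        name: worker-us-east-1a
    EOF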
We have one machine set per availability zone, and you can see that each one is now scaling from ten to twelve machines. The way those autoscalers work is that, for each availability zone, if capacity is needed, they will add additional machines to that availability zone, and then later, if that capacity is no longer needed, they will automatically take those machines away. >> That is incredible. So right now we're auto-scaling across multiple availability zones based on load. Okay, so it looks like capacity planning and automation are fully handled at this point. But I do have another question for you. You're logged into the console as the cluster admin right now; can you show us your view of software operators? >> Actually, there are a couple of unique views here for operators for cluster admins. The first of those is OperatorHub. This is where a cluster admin gets the ability to curate the experience of which operators are available to users of the cluster, and obviously we already have the SQL Server operator installed, which we've been using. The other unique view is operator management. This gives a cluster admin the ability to maintain the operators they've already installed. So if we dig in and see the SQL Server operator, we'll see we have it set up for manual approval. What that means is that if a new update comes in for SQL Server, then a cluster admin has the ability to approve or disapprove that update before it installs into the cluster. And actually, there is an upgrade that's available. I should probably wait to install it, though; we're in the middle of scaling out this cluster, and I really don't want to disturb Jessica's application workflow. >> Yeah, so actually, Dan, it's fine. My app is already up, it's running. Let me show it to you over here. So this is our products application that's talking to that SQL Server instance, and for debugging purposes we can see which version of SQL Server we're currently talking to: it's 2.2 right now. And then which pod, since this is a cluster and there's more than one SQL Server pod we could be connected to. >> Okay, I can see it right there in the banner on the screen, 2.2; that's the version we have right now. But, you know, this is kind of the point of software operators. So, you know, everyone in this room wants to see you hit that upgrade button. Let's do it, live, here on stage, right now. >> All right, all right, I can see where this is going. So whenever you update an operator, it's just like any other resource on Kubernetes, and so the first thing that happens is that the operator pod itself gets updated. So we actually see a new version of the operator being created now, and once that gets created, the old version will be terminated. At that point, the new software operator will notice that it's now responsible for managing lots of existing SQL Servers already in the environment, and so it's then going to update each of those SQL Servers to match the new version of the SQL Server operator. And so we can see it's running, and if we switch now to the all-projects view and we filter that list down by SQL Server, then we should be able to see that lots of these SQL Servers are now being created and the old ones are being terminated. >> So it's a rolling update across the cluster? >> Exactly. The SQL Server operator deploys SQL Server in an HA configuration,
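The manual-versus-automatic approval Dan is describing is a property of the operator's OLM Subscription. A hedged CLI sketch of checking it and flipping it to automatic, as he goes on to do in the console; the namespace and subscription names are made up:

    $ oc get subscriptions -n databases          # OLM Subscriptions in the project
    $ oc get installplans -n databases           # pending InstallPlans wait for approval when set to Manual

    # Switch the hypothetical sqlserver-operator subscription to automatic updates
    $ oc patch subscription sqlserver-operator -n databases \
        --type merge -p '{"spec":{"installPlanApproval":"Automatic"}}'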
and it only updates a single instance of SQL Server at a time, which means SQL Server is always left in an HA configuration, and Jessica doesn't really have to worry about downtime for her applications. >> Yeah, that's awesome, Dan. So glad the team doesn't have to worry about that anymore. >> And Jess, I think enough of these might have run by now; if you try your app again, it might be updated. >> Let's see Jessica's application up here. All right, on laptop three. >> Here we go. >> Fantastic. And look, we were on 2.2 before, and now we're on 2.3. Excellent. >> You know, that actually works so well, I don't even see a reason for us to leave this on manual approval. So I'm going to switch this to automatic approval, and then in the future, if a new SQL Server version comes in, we don't have to do anything, and it'll all be automatically updated on the cluster. >> That is absolutely fantastic, and I'm so glad you guys got a chance to see that rolling update across the cluster. That is so cool: the SQL Server database being automated and fully updated. All right, so I can see how a software operator enables you to manage hundreds, if not thousands, of applications. I know a lot of folks are interested in the back-end infrastructure; could you give us an example of the infrastructure behind this console? >> Yeah, absolutely. We all know that OpenShift is designed to run in lots of different environments, but our teams think that Azure Red Hat OpenShift provides one of the best experiences, by deeply integrating the OpenShift resources into the Azure console. It's even integrated into the Azure command-line tool and the az openshift commands, and as was announced yesterday, it's now available for everyone to try out. And there's actually one more thing we wanted to show everyone related to OpenShift 4, which is that we now have multi-cluster management. This gives you the ability to keep track of all your OpenShift environments, regardless of where they're running, and you can create new clusters from here as well. I'll dig into the Azure cluster that we were just taking a look at. >> Okay, but is this user interface something I have to install on one of my existing clusters? >> No, actually, this is a hosted service that's provided by Red Hat as part of cloud.redhat.com, so all you have to do is log in with your Red Hat credentials to get access. >> That is incredible. So one console, one user experience, to see across the entire hybrid cloud. We saw it earlier with Red Hat Enterprise Linux and Insights, and now we see it for multi-cluster management of OpenShift. So you can fundamentally see now that software operators do finally change the game when it comes to making human operators vastly more productive and, more importantly, making dev and ops work more efficiently together than ever before. We saw the rich ecosystem of those software operators, and we can manage them across the hybrid cloud with any OpenShift instance. And more importantly, I want to thank Dan and Jessica for helping us with this demonstration. Okay, fantastic stuff, guys. Thank you so much. Let's get Paul back out here. >> Once again, thanks so much to Burr and his team, Jessica and Dan. So you've just seen how OpenShift operators can help you manage hundreds, even thousands, of applications.
Install, upgrade, remove nodes, control everything about your application environment, virtual, physical, all the way out to the cloud, making things happen when the business demands it, even at scale, because that's where it's going to get to. Our next guest has lots of experience with demand at scale, and they're using open source container management to do it. They've been building a successful cloud-first platform, and they're the 2019 Innovation Award winner. >> Please welcome 2019 Innovation Award winner, Kohl's senior vice president of technology, Rich Hodak. >> How you doing? Thanks. >> Thanks so much for coming out. We really appreciate it. So I guess you guys set some big goals too. Can you maybe tell us about the bold goal you personally helped set for Kohl's, and what inspired you to take that on? >> Yes. So it was 2017, and life was pretty good. I had no gray hair, and our business and our tech were working well, but we knew we'd have to do better into the future if we wanted to compete. Retail is being disrupted; our customers are asking for new experiences. So we set out on a goal to become an open hybrid cloud platform, and we chose Red Hat to partner with us on a lot of that. We set off on a three-year journey; we're currently in year two, and so far all KPIs are on track, so it's been a great journey thus far. >> That's awesome. That's awesome. So obviously you think open source is the way to do cloud computing, and we absolutely agree with you on that point. So what is it that's convinced you even more along the way? >> Yeah, so I think, first and foremost, we do have a lot of traditional ISVs, but we found that the open source partners are actually outpacing them with innovation, so I think that's where it starts for us. Secondly, we think there's maybe some financial upside to going more open source; we think we can take some cost out and unwind from some of the big agreements we're in. And thirdly, as we go to universities, we started hearing, as we interviewed, "Hey, what is Kohl's doing with open source?" and we wanted to use that as a lever to help recruit talent. So I'm kind of excited, you know; we partner with Red Hat on OpenShift and on RHEL and Gluster and ActiveMQ and Ansible and lots of things, but we've also now launched our first open source projects, so it's really great to see this journey we've been on. >> That's awesome, Rich. So you're in a high-touch beta with OpenShift 4. What features and capabilities are you most excited about and looking forward to with the launch, and what are maybe some new goals that you might be able to accomplish with the new features? >> Yeah. So I will tell you, we're off to a great start with OpenShift. We've been on the platform for over a year now. We won an innovation award. We have this great team of engineers out here that have done some outstanding work. But certainly there's room to continue to mature that platform at Kohl's, and we're excited about OpenShift 4. I think there are probably three things that we're really looking forward to. One is we're looking forward to a better upgrade process, and I think we saw some of that in the last demo; upgrades have been kind of painful up until now, so we think that will help us. Number two, a lot of our OpenShift workloads today, or the workloads
we run on OpenShift, are the stateless apps, right? And we're really looking forward to moving more of our stateful apps onto the platform. And then thirdly, I think that we've done a great job of automating a lot of the day one stuff, you know, the provisioning of things. There's great opportunity out there to do more automation for day two things, to integrate more with our messaging systems and our database systems and so forth. So we're excited to get on board with version 4 as well. >> So, you know, I hope we can help you get to those next goals, and we're going to continue to do that. Thank you so much, Rich. You know, all the way from RHEL to OpenShift, it's really exciting for us, frankly, to see our products helping you solve real-world problems, which is really why we do this, and it gets at both of our goals. So thank you, thank you very much, and thanks for your support. We really appreciate it. It has all been amazing so far, and we're not done. A critical part of being successful in the hybrid cloud is being successful in your data center with your own infrastructure. We've been helping our customers do that in these environments for almost twenty years now; we've been running the most complex workloads in the world. But you know, while the public cloud has opened up tremendous possibilities, it also brings in another layer of infrastructure complexity. So what's our next goal? Extend your data center all the way to the edge while being as effective as you have been over the last twenty years, when it's all at your own fingertips. First, from a practical sense, enterprises are going to have to have their own data centers in their own environments for a very long time. But there are advantages to being able to manage your own infrastructure that expand even beyond the public cloud, all the way out to the edge. In fact, we talked about that very early on, how technology advances in compute, networking, and storage are changing the physical boundaries of the data center every single day. The need to process data at the source is becoming more and more critical, and new use cases are coming up every day. Self-driving cars need to make decisions on the fly. In the car factory, processes are using AI and need to adapt in real time. The factory floor has become the new edge of the data center, working with things like video analysis of a car's paint job as it comes off the line, where a massive amount of data is only needed for seconds in order to make critical decisions in real time. If we had to wait for the video to go up to the cloud and back, it would be too late; the damage would have already been done. The enterprise is being stretched to be able to process on site, whether it's in a car, a factory, a store, or elsewhere at the edge, usually involving massive amounts of data that just can't easily be moved. Just like these use cases couldn't be solved in private cloud alone, because of things like latency on data movement to address real-time requirements, they also can't be solved in public cloud alone. This is why open hybrid is really the model that's needed, and the only model going forward. So how do you address this class of workload that requires all of the above, running at the edge, with the latest technology, at scale? Let me give you a bit of a preview of what we're working on.
We are taking our open hybrid cloud technologies to the edge, integrated with our OEM hardware partners. This is a preview of a solution that will contain Red Hat OpenShift, Ceph storage, and KVM virtualization, with Red Hat Enterprise Linux at the core, all running on pre-configured hardware. The first hardware out of the gate will be with our long-time OEM partner, Dell Technologies. So let's bring back Burr and the team to see what's right around the corner. >> Please welcome back to the stage Red Hat Global Director of Developer Experience Burr Sutter with Kareema Sharma. >> Okay. We just saw how OpenShift 4 and software operators have redefined the capabilities and usability of the open hybrid cloud, and now we're going to show you a few more things, so just be ready for that. I know many of our customers in this audience right now, as well as the customers who aren't even here today, are running tens of thousands of applications on OpenShift clusters. We know you're doing that right now, but we also know that you're not actually in the business of running Kubernetes clusters. You're in the business of oil and gas, you're in the business of retail, you're in the business of transportation, you're in some other business, and you don't really want to manage those things at all. We also know, though, that you have low latency requirements like Paul was talking about, and you also have data gravity concerns where you need to keep that data on your premises. So what you're about to see right now in this demonstration is where we've taken OpenShift 4 and made a bare metal cluster right here on this stage. This is a fully automated platform. There is no underlying hypervisor below this platform; it's OpenShift running on bare metal. And this is your Kubernetes-native infrastructure, where we brought together VMs, containers, networking, and storage. With me right now is Kareema Sharma. She's one of our engineering leaders responsible for infrastructure technologies. Please welcome to the stage, Kareema. >> Thank you. My pleasure to be here at Red Hat Summit. So let's start at cloud.redhat.com, and here we can see the cluster Dan and Jessica were working on just a few moments ago. From here we have a bird's-eye view of all of our OpenShift clusters across the hybrid cloud, from multiple cloud providers to on-premises, and notice the bare metal cluster. Well, that's the one that my team built right here on this stage. So let's go ahead and open the admin console for that cluster. Now, in this demo, we'll take a look at three things. First, a multi-cluster inventory for the open hybrid cloud at cloud.redhat.com. Second, OpenShift Container Storage, providing converged storage for virtual machines and containers, with the same functionality in the cloud and on bare metal. And third, everything we see here is Kubernetes-native, plugging directly into Kubernetes orchestration to get common storage, networking, and monitoring facilities. Last year, we saw how container-native virtualization and KubeVirt allow you to run virtual machines on Kubernetes and OpenShift, allowing for a single converged platform to manage both containers and virtual machines. So here I have this project from last year, where we had a Windows virtual machine running an ASP.NET application, and we had started to modernize and containerize it by moving parts of the application from the Windows VM to Linux containers.
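Since the point of the demo is that a virtual machine is just another Kubernetes resource, here is a minimal sketch of what that looks like from the API side. It assumes a cluster exposing KubeVirt's kubevirt.io/v1 resources (current releases do; the cluster on stage may have used an earlier alpha version), and the namespace is a hypothetical placeholder, not the project name from the demo.

```python
# Minimal sketch: list KubeVirt VirtualMachines the same way you would list pods.
# Assumes the kubevirt.io/v1 API group/version; older clusters may expose v1alpha3.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

NAMESPACE = "legacy-app"  # hypothetical project name

vms = custom.list_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace=NAMESPACE,
    plural="virtualmachines",
)
for vm in vms.get("items", []):
    name = vm["metadata"]["name"]
    state = vm.get("status", {}).get("printableStatus", "Unknown")
    print(f"{name}: {state}")
```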
So let's take a look at it here. I have it again. >> Oh, you left your Windows machine up. Earlier on I was playing this game backstage, so it's just playing a little Solitaire. Sorry about that. >> So we don't really have time for that right now, Burr. But as I was saying, over here I have Visual Studio. Now the Windows virtual machine is just another container in OpenShift, and the remote desktop service for the virtual machine is just another service in OpenShift. OpenShift running both containers and virtual machines together opens a whole new world of possibilities. But why stop there? So this is where we broaden to Kubernetes-native infrastructure. It is our vision to redefine the operations of on-premises infrastructure, and this applies to all manner of workloads, using OpenShift on metal running all the way from the data center to the edge, maybe even right by your desk. There are two main benefits: one, to help reduce operational costs, and second, to help bring advanced Kubernetes orchestration concepts to your infrastructure. So next, let's take a look at storage. OpenShift Container Storage is software-defined storage, providing the same functionality for both the public and the private clouds. By leveraging the operator framework, OpenShift Container Storage automatically detects the available hardware configuration to utilize the disks in the most optimal way. So when adding a node, you don't have to think about how to balance the storage. Storage is just another service running on OpenShift. >> And I really love this dashboard, quite honestly, because I love seeing all the storage right here. So I'm kind of curious, though, Kareema, what kind of applications would you use with this storage? >> Yeah, so this is persistent storage to be used by databases, your files, and any data from applications such as Apache Kafka. Now the Apache Kafka operator uses Kubernetes for scheduling and high availability, and it uses OpenShift Container Storage to store the messages. Here our on-premises system is running a Kafka workload streaming sensor data, and we want to store it and act on it locally, right, in a place where maybe we need low latency, or maybe in a data-lake-like situation where we don't want to send the data to the cloud. Instead, we want to act on it locally, right? Let's look at the Grafana dashboard and see how our system is doing. With an incoming message rate of about four hundred messages per second, the system seems to be performing well, right? I want to emphasize this is a fully integrated system. We're doing the testing and optimizations so that the system can auto-tune itself based on the applications.
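To make "storage is just another service" concrete, the sketch below shows the kind of PersistentVolumeClaim a Kafka broker would bind against OpenShift Container Storage. In practice the Kafka operator creates these claims from its own custom resource; this is only an illustration, and the namespace, claim name, storage class, and size are assumptions rather than values from the demo.

```python
# Rough sketch of a PVC backed by OpenShift Container Storage for a Kafka broker.
# The operator would normally create this; names and sizes are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "kafka-broker-0-data", "namespace": "sensors"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "ocs-storagecluster-ceph-rbd",  # assumed OCS class name
        "resources": {"requests": {"storage": "100Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="sensors", body=pvc)
```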
>> Okay, I love the automated operations. Now I am curious, because I know other folks in the audience want to know this too: can you tell us more about how this is truly integrated with Kubernetes? Can you give us an example of that? >> Yes. Again, I want to emphasize that everything here is managed purely by Kubernetes on OpenShift, so you can really use the latest and coolest tools to manage it all. Next, let's take a look at how easy it is to use Knative with Azure Functions to script a live reaction to a live migration event. >> Okay, Knative is a great example. If you were actually part of my breakout session yesterday, you saw me demonstrate Knative, and if you want to get hands-on with it tonight, you can come to our guru night at five PM. So I have really enjoyed using Knative myself as a software developer, but I am curious about the Azure Functions component. >> Yeah, so Azure Functions is a functions-as-a-service engine developed by Microsoft, fully open source, and it runs on top of Kubernetes, so it works really well with our on-premises OpenShift here. Right now I have a simple Azure function here, and this Azure function, let's see, will send out a tweet every time we live-migrate a Windows virtual machine. So I have it integrated with OpenShift. Let's move a node to maintenance to see what happens. >> So basically, as that VM moves, we're going to see the event triggered, and the event triggers the function. >> Yeah. An important point I want to make again here: Windows virtual machines are equal citizens inside of OpenShift. We're investing heavily in automation through the use of the operator framework and also providing integration with the hardware. Right, so next, let's move that node to maintenance. >> But let's be very clear here. I want to make sure you understand one thing, and that is there is no underlying virtualization software here. This is OpenShift running on bare metal, with these bare metal hosts. >> That is absolutely right. The system can automatically discover the bare metal hosts. All right, so here, let's move this node to maintenance. I start the maintenance now. What will happen at this point is storage will heal itself, and Kubernetes will bring back the same level of service for the Kafka application by launching a pod on another node, and the virtual machine will be live-migrated, right? This will create Kubernetes events, so we can see the events in the event stream as changes start to happen. And as a result of this migration, the Knative function will send out a tweet to confirm that Kubernetes-native infrastructure has indeed done the migration for the live VM, right? >> See the events rolling through right there? >> Yeah. All right. And if we go to Twitter? >> All right, we got tweets. Fantastic. >> And here we can see the source node reports the migration has succeeded. Pretty cool stuff, right here. Now, we want to bring you a cloud-like experience, and what this means is we're making operational ease of use a top goal. We're investing heavily in encapsulating management knowledge and working to pre-certify hardware configurations, working with our partners such as Dell and their Ready Node program, so that we can provide you guidance on specific benchmarks for specific workloads on our auto-tuning system. >> All right, well, this is so cool. I know right now you're itching, like I am, to jump on the stage and check out the bare metal cluster. But you should not, right? Wait until after the keynote is done, then come on and check it out. But also, I want you to go out there and think about visiting our partner Dell and their booth, where they have one of these clusters also. Okay, so this is where VMs, networking, containers, and storage all come together in the Kubernetes-native infrastructure you've seen right here on this stage. But Kareema, you have a bit more. >> Yes. So this is literally the cloud coming down from the heavens to us. >> Okay? Right here, right now. >> Right here, right now.
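The on-stage flow used a node maintenance operator plus Knative and Azure Functions to post the tweet; the sketch below is a deliberately simplified, hand-rolled stand-in for that idea, not the demo's actual stack. It cordons a node and then watches the Kubernetes event stream for migration-related activity; the node name is a hypothetical placeholder.

```python
# Simplified stand-in for the demo's flow: mark a node unschedulable, then watch
# the event stream for live-migration activity and react to it. The real demo
# used a maintenance operator and a Knative-triggered Azure Function instead.
from kubernetes import client, config, watch

config.load_kube_config()
core = client.CoreV1Api()

NODE = "worker-2"  # hypothetical bare metal host

# Roughly what "move to maintenance" starts with: stop scheduling onto the node.
core.patch_node(NODE, {"spec": {"unschedulable": True}})

w = watch.Watch()
for event in w.stream(core.list_event_for_all_namespaces, timeout_seconds=120):
    obj = event["object"]
    text = f"{obj.reason or ''} {obj.message or ''}"
    if "igrat" in text:  # catches Migrating / Migrated / migration
        # Here the demo posts a tweet; we just log the event instead.
        print(text)
```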
So, to close the loop, you can have your cluster connected to cloud.redhat.com for our Insights and site reliability engineering services, so that we can proactively provide you with guidance through automated analyses of telemetry and logs, and help flag a problem even before you notice you have it, be it software, hardware, performance, or security. And one more thing: I want to congratulate the engineers behind this cool technology. >> Absolutely. There are a lot of engineers here that worked on this cluster and worked on the stack. Absolutely, thank you. Really awesome stuff. And again, do go check out our partner Dell. They're just out that door; I can see them from here. They have one of these clusters. Get a chance to talk to them about how to run your OpenShift 4 on a bare metal cluster as well. Right, Kareema, thank you so much. That was totally awesome. We're out of time, and we've got to turn this back over to Paul. >> Thank you. >> Thanks again, Burr and Kareema. Awesome. You know, even with all the exciting capabilities that you're seeing, I want to take a moment to go back to the first platform tenet that we learned with RHEL: the platform has to be developer friendly. Our next guest knows something about connecting a technology like OpenShift to their developers as part of their company-wide transformation, and their ability to shift the business helped them take advantage of the innovation. They're an Innovation Award winner this year. Please, let's welcome Ed to the stage. >> Please welcome twenty nineteen Innovation Award winner, BP Vice President of Digital Transformation, Ed Alford. >> Thanks, Ed. How are you doing? Good. So let's get right into it. What are you guys trying to accomplish at BP, and how is that goal really important and mandatory within your organization? >> So we're a global energy business, with operations in over seventy countries. And we've embraced what we call the dual challenge, which is meeting the increasing demand for energy that we have as individuals in the world, while producing that energy with fewer emissions. As part of that, one of our strategic priorities is to modernize the whole group. That means simplifying our processes and enhancing productivity through digital solutions. So we're using cloud-based technologies and, more importantly, open source technologies to create a community across the whole group that collaborates effectively and efficiently and uses our data and expertise to embrace the dual challenge and actually try and help solve that problem. >> That's great. So how did these new ways of working benefit your team and really the entire organization, maybe even the company as a whole? >> So we've been given the Innovation Award for our digital conveyor, both in the way it was created and also in what it is delivering. A couple of the guys in the audience are on the team; their teams developed that conveyor using agile and DevOps. We talk about this stuff a lot, but they actually did it in a truly agile and DevOps way, and that enabled them to experiment and work in different ways, and it highlighted the skill sets that we as a group require in order to transform. Using these approaches, we can now move things from ideation to scale in weeks and days sometimes, rather than months.
And I think that if we can take what they've done and use more open source technology, we can take that technology and apply it across the whole group to tackle this dual challenge. And I think that, as technologists, it's really cool that we can now use technology, and open source technology, to solve some of these big challenges that we have and actually preserve the planet in a better way. >> So what's the next step for you guys at BP? >> So moving forward, we are embracing a cloud-first organization. We need to continue to deliver on our strategy, build out the technology across the entire group to address the dual challenge, and continue to make some of these bold changes and really use our technology, as I said, to address the dual challenge and make the future of our planet a better place for ourselves and our children and our children's children. >> That's a big goal. But thank you so much, Ed. Thanks for your support, and thanks for coming today. Thank you very much. Now comes the part that, frankly, I think is the best part of this presentation. We're going to meet the type of person that makes all of these things a reality. This type of person typically works for one of our customers, or with one of our customers as a partner, to help them reach the kinds of bold goals you've heard about today and the ones you'll hear about more throughout the week. >> I think the thing I like most about it is you feel that reward, just helping people, and helping people with stuff you enjoy, right, with computers. My dad was the math and science teacher at the local high school, and so in the early eighties that kind of made him the default computer person. So he was always bringing computer stuff home, and I started at a pretty young age. >> What Jason's been able to do here is more to evangelize a lot of the technologies between different teams. I think a lot of it comes from the training and the certifications that he's got. He's always concerned about their experience, how easy it is for them to get applications written, how easy it is for them to get them up and running at the end of the day. >> We're a loan company, you know. That's why we lean on a company like Red Hat; that's where we get our support from. That's why we decided to go with a product like OpenShift. >> I really, really like the product, so I went down the certification route and the training route to learn more about OpenShift itself. My daughter's teacher, they were doing a day of coding, and so they asked me if I wanted to come and talk about what I do and then spend the day helping the kids do their coding class. >> The people that we have on our teams, like Jason, are what make us better than our competitors, right? Anybody can buy something off the shelf. It's people like him; they're able to take that and mold it into something that then is a great offering for our partners and for customers. >> Please welcome Red Hat Certified Professional of the Year, Jason Hyatt. >> Jason, congratulations. What a big day, huh? What a really big day. You know, it's great to see the work that you've done here. But you know what's really great and shows out in your video: it's really especially rewarding to us, and I'm sure to you as well, to see how skills can open doors for young women, like your daughters, who already love technology.
So I'd like to present this to you right now. Congratulations. And I know you're going to bring this passion; I know you bring it into everything you do. >> Congratulations again. Thanks, Paul. It's been really exciting, and I was really excited to bring my family here to share the experience. >> It's really great. It's really great to see them all here as well. Maybe you guys could stand up. So before we leave the stage, you know, I just wanted to ask: what's the most important skill that you'll pass on from all your training to the future generations? >> So I think the most important thing is you have to be a continuous learner. You can't really settle; you can't be comfortable with only what you already know. You have to really be a driven, continuous learner. And, of course, you've got to put it to use, right? >> I don't even have to ask you the question. Of course. Right, of course. That's awesome. And thank you, thank you for everything that you're doing. So thanks again. You know, what makes open source work is passion, and people that apply those considerable talents and that passion, like Jason here, to making it work and to contributing their ideas back. And believe me, it's really an impressive group of people. You know, your family, and especially Berkeley in the video, I hope you know that the Red Hat Certified Professional of the Year is the best of the best, the cream of the crop, and your dad is the best of the best of that. So you should be very, very happy for that. And I also can't wait to come back here on this stage ten years from now and present that same award to you, Berkeley. So great. You should be proud. You know, everything you've heard about today is just a small representation of what's ahead of us. We've had a set of goals, and realized some bold goals, over the last number of years that have gotten us to where we are today. Just to recap those bold goals: first, to build a company based solely on open source software. It seems so logical now, but it had never been done before. Next, to build the operating system of the future that's going to run and power the enterprise, making the standard platform in the enterprise a Linux-based operating system. And after that, making hybrid cloud the architecture of the future, making hybrid the new data center, all leading to the largest software acquisition in history. Think about it: around a company with one hundred percent open source DNA. Despite all the FUD we encountered over those last seventeen years, I have to ask: is there really any question that open source has won? Realizing our bold goals and changing the way software is developed in the commercial world was what we set out to do from the first day Red Hat was born. But we only got to that goal because of you: many of you contributors, many of you new to open source software and willing to take the risk alongside us, and many of our partners on that journey, both inside and outside of Red Hat. Going forward, with the reach of IBM, Red Hat will accelerate even more. This will bring open source innovation to the next generation hybrid data center, continuing on our original mission and goal to bring open source technology to every corner of the planet.
What I just went through in the last hour, while mind-boggling to many of us in the room who have had a front row seat to this over the last seventeen-plus years, has only been Red Hat's first step. Think about it. We have brought open source development from a niche player to the dominant development model in software and beyond. Open source is now the cornerstone of the multi-billion-dollar enterprise software world, and even the next generation hybrid architecture would not be possible without Linux at the core and the open innovation that it feeds to build around it. This is not just a step forward for software. It's a huge leap in the technology world, beyond even what the original pioneers of open source ever could have imagined. We have witnessed open source accomplish in the last seventeen years more than what most people will see in their career, or maybe even a lifetime. Open source has forever changed the boundaries of what will be possible in technology in the future. And the one last thing to say, to everybody in this room and beyond, everyone outside: continue the mission. Thanks, and have a great Summit.

Published Date : May 11 2019



Sazzala Reddy, Datrium & Kevin Smith, Transcore | AWS re:Invent 2018


 

>> Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Welcome back everybody, Jeff Frick here with theCUBE. We're at AWS re:Invent 2018 at the Sands Convention Center and all over Vegas. I don't know how many people are here. We haven't gotten the official word. 60,000, 70,000, I don't know. There's a lot of people. We're excited to have our next guest, but before we get in, happy to be joined by Lauren Cooney. Lauren, great to see you, as always. >> Great to see you, as well. >> You know, one of my favorite things about doing Cube interviews is we learn about new industries that we didn't even know about. So, while we're here talking about IT, it's really about the application of IT that I think is really more interesting, more fun, and a great learning experience. So, we're really excited to have our next guest on. He is Kevin Smith, the director of MIS for Transcore. Kevin, great to see you. >> Hello. >> And many time Cube alumni, Sazzala Reddy. He is the CTO and co founder of Datrium. Sazzala, great to see you. >> Happy to be here. >> So, Kevin before we get into it, tells us a little about Transcore. What are you guys all about? >> Basically, we are the leading toll authority for kind of of Continental United States and we are trying to expand that throughout the world. We do the whole engineer all the way through manufacturing of toll systems for vehicles and cars throughout the U.S. So, the little stickers in you car all the way up to the readers that read them. They're coming through my place some how or some other. >> So, everything from the reader in the car-- >> Yup, the little sticker tag that sticks in your window or suction cups in. Wherever you are, yes you may hate us, but I'm not the one collecting the tolls. (laughs) >> I don't like it when you miss the picture. >> Well, let's input some design here. (laughs) >> Trust me, I've tried. (laughs) >> But then the huge back in process to pull that up, get it into the system, billing systems. >> Yeah, all integrated. Yep. >> And how big is the company? How long has it been around? >> We were acquired by Roper. We've been many divisions, but Los Alamos was technically, founding fathers 1954. >> 1954, so you've been around a long time >> Oh yeah, yes. They started with cows. >> RFID's on cows? >> Yes, tracking cows in the pastures of New Mexico. (laughs) >> With the little tags in their ears I imagine. Alright, great. We can talk about traffic probably all day long, but that's not why were here. That's not your day job you're not out there with the little RFID scanner. >> Not anymore, thank God. >> Let's talk about some of the challenges 'cause you know, obviously, the toll business has been around for a long time. But the automation of tolls has really changed a lot over the last five years. You probably know better than me from somebody in the booth taking my money and giving me a receipt to some places it's almost exclusively electronic. So, how's that business grown, and what have been some of the accompanying challenges have you seen that been grown? >> Part of the performance issues we were running into was the quantity. Because the man is gone from the booth, we have to produce more tags that become more readable. So, that creates more back in work, more transactions. And, in the long run, producing more tags. 
You know, we've gone to millions and millions of tags being produced, in a quarter, to where it was just hundreds of thousands. So, with that requires scalability that we can grow with our systems and our systems we had just wasn't doing it. >> So, you got the manufacturing of the tags as well, I didn't even think of the manufac- you got to make them in the first place, too. >> That is our bread and butter. Manufacturing those tags and the millions of millions of transactions that we test, because we have to test every tag that goes out the door. Every tag gets tested. >> How far away do they work, on those readers? I'm just curious. >> It depends on your speed. We've tested up to 200 miles an hour. And I think it's, like, 40-50 feet? So, as long as you're going under 200 miles an hour, we can get ya. >> Okay, so, how did you meet Sazzala in Datrium? How did that come about? >> We went looking for a product that could give us a one stop solution. We wanted something that was basically, I wanted to get out of the storage business, I wanted to get out of the management business. I didn't want to be having to worry about all these different vendors, all these different solutions. And Datrium was able to provide that. Compared to some of the other products that we were looking at, we did test with other products, and Datrium came out on top. They gave us the total package. >> Sazzala, when you looked at this oppurtunity, what did you see? Anything unique and different? What were some of the challenges that you tried to figure out how to help Kevin? >> So, what we are finding is that more and more companies, every company is a software company, every company is a data company, right? Every body wants to move faster. Everybody wants to things faster. I can't wait for my movie to start in two seconds. I'm like, Why is it taking two seconds? So, everybody wants things faster. We live in this instant economy where everything needs to be either you transform or you die. So, how do we make that transition into the speed? How do you build your data center, whatever your doing, to match that speed of innovation? Any system you're going to deploy in a data center, has to be not in the way. It has to be less management, less overhead. Look at Amazon, very successful because there is less to manage. And, you mostly manage your applications. That's what the business moral is going to be going forward. That's why people like the Cloud. Why does CIO like the Cloud? Not because it's cooler, or whatever, but because it makes things faster. It's expensive, yeah, but it makes things faster in some ways. >> Go ahead. >> I was going to say, on issue we ran into and we came to him with was our CAD designers. 'Cause we designed the product. And, the rendering was just dragging on our old systems. And, we went from two to three minutes rendering to seconds rendering new graphics. And, so, before they were like I'm not going to save it yet, I'm not going to re-render it. Now, they're re-rendering every time they're making a change. It helps in performance, it helps the application, and it helps increase the productivity of my CAD designers. >> Right. I was going to say, it was probably the customer service pretty significant, as well, so they can get the version that they want. >> Definitely, definitely. And, you know, the nice thing is is Datrium allowed us to scale. We couldn't go out and just Okay, revamp everything. You got to do baby steps. 
And Datrium gave us that scaleabilty, to where I could add anything from 1 to 128 nodes. You know, I was able to increase performance by just adding a server node, or increase the rights by adding a data node. That's the flexibilty that I needed from a vendor. >> So, when you said that Datrium had the whole package, you looked at some other solutions out there. When you were trying to find the whole package at the beginning of the process, what were the key attributes that you said I would love to get all these from one place? >> I was looking for performance and scale. Which I got. I was looking for back-up. God, I wanted to get out of the back-up business. I was tired of tapes, I was tired of third-party solutions. >> Tire of tapes? (laughs) >> Trust me. Shh, don't tell the tape vendors here. >> Tape is good, if you have the right application. >> Security, I stay awake at night. I lead our security teams. I stay awake worrying about Is my data protected? You know, with their encryption, that gave me that whole protection. And the last thing was DR. DR is adorned in every IT manager, every IT director, every, you know, CTO. And, with their whole Cloud shift, that DR? What DR, it's done. It just happens. And those four things is kind of what led us to finding Datrium. 'Cause some of them gave us one or two, but not everyone could give us all four of the options that we were looking for. >> What I love about the story is those are kind of concrete savings and doing your job easier. What your excited about is enabling your CAD designer, your kind of proactive sales process, your proactive design, your proactive innovation to actually move faster. That's not a cost saving mechanism. That's really a transformational, kind of positive revenue, side of the tale that I don't think is told enough. People focus on the cost savings and execution. That's not what it's about. It's really about innovating and growing your business faster. Do you think? >> Oh no, our ROI, that we calculated in, was just on hardware. Just on my cost savings that I could put a penny to. The time, it's so great. I mean, my CAD designers producing product faster, my developers are asking for more VMs. For me to spin up because the speed is so much faster. We're used to being Oh, don't touch it. I got this guy tuned exactly where I want it. We got the memory. But now, they're asking for more and more, and it's my in users, who are really the engineers, my manufacturing people, they're wanting more and more out of the product and Datirum is delivering. I don't go to dashboard and look to try and figure out how to tweak it anymore. I don't have any complaints. And, if I don't have any complaints, were doing something right. >> That's a good thing. >> So, it just works? >> Oh, it was beyond just works. >> Literally. >> Trust me, I was ready when we bought product to bring in a whole team and I was like, Oh, I'm going to have to hire all these people. And the guy came in and he goes, Okay, turn it on. Okay we're done. I was like, Nu-uh. He goes, Oh yeah, you have to plug that cord in back there. I was like, Wow. 'Cause, you know, usually it's-- >> I'm looking at a number right now, and it is 617% three year ROI. >> It's across many customers (mumbles) >> I totally believe you with what-- >> So we are aiming for a U.S. designer came and asked me one day, What should I aim for as a design principle? I said, We should aim for zero UI. That's what we should do. It should be transparent, it should just work. 
That's what we really aim for. I'm not saying we have zero UI today, but that's our goal. >> It's good to have goals. >> Let's just make it work automatically, right? That's kind of the goal. >> Well, and that was one thing, we wanted something integrated, so we didn't have to go looking. And, that's one thing I tell the engineers all the time. I go into the UI just to kind of see how cool the systems running. You know, because there is no issues. It just works. Everything's integrated, I don't have to go in and click and click and click and click to get through stuff. It just works and integrates well. We're a big Vmware shop, big Dell server shop. All of that, one-stop shop. I was telling Sazzala, you know, it's great when I get the e-mail that there's a problem with my Datrium system before my help desk is getting the notification. I can't buy that service. >> So, Kevin, there's a lot of peers that will be watching this show. Peers of you. Having gone through this process and now you are on the other side and you're on to some new things, in terms of innovation, what would you share with a peer whose trying to sort some of this out? It's a confusing landscape. There's so many options, and you got to do your day job, too. Besides, putting out new technology. What would you share with a peer if you're sitting down over a beverage on a Friday afternoon? >> You know, I would talk to them about having that capability, really a performance scale. Being able to not worry about controllers, not worrying about what SSDs you got to put into something to make it work. Pop 'em in. SSDs are cheap nowadays. Pop 'em in. It increases your reads. Going back to the whole no more third-party solutions for back-ups. Every SIS admin, every manager knows, back-ups are only good for restores. That's the only reason you do a back-up, is 'cause you got to do that restore. And, it becomes invisible. It's all running in the background. I don't even think about it anymore. My old systems, we still think about. That aren't on the Datrium product yet, but all our production (scoffs) When I'm backing up every hour, and my RTO almost becomes zero if something happens, you can't ask for that. That's critical, I think, for every manager, every director, even the SIS admins. No one wants to really think about back-ups. And, when you're comparing your products, take a look at that. How quick can you get something back up when that hard drive went out, you know? That's critical. And, of course, DR is, you know, everyone needs that checkbox checked for recovering. It just comes right away, with that. >> We've run out of time. Going to ask you the big question. Do you sleep better? >> Oh, much better. (laughs) Easily now. Yes. Now I get to worry about other things. Like keeping my CFO happy about something else. >> And, I've got a list of people we need to introduce to you. Definitely. >> Fortunately, you always move through your next point of failure. Once you fix one spot. Watch Lucy check out the chocolate-- >> Hey, but if I can have this one off my plate, that's one better for me. >> Well, Kevin, thanks a lot for telling your story. It's a really impressive story And, I'll think of you as I go across a Dumbarton Bridge some time. >> Think about that, yes! >> Absolutely. >> Thank you for having me. >> Sazzala, great to see you, as always. Lauren, lots of fun. I'm Jeff Frick, you're watching theCube. We're at AWS re:Invent 2018. Thanks for watching. (electronic music)

Published Date : Nov 28 2018



Tim Kelton, Descartes Labs | Google Cloud Next 2018


 

>> Live from San Francisco, it's The Cube, covering Google Cloud Next 2018. Brought to you by, Google Cloud and its ecosystem partners. >> Hello everyone, welcome back this is The Cube, live in San Francisco for Google Cloud's big event. It's called Google Next for 2018, it's their big cloud show. They're showcasing all their hot technology. A lot of breaking news, a lot of new tech, a lot of new announcements, of course we're bringing it here for three days of wall-to-wall coverage live. It's day two, our next guest is Tim Kelton, co-founder of Descartes Labs, doing some amazing work with imagery and data science, AI, TensorFlow, using the Google Cloud platform to analyze nearly 15 petabytes of data. Tim, welcome to The Cube. >> Thanks, great to be here >> Thanks for coming on. So we were just geeking out before we came on camera of the app that you have, really interesting stuff you guys got going on. Again, really cool, before we get into some of the tech, talk to me about Descartes Labs, you're co-founder, where did it come from? How did it start? And what are some of the projects that you guys are working on? >> I think, therefore I am. >> Exactly, exactly. Yeah, so we're a little different story than maybe a normal start-up. I was actually at a national research laboratory, Los Alamos National Laboratory, and there was a team of us that were focused on machine learning and using datasets, like remotely sensing the Earth with satellite and aerial imagery. And we were working on that from around 2008 to 2014 and then we saw just this explosion in things like, use cases for machine learning and applying that to real world use cases. But then, at the same time, there was this explosion in cloud computing and how much data you could store and train and things like that. So we started the company in late 2014 and now here we are today, we have around 80 employees. >> And what's the main thing you guys do from a data standpoint, where does the data come from? Take a minute to explain that. >> Yeah, so we focus on kind of a lot of often geospatial-centric data, but a lot of satellite and aerial imagery. A lot of what we call remote sensing, sensors orbiting the Earth or at low aerial over the Earth. All different modalities, such as different bands of light, different radio frequencies, all of those types of things. And then we fuse them together and have them in our models. And what we've seen is there's not just the magic data set that gives you the pure answer, right? It's fusing of a lot of these data sets together to tell you what's happening and then building models to predict how those changes affect our customers, their businesses, their supply chain, all those types of things. >> Let's talk about, I want to riff on something real quick, I know I want to get to some of the tech in a second. But my kids and I talk about this all the time, I got four kids and they're now, two in high school, two in college and they see Uber. And they see Uber remapping New York City every five minutes with the data that they get from the GPS. And we started riffing on drones and self-driving cars or aerial cars, if we want to fly in the air with automated helicopters or devices, you got to have some sort of coordinate system. We need this geospatial, and so, I know it's fantasy now, but what you guys are kind of getting at could be an indicator of the kind of geospatial work that's coming down later. 
Right now there's some cool things happening, but you'd need some kind of namespace or coordinates so you don't bump into something, or so these automated drones don't fly near airports, or cell towers, or windmills, wind farms. >> Yeah, and those are the types of problems we solve, or look to solve: change happening over time. Often it's the temporal cadence that's almost the key indicator in seeing how things are actually changing over time. And people are coming to us and saying, "Can you quantify that?" We've done things like agriculture, looking at crops grown: look at every single farm across the whole U.S., build that into our models, and say how much corn is grown in this field. Then test it back over the last 15 years, and as new imagery comes flooding in daily through our cloud-native platform, just rerun those models and say, are we producing more today or less today? >> And then how is that data used? Take the agriculture example: is that used to say, okay, this region is maybe more productive than that region? Is it because of weather? Is it because of other things that they're doing? >> You can go through all different types of use cases. Maybe if you're insuring that crop, you might want to know if it's flooded more on the left side of the road or the right side of the road, as a predictive indicator. You might say, this is looking like a drought year. How have we done in drought years like 2007 and-- >> You look at irrigation trends. >> And you were talking off-camera about the ground truth, can you use IoT to actually calibrate the ground truth? >> Yeah, and that's the sensor fusion we're seeing. Everywhere around us we're seeing floods and floods of sensors, so we have the sensors above the Earth looking down, but then as you have more and more sensors on the ground, that's the set of ground truth that you can train and calibrate against, and you can go back and retrain over and over again. It's a lot harder problem than, is this a cat or a dog? >> Yeah, that's why I was riffing on the concept of a namespace, the developer concept around, this is actually space. If you want flying drones delivering packages, or doing transportation, you're going to need some sort of triangulation to know what to do. But I've got to ask you a question: what are some of the problems that you're asked to look at, now that you have the top-down geospatial view and you've got ground-truth sensors exploding in, with more and more devices on the network as anything becomes an instrument with an IP address or whatnot? What are some of the problems that you guys get asked to look at? You mentioned the agriculture, what else are you guys solving? >> Any sort of land use or land classification, or facilities and facility monitoring. It could be any sort of physical infrastructure that you're wanting to quantify, predicting how changes over time might impact that business vertical. And they're really varied, everything from energy and agriculture and real estate, things like that. Just last Friday I was talking with, well, we have two parts to our company. On the tech side we have the engineering side, which is normal engineering, but then we also have this applied science side, where we have a team of scientists trying to build models, often for our customers. 'Cause they're not common, geospatial plus machine learning, that's a rare breed of person.
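
For readers who want the shape of that per-field, rerun-on-new-imagery workflow in code, here is a deliberately tiny Python sketch. The field IDs, reflectance numbers, and the simple linear yield model are all invented for illustration; none of this is Descartes Labs' actual platform, bands, or models.

# Toy sketch of the "rerun the crop models as new imagery arrives" idea.
# All data here is synthetic; the real pipeline spans ~15 years of imagery
# over every field in the U.S. -- this only illustrates the shape of it.
import numpy as np
from sklearn.linear_model import LinearRegression

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and near-infrared reflectance."""
    return (nir - red) / (nir + red + 1e-9)

# Synthetic "historical" training data: one NDVI sample per field-season
# with a known corn yield (bushels/acre).
rng = np.random.default_rng(0)
hist_ndvi = rng.uniform(0.2, 0.9, size=(500, 1))
hist_yield = 40 + 160 * hist_ndvi[:, 0] + rng.normal(0, 5, size=500)
model = LinearRegression().fit(hist_ndvi, hist_yield)

# "New imagery coming in daily": rescore the model over today's scenes.
todays_scenes = {
    "field_0421": {"red": 0.08, "nir": 0.52},   # hypothetical field IDs and reflectances
    "field_0422": {"red": 0.11, "nir": 0.35},
}
for field_id, bands in todays_scenes.items():
    x = np.array([[ndvi(bands["red"], bands["nir"])]])
    print(f"{field_id}: estimated yield ~{model.predict(x)[0]:.0f} bu/acre")

The design point is simply that once a model is fit on historical observations, re-scoring every field against the newest scene is a cheap, embarrassingly parallel loop, which is what makes a daily rerun practical.
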
>> You don't want to cross-pollinate. >> Yeah, and that's just not everywhere. Not all of our customers have that type of individual. But they were telling me they were looking at the hurricane season coming up this fall, and they had a building detector that can detect all the buildings. So in just a couple of hours they ran that over the state of Florida and identified every building in the whole state. So now, as the season comes in, they have a way to track that. >> They can be proactive and notify someone: hey, your building might need some boards on it, or there's some sort of risk. >> Yeah, and the last couple of years, look at all the weather events. In California we've had droughts and fires, but then you have flooding and things like that. And you're even able to start taking in new types of sensors that are coming out. The European Space Agency has a sensor that we ingest which does synthetic aperture radar, where it's sending a radar signal down to the Earth and capturing it. So you can do things like water levels in reservoirs and things like that. >> And look at irrigation for farming: where are the droughts going to be? Where is the flooding going to be? So, for the folks watching, go to descarteslabs.com/search, they've got a search engine there. I wish we could show it on screen here, but we don't have the terminal for it on this show. But it's a cool demo: you can pick an area, a football field, an irrigation ditch, a cell tower, a wind farm, anything, and find duplicates, and it gives you a map around the country. So the question is, what is going on in the tech? 'Cause you've got to use cloud for this, so how do you make it all happen? >> Yeah, so we have two really big components to our tech stack. The first is, obviously, we have lots and lots of satellite and aerial imagery. That's one of the biggest and messiest data sets, and there are all types of calibration workloads we have to do. So we have this ingest pipeline that processes it, cleans it, calibrates it, removes the clouds, not as in cloud computing infrastructure, but as in the clouds overhead and the shadows they cast down on the Earth. We have this big ingestion process that cleans it all, then finally compresses it, and then we use things like GCS as an infinitely scalable object store. What we really like on the GCS side is the performance we get, 'cause we're reading and pulling that compressed imagery in and out all day long; every time you zoom in or zoom out, we're expanding it and then removing it. And then for our models, sometimes, say we're making a vegetation model and we just want to look at the infrared bands. So we'll want to fuse together satellites from many different sources, fuse together ground sources, sensor sources, and maybe pull in just one of those bands of light, not pull the whole files in. That's what we've been building in our API. >> So how do you find GCP? What do you like? We've been asking all the users this week: what are the strengths? What are some of the weaknesses? What's on their to-do list? Documentation comes up a lot, we'd like to see better documentation, okay, that's normal, but what's your perspective? >> If you write code or develop, you always want something, you know, it's always out of feature parity and stuff. From our perspective, the biggest strengths of GCP, one of the most core strengths, is the network.
The performance we've been able to see from the network is basically on par with what we used to have. When we were at the national laboratories, we had access to high-performance supercomputing, some of the biggest clusters in the world. And with the network, with GCS, and with how we've been able to scale linearly, like our ingest pipelines, we processed a petabyte of data on GCP in 16 hours, through our processing pipeline, on 30,000 cores. And we'll just scale that network bandwidth right up. >> Do you tap the premium network service or is it just the standard network? >> This is just stock. And that was actually three years ago that we got that bandwidth. >> How many cores? >> That was 30,000. >> 'Cause Google talked this morning about their standard network and the premium network, I don't know if you saw the keynote, where you get the low latency, proximate to your users, if you pay a little bit more. But you're saying on the standard network you're getting just incredible... >> That was early 2015, and it was just a few people in our company scaling up our ingest pipeline. Back then that was 40 years of imagery from NASA's Landsat program that we pulled in. And not that far off in the future, that petabyte's going to be a daily occurrence. So we wanted our ingest to scale, and one of our big questions early on was actually, could the cloud even handle that type of scale? So that was one of the earliest workloads on things like-- >> And you feel good about it now, right? >> Oh yeah, and that was one of the first workloads on preemptible instances as well. >> What's on the to-do list? What would make your life better? >> So we've been working a lot with Istio, which was shown here. We actually gave a demo, and we were in a couple of talks yesterday on how we leverage and use Istio in our microservices. Our APIs are all built on that, and so is our multi-tenant SaaS platform. Our ML team, when they're building models, they're all building models off different use cases, different bands of light, different geographic regions, different temporal windows. We do all of that in Kubernetes, and so those are all-- >> And what does Istio give you guys? What's the benefit of Istio? >> For us, we're using it on a few of our APIs, and it's things like really being able to see, when you start splitting out these microservices, that network and that node-to-node or container-to-container latency, and where things break down. Being able to do circuit breaking and retries, being able to try a request three different times before returning a 500, or rate-limiting some of your APIs so they don't get crushed, or so you can scale them appropriately. And then actually being able to make custom metrics and fuse those back into how GKE scales the node pools, and stuff like that. >> So okay, that's how you're using it. You were talking about Istio before; are there things that you'd like to see that aren't there today? More maturity, or? >> Yeah, I think Istio's at a very early starting point for all of these types of tools. >> So you want more? >> Oh yeah, definitely, definitely, but I love the direction they're going, and I love that it's open, and if I ever wanted to, I could build it on-prem. But we were built basically natively in the cloud, so all of our infrastructure's in the cloud. We don't even have a physical server. >> What does open do for you, for your business? Is it just a good feeling? Do you feel like you're less locked in? Does it feel like you're giving back to the community?
>> We read the Kubernetes source code. We've committed changes. Just recently, Google open-sourced the OpenCensus library for tracing and things like that, and we committed PRs back into that last week. If we're looking for a change, something that doesn't quite work how we want, we can actually go... >> 'Cause you're upstream. >> Add value... >> For your business. >> We get into really hard problems, and you kind of need to understand that code sometimes at that level. Build tools too: Google took their internal tool, Blaze, and open-sourced that as Bazel, and so we've been using that. We're using that on our monorepos to do all of our builds. >> So you guys take it downstream, you work on it, and then all upstream contributions, is that how it works? >> Sometimes. >> Whenever you need to. >> Even Kubernetes, if nothing else we've looked at the code multiple times and said, "Oh, this is why that autoscaler is behaving this way." Now I can understand how to change my workload a little bit and alter it so that the scaler works a little more performantly, or we extract that last 10% of performance. >> This is fascinating, I would love to come visit you guys and check out the facilities. It's the coolest thing ever. I think it's the future, there's so much tech going on, so many problems that are new and cool, and you've got the compute to boot behind it. Final question for you: how are you using analytics and machine learning? What are the key things you're using from Google? What are you guys building on your own? If anything, can you share a quick note on the ML and the analytics, how you guys are scaling that up? >> We've been using TensorFlow since the very early days, like that geovisual search you were mentioning, where we use TensorFlow models in some of those types of products. So we're big fans of that as well. And we'll keep building out models where it's appropriate. Sometimes we use very simple packages, just doing linear regression or things like that. >> So you're just applying that in. >> Yeah, it's the right tool for the right problem, always picking that and applying it. >> And just quick, are you guys for-profit, non-profit? What's the commercial model? >> Yeah, we're for-profit, we're a Silicon Valley VC-backed company, even though we're in the mountains. >> Who's in the VCs? Which VCs are in? >> Crosslink Capital is one of our leading VCs, Eric Chin and that team down there, and they've been great to work with. So they took a chance on a crazy bunch of scientists from up in the mountains of New Mexico. >> That sounds like a good VC-backed opportunity. >> Yeah, and we had a CEO that was kind of from the Bay Area, Mark Johnson, and so we needed both of those to really be successful. >> I mean, I'm a big believer you throw money at great, smart people in emerging markets like this. And you've got a mission that's super cool; it's obvious there's a lot to do and there's opportunities as well. >> Tremendous opportunities. >> Congratulations, Tim. Thanks for coming on The Cube. Tim Kelton, he's the co-founder at Descartes Labs. Here in The Cube, breaking it down, bringing the technology, they've got applied physicists, all these brains working on the geospatial future. We are geospatial here in The Cube, at Google Next in San Francisco. I'm John Furrier, with Dave Vellante, stay with us for more coverage after this short break.

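To make the ingest discussion concrete: a petabyte in 16 hours works out to roughly 17 GB/s of aggregate throughput, or on the order of 0.6 MB/s per core across 30,000 cores. And the "pull in just one of those bands of light" pattern could look roughly like the Python sketch below. The bucket name, the one-object-per-band tile layout, and the band names are assumptions invented for illustration; this is not Descartes Labs' actual API or storage scheme.

# Minimal sketch of a band-selective read: each tile stored in GCS with one
# compressed NumPy array per band, so a vegetation model can pull down only
# the red and near-infrared bands instead of the whole scene.
import io

import numpy as np
from google.cloud import storage

BUCKET = "example-imagery-tiles"  # hypothetical bucket name

def read_band(client: storage.Client, tile_id: str, band: str) -> np.ndarray:
    """Download a single band of a tile from GCS and decode it to an array."""
    blob = client.bucket(BUCKET).blob(f"{tile_id}/{band}.npy")
    return np.load(io.BytesIO(blob.download_as_bytes()))

if __name__ == "__main__":
    client = storage.Client()                    # uses default GCP credentials
    tile = "landsat8/2018-07-01/tile_042_033"    # hypothetical tile path
    red = read_band(client, tile, "red")
    nir = read_band(client, tile, "nir")
    ndvi = (nir - red) / (nir + red + 1e-9)      # only two bands ever left GCS
    print(ndvi.shape, float(ndvi.mean()))

The design choice this illustrates is simply to store each band (or band group) as a separately addressable compressed object, so a model only downloads the bytes it actually uses.
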
Published Date: Jul 25, 2018
