Gunnar Hellekson & Joe Fernandes, Red Hat | AWS re:Invent 2021
>> Welcome back to theCUBE's coverage of AWS re:Invent 2021. I'm John Furrier, your host of theCUBE. In this segment we're going to talk about Red Hat and the evolving AWS partnership, a great segment really talking about how hybrid and the enterprise are evolving, certainly multi-cloud on the horizon, but a lot of benefits in the cloud that we've been covering on theCUBE and on SiliconANGLE with Red Hat for the past year. Very relevant. We've got Gunnar Hellekson, GM of Red Hat Enterprise Linux, and Joe Fernandes, VP and GM of Hybrid Platforms, both of Red Hat. Gentlemen, thanks for coming on theCUBE.

>> Yeah, thanks for having us.

>> Thanks for having us, John.

>> So, you know me, I'm a fanboy of Red Hat, so I always say you guys made all the right investments. OpenShift, all these decisions you made years ago are playing out beautifully, and I think with Amazon's re:Invent you're seeing the themes all play out: the modern application stack, things at the top of the stack evolving, 5G and the edge, workloads being redefined and expanded on the cloud with cloud scale. So everything has been coming down to hybrid and enterprise-grade-level discussions. This is in the wheelhouse of Red Hat. So one, congratulations, but what's your reaction? What do you guys see this year at re:Invent? What's the top story?

>> I can start first. Yeah, sure. I mean, clearly AWS itself is huge, but as you mentioned, the world is hybrid, right? Customers are still running in their data centers, in the Amazon public cloud, across multiple public clouds, and out to the edge, and they're bringing more and more workloads. So it's not just the applications, it's analytics, it's AI, it's machine learning. And so we can just expect to see more discussion around that, more great examples of customer use cases. And as you mentioned, Red Hat's been right in the middle of this for some time, John.

>> You guys also had some success with the fully managed OpenShift service called ROSA, Red Hat OpenShift Service on AWS. Another acronym, but really this is about what the customers are looking for. Can you take us through an update on OpenShift on AWS? Because the combination of managed services in the cloud, refactoring applications, but working on premises is a big deal. Take us through why that's so important.

>> Yeah, so we've had customers running OpenShift on AWS for a long time, whether it's our software offerings, where customers deploy OpenShift themselves, or our fully managed cloud service; we've had cloud services on AWS for over five years. What ROSA brings, Red Hat OpenShift Service on AWS, is a jointly managed service. We're working in partnership with Amazon, with AWS, to make OpenShift available as a jointly managed service offering. It's a native AWS service offering: you can get it right through the AWS console, and you can leverage your AWS committed spend. But most importantly, it's something that we're working on together, bringing new customers to the table for both Red Hat and AWS. We're really excited about it because it's helping customers accelerate their move to the public cloud and helping them drive that hybrid strategy that we talked about.
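As a rough sketch of the workflow Joe describes, the snippet below drives the `rosa` command-line tool from Python to stand up a managed OpenShift cluster on AWS and wait for it to come up. It is an illustration only, not something from the interview: the cluster name is hypothetical, it assumes the CLI is installed and already logged in, and flag names can vary by `rosa` version.

```python
# Illustrative sketch only: drives the "rosa" CLI (assumed installed and
# authenticated via "rosa login") to create a managed OpenShift cluster on AWS.
# The cluster name is hypothetical; check "rosa create cluster --help" for the
# flags your CLI version supports.
import subprocess
import time

CLUSTER = "demo-rosa-cluster"  # hypothetical name

def run(args):
    """Run a CLI command and return its stdout, raising on failure."""
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

def create_cluster():
    # Kick off cluster creation; ROSA provisions the AWS infrastructure.
    run(["rosa", "create", "cluster", "--cluster-name", CLUSTER, "--yes"])

def wait_until_ready(poll_seconds=300):
    # Poll "rosa describe cluster" until its output reports a ready state.
    while True:
        out = run(["rosa", "describe", "cluster", "-c", CLUSTER])
        if "ready" in out.lower():
            print(f"cluster {CLUSTER} is ready")
            return
        print(f"waiting for cluster {CLUSTER} ...")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    create_cluster()
    wait_until_ready()
```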
>> Gunnar, I want to get your thoughts on this, because one of the things I love about this market right now is that open source continues to be amazing, continues to drive more value, and this new migration of talent coming in, the numbers just continue to grow and grow. But the importance of Red Hat's history with AWS is pretty significant. I mean, Red Hat pioneered open source and has been involved with AWS from the early days. Can you take us through a little bit of the history for the folks that may not know Red Hat's partnership with AWS?

>> Yeah, we've been collaborating with AWS since 2008, so for over a decade we've been working together. What's made the partnership work is that we have a common interest in making sure that customers have a consistent, approachable experience whether they're going on premise or in the cloud. Nobody wants to go through an entire retraining and retooling exercise just to take advantage of all the great advantages of the cloud. So being able to use something like Red Hat Enterprise Linux as a consistent substrate on which you can build your application platforms is really attractive. That's where the partnership started, and since then we've had the ability to better integrate with the native AWS services. One thing I want to point out is that a lot of these integrations are kind of technical, but it's not just about technical consistency across these platforms, it's also about operational consistency and business concerns. When you're moving into an open hybrid cloud kind of situation, that's what becomes important: you don't want two completely different tool sets on two completely different platforms. You want as much consistency as possible as you move from one to the other, and I think a lot of customers see value in that, both on the Red Hat Enterprise Linux side of the business and on the OpenShift side of the business.

>> That's interesting. I'd love to get both your perspectives on this whole enterprise focus, because the enterprises, as you know, guys, you've been there from the beginning, they have requirements, and sometimes they're different by enterprise. So as you see cloud, I remember the early days of Amazon. It's the 15th year of AWS, the 10th year of re:Invent as a conference. That seems like a lifetime ago, but it's not too far back when it was, well, Amazon might not make it, it's only for developers, enterprises do their own thing. Now it's all about the enterprise. How are enterprise customers evolving with you guys? Because they're all seeing the benefit of re-platforming, but as they refactor, how has Red Hat evolved with that trend, and how have you helped Amazon?

>> Yeah, so as we mentioned, enterprises really across the globe are adopting a hybrid cloud strategy, but hybrid isn't just about the infrastructure. Certainly the infrastructure where these enterprises are running their applications is increasingly becoming hybrid as you move from data center to multiple public clouds and out to the edge. But the enterprises' application portfolios are also hybrid: it's a hybrid mix of very traditional monolithic and n-tier type applications, but also new cloud-native services that have either been built from scratch or, as you mentioned, existing applications that have been refactored. Then they're moving beyond the applications, as I mentioned, to make better use of data, and also evolving their processes for how they build, deploy, and manage, leveraging CI/CD and GitOps and so forth. So really for us it's: how do you help enterprises bring all that together? Manage this hybrid infrastructure that's supporting this hybrid portfolio of applications, and really help them evolve their processes. We've been working with enterprises on these types of challenges for a long time, and we're now partnering with Amazon to do the same in terms of our joint product and service offerings.

>> Talk about the RHEL evolution, because that's been the bread and butter for Red Hat for a long time. OpenShift, again, as I mentioned earlier, the bets you guys made with Kubernetes, for instance, have all been the right moves. So I love ROSA, you've got me sold on that. RHEL, though, has been the tried and true, steady workhorse. How has that evolved with workloads?

>> Yeah, it's interesting. I think when customers were at the stage when they were wondering, can I use AWS to solve my problem, or where should I use AWS to solve my problem, our focus was largely on technical enablement. Can we keep up with the pace of new hardware that Amazon is rolling out? Can we ensure that consistency with on-premise and off-premise? I think now we're starting to shift focus into really differentiating RHEL on the AWS platform: again, integrating natively with AWS services and making it easier to operate in AWS. A good example of this is using tools like Red Hat Insights, which we announced about a year ago and which is now included in every Red Hat Enterprise Linux subscription, to give customers advice on potential problems that are coming up, helping customers identify problems before they happen, and helping them with performance problems. Having additional cloud-based tools like that makes RHEL as easy to use on the cloud as possible, despite all the complexity of the redeploying, the refactoring, the microservices. There's now a proliferation of infrastructure options, and to the extent that RHEL can be the thing that is consistent, solid, reliable, and secure as customers are moving in, then we can make customers successful.

>> You know, Joe, we talked about this last time we were chatting, I think at Red Hat Summit or AnsibleFest, I forget which event it was, but we were talking about how modern application developers at the top of the stack just want to code. They want to write some code, and now they want the infrastructure as code, aka DevOps, DevSecOps. But as this trend of moving up the stack continues to be a big theme at re:Invent, that requires automation, that requires a lot of stuff to happen under the covers. Red Hat's at the center of all this action, from a historical perspective: pre-existing enterprises before cloud, now during cloud, and soon to be cloud scale. How do you see that evolving? How are customers shaping their architecture? Because this is distributed computing in the cloud. We've seen this movie before, but now at such a scale where data and security are all new elements. How do you talk about that?

>> Yeah, well, first of all, as Gunnar said, Linux is a given. Linux is going to be available in every environment: data center, public cloud, edge. Linux combined with Linux containers and Kubernetes, that's the abstraction, separating and abstracting the applications away from the infrastructure. Now it's all about how you build on top of that to bring the automation that you mentioned. So we're very focused on helping customers build fully automated end-to-end deployment pipelines, so they can build their applications more efficiently, they can automate the continuous integration and deployment of those applications into whatever cloud or edge footprint they choose, and then they can promote across environments. Because again, it's not just about developing the applications, it's about moving them all the way through to production, where their customers are relying on those services to do their work. So that's what we're doing. I think Linux is a given; Linux containers and Kubernetes, those decisions have been made. Now it's a matter of how we put that together with the automation that allows them to accelerate those deployments out to production so customers can take advantage of them.
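To make the "promote across environments" idea concrete, here is a minimal sketch, not from the interview, that uses the Kubernetes Python client to point a deployment in each environment at the image tag that just passed CI, the kind of step a pipeline would automate. The deployment, namespace, and image names are hypothetical, and it assumes the container is named after its deployment.

```python
# Minimal promotion sketch: point a Deployment in the target environment at
# the image tag that passed CI. Names below are hypothetical; assumes the
# "kubernetes" Python client and a kubeconfig with access to the cluster, and
# that each Deployment's main container shares the Deployment's name.
from kubernetes import client, config

def promote(deployment: str, namespace: str, image: str) -> None:
    """Patch the named container of a Deployment to run a new image tag."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": deployment, "image": image}]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)
    print(f"{namespace}/{deployment} now rolling out {image}")

if __name__ == "__main__":
    # Promote the build that passed integration tests from stage to prod.
    promote("frontend", "stage", "quay.io/example/frontend:1.4.2")
    promote("frontend", "prod", "quay.io/example/frontend:1.4.2")
```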
>> You know, Gunnar, we were always joking on theCUBE, I'm old enough to remember when we used to install Linux on a server back in the day. Now a lot of these young developers never actually have to install the software and do some of those configurations, because it's all automated now. Again, the commoditization and automation trend; abstraction layers, some say, is a good thing. So how do you see the evolution of this DevOps movement with the partnership with AWS going forward? What types of things are you working on with Amazon Web Services, and what kind of offerings can customers look forward to?

>> Yeah, sure. So it used to be that Linux was something that you managed with a mouse and a keyboard, and I think it's been quite a few years since any significant amount of Linux has been managed with a mouse and a keyboard. A lot of it is scripts, automation tools, configuration management tools, things like this. The investments we've made, both in RHEL generally and then specifically in RHEL on AWS, are around enabling RHEL to be more manageable, including things like something we call System Roles: these are Ansible modules that automate routine systems administration tasks. We've also made investments in something called Image Builder. This is a tool that allows customers to compose the operating system that they need, create a blueprint for it, and then stamp out the same image, whether it's an ISO image so you can install it on premise, or an AMI so you can deploy it in AWS. So the problem used to be helping customers package and manage dependencies, that kind of old-world, three-and-a-half-inch-floppy-disk sort of Linux problem, and now we've evolved toward making Linux easier to deploy and manage at a grand scale, whether you're in AWS or whether you're on premise.
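As a rough illustration of the Image Builder flow Gunnar describes, the sketch below defines a small blueprint and asks `composer-cli` to build an AWS-style image from it. It is illustrative only: the blueprint contents are hypothetical, and the exact subcommands and image type names can differ by RHEL release.

```python
# Illustrative Image Builder sketch: define a small blueprint, push it to
# osbuild-composer via composer-cli, and start an AWS-style image build.
# Blueprint contents are hypothetical; subcommand and image-type names may
# differ by RHEL release (see "composer-cli help").
import subprocess
import tempfile

BLUEPRINT = """
name = "edge-web"
description = "Minimal web server image"
version = "0.0.1"

[[packages]]
name = "nginx"
version = "*"
"""

def run(args):
    subprocess.run(args, check=True)

def build_ami():
    # Write the blueprint to a temporary TOML file and register it.
    with tempfile.NamedTemporaryFile("w", suffix=".toml", delete=False) as f:
        f.write(BLUEPRINT)
        path = f.name
    run(["composer-cli", "blueprints", "push", path])
    # Start a compose that produces an AWS image from the blueprint.
    run(["composer-cli", "compose", "start", "edge-web", "ami"])

if __name__ == "__main__":
    build_ami()
```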
>> Joe, take us through the hybrid story. Obviously you've had success with OpenShift's managed service on AWS. What's the update there? What are customers expecting this re:Invent, and what's the story for you guys?

>> Yeah, so the OpenShift managed services business is the fastest growing segment of our business. We're seeing lots of new customers, and again, bringing new customers, I think, for both Red Hat and AWS through this service. So we expect to hear from customers at re:Invent about what they're doing, not only with OpenShift and our Red Hat solutions, but really with what they're building on top of those service offerings and solutions to bring more value to their customers. To me, that's always the best part of re:Invent, really hearing from customers, and when we all start going there in person again, actually being able to meet with them one on one, whether it's in person or virtual. So looking forward to that.

>> Well, great to have you guys on theCUBE. Congratulations on all the success. The enterprise continues to adopt more and more cloud, which benefits all the work you guys have done, both on the RHEL side and as you've modernized with all these great services, and managed services continue to be the center of all the action. Thanks for coming on. Appreciate it.

>> Thanks, John.

>> Okay, Red Hat's partnership with AWS is evolving as cloud scale and edge, all distributed computing, happen at large scale. It's theCUBE, with coverage of AWS re:Invent 2021. I'm John Furrier. Thanks for watching. [Music]
Stefanie Chiras & Joe Fernandes, Red Hat | KubeCon + CloudNativeCon NA 2020
>> From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon North America 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners.

>> Hello, everyone, and welcome back to theCUBE's ongoing coverage of KubeCon North America. Joe Fernandes is here with Stefanie Chiras. Joe is the VP and GM for core cloud platforms at Red Hat, and Stefanie is the SVP and GM of the Red Hat Enterprise Linux BU. Two great friends of theCUBE. Awesome seeing you guys. How are you doing?

>> It's great to be here, Dave.

>> Yeah, thanks for the opportunity.

>> Hey, so we all talked recently at AnsibleFest, which seems like a while ago, but we talked about what's new, and Red Hat was really coming at it from an automation perspective. I wonder if we could take a view from OpenShift: what's new from the standpoint of really focusing on helping customers change their operations and operationalize? Stefanie, maybe you could start, and then Joe, you could bring in some added color.

>> No, that's great. One of the things we try to do at Red Hat, clearly building off of open source, is that we have been focused on this open hybrid cloud strategy for years now. The beauty of it is that hybrid cloud and open hybrid cloud continue to evolve, bringing in things like speed and stability and scale, and now adding in other footprints, like managed services as well as edge, and pulling that all together across the whole Red Hat portfolio, from the platforms, certainly with Linux and RHEL, into OpenShift, and then adding automation, which you certainly need for scale. It continues to evolve as the definition of open hybrid cloud evolves.

>> Great. So thank you, Stefanie. Joe, you guys have hard news here that you could maybe talk about, 4.6?

>> Yeah, so OpenShift is our enterprise Kubernetes platform. With this announcement we announced the release of OpenShift 4.6. We're doing releases every quarter, tracking the upstream Kubernetes release cycle, so this brings Kubernetes 1.19, which itself brings a number of new innovations. Some specific things to call out: we have a new automated installer for OpenShift on bare metal, and that's definitely a trend we're seeing, more customers not only looking at containers but looking at running containers directly on bare metal environments. OpenShift provides an abstraction which combines Kubernetes on top of Linux with RHEL, really across all environments, from bare metal to virtualization platforms to the various public clouds and out to the edge. But we're seeing a lot of interest in bare metal, so this is basically bringing the automation to install seamlessly and manage upgrades in those environments. We're also seeing a number of other enhancements: OpenShift Service Mesh, which is our Istio-based solution for managing the interactions between microservices, being able to manage traffic against those services and do tracing; we have a new release of that on OpenShift 4.6. And then some work specific to the public cloud that we started extending into the government clouds. We already supported AWS and Azure; with this release we added support for AWS GovCloud as well as Microsoft Azure Government. Again, this is really important to our public sector customers who are looking to move to the public cloud, leveraging OpenShift as an abstraction, but who want us to support it on the specialized clouds that they need to use.

>> So, Joe, let's stay there for a minute. Bare metal, talking performance there, because that's what you really want, to run fast, right? So that's the attractiveness there. And then the point about Istio and the OpenShift Service Mesh, that makes things simpler. Maybe talk a little bit about the business impact and what customers should expect to get out of these two things.

>> So let me take them one at a time. Running on bare metal, certainly performance is a consideration. I think a lot of folks today are still running containers and Kubernetes on top of some form of virtualization, either a platform like vSphere or OpenStack, or maybe VMs in one of the public clouds. But containers don't depend on a virtualization layer; containers only depend on Linux, and Linux runs great on bare metal. So as we see customers moving more towards performance- and latency-sensitive workloads, they want to get that bare metal performance, and running OpenShift on bare metal, with their containerized applications on that platform, certainly gives them that advantage. Others just want to reduce cost: they want to reduce their VM sprawl, the infrastructure and operational cost of managing a virtualization layer beneath their Kubernetes clusters, and that's another benefit. So we see a lot of uptake in OpenShift on bare metal. On the service mesh side, this is really about how we see applications evolving. Customers are moving more towards these distributed architectures, taking formerly monolithic or n-tier applications and splitting them out into lots of different services. The challenge then becomes, how do you manage all those connections? Because something that was a single stack is now comprised of tens or hundreds of services. You want to be able to manage traffic to those services, so if a service goes down you can redirect those requests to an alternative or failover service. Also tracing: if you're looking at performance issues, you need to know where in your architecture you're having those degradations, and so forth. Those are some of the challenges that people can overcome, or get help with, by using service mesh, which is powered by Istio.
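To ground the traffic-management piece, here is a small sketch, not from the interview, that uses the Kubernetes Python client to create an Istio VirtualService splitting traffic 90/10 between two versions of a service; OpenShift Service Mesh exposes the same Istio APIs. The service, subset, and namespace names are hypothetical, and it assumes the mesh CRDs and a matching DestinationRule already exist.

```python
# Illustrative traffic-split sketch for an Istio/OpenShift Service Mesh
# VirtualService: send 90% of requests to v1 and 10% to a canary v2.
# Service, subset, and namespace names are hypothetical; assumes the mesh's
# CRDs are installed and a DestinationRule already defines the subsets.
from kubernetes import client, config

VIRTUAL_SERVICE = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "reviews-split"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

def apply_traffic_split(namespace: str = "bookinfo") -> None:
    config.load_kube_config()
    api = client.CustomObjectsApi()
    api.create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1beta1",
        namespace=namespace,
        plural="virtualservices",
        body=VIRTUAL_SERVICE,
    )
    print("VirtualService 'reviews-split' created: 90% v1, 10% v2")

if __name__ == "__main__":
    apply_traffic_split()
```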
>> And then, Stefanie, I'll come back to you in a minute, but one follow-up on that, Joe: the real differentiation between what you bring and what I can get if I'm in a mono cloud, for instance, is that you're going to bring this across clouds, you're going to bring it on-prem, and we're going to talk about the edge in a minute. Is that right, from a differentiation standpoint?

>> Yeah, that's one of the key differentiations. Red Hat has been talking about the hybrid cloud for a long time; we've been articulating our open hybrid cloud strategy. And even if that's not a strategy that you may be thinking about, it is ultimately where folks end up, because all of our enterprise customers still have applications running in the data center, but they're also all starting to move applications out to the public cloud. As they expand their usage of public cloud, you start seeing them adopt multi-cloud strategies, because they don't want to put all their eggs in one basket. Then, for certain classes of applications, they need to move those applications closer to the data, and so you start to see edge becoming part of that hybrid cloud picture. What we do is basically provide consistency across all those environments. We want to run great on Amazon, but also great on Azure, on Google, on bare metal in the data center or bare metal out at the edge, on top of your favorite virtualization platform. That consistency, being able to take a set of applications and run them the same way across all those environments, is one of the key benefits of going with Red Hat as your provider for open hybrid cloud solutions.

>> All right, thank you. Stefanie, I want to come back to you here. We talk about RHEL a lot because it's the business unit that you manage, but we're starting to see Red Hat's edge strategy unfold, and RHEL is really the linchpin. Can you talk about how you're thinking about the edge? I'm particularly interested in how you're handling scale, and why you feel like you're in a good position to handle that massive scale and the requirements of the edge, versus, hey, we need a new OS for the edge.

>> Yeah, and Joe did a great job of setting it up: it does come back to our view around this open hybrid cloud story. It has always been about consistency, about that language that you speak no matter where you want to run your applications, between RHEL on my side and Joe with OpenShift, and of course we run the same Linux underneath; RHEL CoreOS is part of OpenShift. That consistency leads to a lot of flexibility, whether it's through a broad ecosystem or across footprints. So now, as we have been talking with customers about how they want to move their applications closer to data, further out and away from their data center, some of it is about distributing your data center, getting that compute closer to the data or closer to your customers. It drives some different requirements, around how you do updates, how you do over-the-air updates. We have been working, in typical Red Hat fashion, by looking at what's being done in the upstream. In the Fedora upstream community there is a lot of work that has been done in what's called the IoT special interest group; they have been really investigating what the requirements are for this edge use case. So now we're really pleased that in our most recent release, RHEL 8.3, we have put in some key capabilities that we're seeing driven by these edge use cases. Things like quick image generation, and that's important because, as you distribute, you want that consistency: create a tailored image, deploy it in a consistent way, allow that to address scale, and meet the security requirements you may have. Also, updates become very important when you start to spread this out, so we put in things to allow remote device mirroring, so that you can put code into production and then schedule it on those remote devices to happen with minimal disruption. And, as we all know now with all this virtual stuff, we often run into less-than-ideal bandwidth and sometimes intermittent connectivity with all of those devices out there. So we put in capabilities around being able to use rpm-ostree in order to deliver efficient over-the-air updates. And then, of course, you've got to do intelligent rollbacks, for the chance that something goes wrong: how do you come back to a previous state? So it's all about being able to deploy at scale in a distributed way, be ready for that use case, and have some predictability and consistency. Again, that's what we build our platforms for. It's all about predictability and consistency, and that gives you flexibility to add your innovation on top.
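A minimal sketch of the update-then-roll-back pattern Stefanie describes for edge devices, assuming an rpm-ostree-based image (such as RHEL for Edge) and a site-specific health check; the health-check command and service name here are placeholders, not from the interview.

```python
# Illustrative edge-update sketch for an rpm-ostree-based system: stage the
# new OS image, then roll back if a local health check fails. The health
# check below is a placeholder; real deployments would verify the device's
# actual workload. Requires root on an rpm-ostree-managed host.
import subprocess

def run(args):
    return subprocess.run(args, check=False).returncode == 0

def healthy() -> bool:
    # Placeholder check: is the application service still running?
    return run(["systemctl", "is-active", "--quiet", "myapp.service"])

def update_and_verify() -> None:
    # Download and stage the new deployment; it becomes active on reboot.
    if not run(["rpm-ostree", "upgrade"]):
        print("no update staged or upgrade failed; leaving system as-is")
        return
    # In practice the reboot would be scheduled for a maintenance window:
    # subprocess.run(["systemctl", "reboot"])
    if not healthy():
        # Intelligent rollback: pivot back to the previous known-good image.
        run(["rpm-ostree", "rollback", "--reboot"])
        print("health check failed; rolled back to previous deployment")
    else:
        print("update applied and health check passed")

if __name__ == "__main__":
    update_and_verify()
```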
>> I'm glad you mentioned intelligent rollbacks. I learned a long time ago, you always ask the question, what happens when something goes wrong? You learn a lot from the answer to that. We talk a lot about cloud native; it sounds like you're adapting well to become edge native.

>> Yeah. We're finding, whether it's in the verticals, in very specific use cases, or in sort of an enterprise edge use case, that having consistency brings a ton of flexibility. It was funny, we were talking with a customer not too long ago, and they said, agility is the new version of efficiency. So it's having that sort of language be spoken everywhere, from your core data center all the way out to the edge, that allows you a lot of flexibility going forward.

>> I wonder if you could talk, I just mentioned cloud native, about the effort it takes to make all this stuff run in all the different clouds; I think people sometimes underestimate the engineering effort required. What kind of engineering do you do, if any, with the cloud providers, and of course the balance of the ecosystem? Maybe you could describe that a little bit.

>> Yeah, so Red Hat works closely with all the major cloud providers, whether that's Amazon, Azure, Google, or IBM Cloud. Obviously we're very keen on making sure that we're providing the best environment to run enterprise applications across all those environments, whether you're running directly with Linux on RHEL or in a containerized environment with OpenShift, which includes RHEL. Our partnership includes work we do upstream. For example, Red Hat helped Google launch the Kubernetes community, and together with Google we've been the top two contributors driving that project since inception. But it also extends into our hosted services. We run a jointly developed and jointly managed service called Azure Red Hat OpenShift together with Microsoft, where our joint customers can get access to OpenShift in an Azure environment as a native Azure service, meaning it's fully integrated, just like any other Azure service: you can tie it into what you're building, and so forth. It's sold by Microsoft's Azure sales reps, but we get the benefit of working together with our Microsoft counterparts in developing that service, managing that service, and supporting our joint customers. Over the summer we announced a similar partnership with Amazon, and we'll be launching, and are already doing pilots on, the Amazon Red Hat OpenShift service, which is the same concept now applied to the AWS cloud. That will be coming out GA later this year. But again, whether it's working upstream or partnering on managed services, it's been a broad effort. I know Stefanie's team also does a lot of work with Microsoft, for example on SQL Server on Linux and .NET on Linux. Whoever thought we'd be running those applications on Linux? But that's a couple of years old now, a few years old. So again, it's been a great partnership, not just with Microsoft, but with all the cloud providers.

>> So I think you just showed a little leg there, Joe. What's coming GA later this year? I want to circle back to that.

>> Yeah, so we announced a preview earlier this year of the Amazon Red Hat OpenShift service. It's not generally available yet; we're taking customers who want early access and getting them into pilots, and then it'll be generally available later this year. Red Hat does already manage our own service, OpenShift Dedicated, that's available on AWS today, but that's a service that's solely operated by Red Hat. This new service will be jointly operated by Red Hat and Amazon together, a service that we're delivering together as partners.

>> As a managed service. And okay, so that's in beta now, I presume, if it's going to be GA later?

>> Yeah, that's right.

>> And that's probably running on bare metal, I would imagine?

>> That one is running on EC2; it's running on AWS EC2, exactly. And again, OpenShift does offer bare metal, and we do have customers who take the OpenShift software and deploy it there. Right now our managed offering is running on top of EC2 and on top of Azure VMs. But again, this is appealing to customers who like what we bring in terms of an enterprise Kubernetes platform but don't want to operate it themselves. So it's a fully managed service: you just come and build and deploy your apps, and then we manage all of the infrastructure and the underlying platform for you.

>> That's going to explode, my prediction. Let's take a hard example: security. I'm interested in how you guys ensure a consistent security experience across all these locations, on-prem, cloud, multiple clouds, the edge. Maybe you could talk about that. And Stefanie, I'm sure you have a perspective on this as well, from the standpoint of RHEL. So who wants to start?

>> Yeah, maybe I can start from the bottom, and then I'll pass it over to Joe. One of those aspects about security is that it's clearly top of mind for all customers, and it does start with the very bottom, the base selection in your OS. We continue to drive SELinux capabilities into RHEL to provide that foundational layer, and then as we run RHEL CoreOS in OpenShift, we bring over that SELinux capability as well. But there's a whole lot of ways to tackle this. We've done a lot around our policies for CVE updates and so on in RHEL, to make sure that we are continuing to mitigate all criticals and importants and to provide better transparency into how we assess those CVEs. So security is certainly top of mind for us. Then, as we move forward, there are also capabilities to do that in containerization, and Joe can talk about the security work there. But we work all the way from the base, to doing things like these easy-to-build images, which are tailored so you can make them smaller, with less surface area for security. Security is one of those things that's a lifestyle, right? You've got to look at it all the way from the base, in the operating system with things like SELinux, to how you build your images, where we've now added new capabilities, and then, of course, in containers, where there's a whole focus in the OpenShift area around container security.

>> Joe, anything you want to add to that?

>> Yeah, sure. Obviously Linux is the foundation for all public clouds; it's driving enterprise applications in the data center, and part of keeping those applications secure is keeping them up to date. Through RHEL we provide a secure, up-to-date foundation, as Stefanie mentioned. As you move into OpenShift, you're also able to take advantage of, essentially, immutability. The application that you're deploying is an immutable unit that you build once as a container image, and then you deploy that out to all your various environments. When you have to do an update, you don't go and update all those environments; you build a new image that includes those updates, and then you deploy those images out in a rolling fashion, and, as you mentioned, you can go back if there are issues. So the notion of immutable application deployments has a lot to do with security, and it's enabled by containers, and then obviously you have Kubernetes and all the rest of our capabilities as part of OpenShift managing that for you. We've extended that concept to the entire platform. Stefanie mentioned RHEL CoreOS; OpenShift has always run on RHEL. What we have done in OpenShift 4 is take an immutable version of RHEL, so it's the same Red Hat Enterprise Linux that we've had for years, but now, in this latest version, we have a new way to package and deploy it as a RHEL CoreOS image, and that becomes part of the platform. So when customers want to keep their platform up to date, in addition to keeping their applications up to date, they need to keep up with the latest Kubernetes patches and the latest Linux packages. What we're doing is delivering that as one platform, so when you get updates for OpenShift, they can include updates for Kubernetes, they can include updates for Linux itself, as well as all the integrated services. Again, this is how you keep your applications secure: taking care of that hygiene of managing your vulnerabilities, keeping everything patched and up to date, and ultimately ensuring security for your applications and users.

>> I know I'm going a little bit over, but I have one question that I want to ask you guys, a broad question about trends you see in the business. We talk a lot about cloud native, and you look at Kubernetes, and the interest in Kubernetes is off the charts. It's an area that has a lot of spending momentum; people are putting resources behind it. But really, to build these sorts of modern applications, it's considered state of the art, and we see a lot of people trying to bring that modern approach to any cloud; we've been talking about edge, and you want to bring it also on-prem. People generally associate this notion of cloud native with elite developers, right? But you're bringing it to the masses, and there are 20 million-plus software developers out there, and most, with all due respect, may not be the elite of the elite. So how are you seeing this evolve in terms of re-skilling people to be able to handle and take advantage of all this cool new stuff that's coming out?

>> Yeah, I can start. With OpenShift, our focus from the beginning has been bringing Kubernetes to the enterprise, so we think of OpenShift as the dominant enterprise Kubernetes platform. Enterprises come in all shapes and sizes and skill sets, as you mentioned; they have unique requirements in terms of how they need to run stuff in their data centers and then also bring that to production, whether it's in the data center or across the public clouds. So part of it is making sure that the technology meets the requirements, and part of it is working the people, process, and culture: helping them understand what it means to take advantage of containerization and cloud-native platforms and Kubernetes. Of course, this is nothing new to Red Hat. This is what we did 20 years ago when we first brought Linux to the enterprise with RHEL, and in essence Kubernetes is basically distributed Linux: Kubernetes builds on Linux and brings it out to your cluster, to your distributed systems, across the hybrid cloud. So it's nothing new for Red Hat, but a lot of the same challenges apply to this new cloud-native world.

>> Awesome. Stefanie, we'll give you the last word.

>> All right. And just to touch on what Joe talked about, and Joe and I work really closely on this: the ability to run containers, once someone launches down this path, is magical in terms of what can be done with deploying applications using container technology. We built the capabilities and the tools directly into RHEL in order to be able to build and deploy, leveraging things like Podman, directly in RHEL. So everyone who has a RHEL subscription today can start on their container journey, start to build and deploy with that, and then we work to help those skills be transferable as you move into OpenShift and Kubernetes and orchestration. We work very closely to make sure that the skills building can be done directly on RHEL and then transfer into OpenShift, because, as Joe said, at the end of the day, it's just a different way to deploy Linux.

>> You guys are doing some good work. Keep it up. And thanks so much for coming back on theCUBE. It's great to talk to you today.

>> Good to see you, Dave.

>> Yes, thank you.

>> All right, thank you for watching, everybody. theCUBE's coverage of KubeCon NA continues right after this.
The Spaceborne Computer | Exascale Day
>> Narrator: From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise.

>> Welcome, everyone, to theCUBE's celebration of Exascale Day. Dr. Mark Fernandez is here. He's the HPC technology officer for the Americas at Hewlett Packard Enterprise, and he's a developer of the Spaceborne Computer, which we're going to talk about today. Mark, welcome. It's great to see you.

>> Great to be here. Thanks for having me.

>> You're very welcome. So let's start with Exascale Day. It's on 10/18, of course, which is 10 to the power of 18. That's a one followed by 18 zeros. I joke all the time that it takes six commas to write out that number. But Mark, why don't we start: what's the significance of that number?

>> So it's a very large number, and in general we've been marking the progress of our computational capabilities in thousands. Exascale is a thousand times faster than where we are today. We're in an era today called the petaflop era, which is 10 to the 15th, and prior to that we were in the teraflop era, which is 10 to the 12th. I can kind of understand 10 to the 12th and discuss that with folks, because that's a trillion of something, and we know a lot of things that are in trillions, like our national debt, for example. But a billion billion is an exascale, and it will give us a thousand times more computational capability than we have in general today.

>> Yeah, so when you think about going from terascale to petascale to exascale, we're not talking about a single order of magnitude, we're talking about a much more substantial improvement. And that's part of the reason why it takes so long to achieve these milestones. It kind of started back in the sixties and seventies, and then we've been in the petascale era now for more than a decade, if I'm correct.

>> Yeah, correct. We got there in 2007, and each of these increments is an extra comma; that's the way to remember it. So we want to add an extra comma and get to the exascale era. Like you say, we entered the current petaflop scale in 2007. Before that was the terascale, teraflop era, and that was in 1997. So it took us 10 years to get that far, but it's going to take us 13 or 14 years to get to the next one.

>> And when we say flops, we're talking about floating point operations, and we're talking about the number of calculations that can be done in a second. I mean, talk about not being able to get your head around it, right? That's what we're talking about here?

>> Correct. Scientists, engineers, weather forecasters, and others use real numbers and real math, and that's how you want to rank that performance: based on those real numbers multiplied by each other. And so that's why they're floating point numbers.
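As a quick back-of-the-envelope on the scale Mark is describing, the arithmetic below compares how long a fixed, hypothetical workload would take at teraflop, petaflop, and exaflop speeds.

```python
# Back-of-the-envelope comparison of tera-, peta-, and exascale speeds for a
# fixed (hypothetical) workload of 10**21 floating point operations.
SCALES = {
    "terascale (1997)": 1e12,   # floating point operations per second
    "petascale (2007)": 1e15,
    "exascale  (2021)": 1e18,
}

WORKLOAD_FLOP = 1e21  # hypothetical job size: a billion trillion operations

for era, flops in SCALES.items():
    seconds = WORKLOAD_FLOP / flops
    days = seconds / 86_400
    print(f"{era}: {seconds:,.0f} s (~{days:,.1f} days)")

# terascale: 1e9 seconds, roughly 11,574 days (about 31 years)
# petascale: 1e6 seconds, roughly 11.6 days
# exascale:  1e3 seconds, about 17 minutes (prints as ~0.0 days)
```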
>> When I think about supercomputers, I can't help but remember whom I consider the father of supercomputing, Seymour Cray. Cray, of course, is a company that Hewlett Packard Enterprise acquired, and he was kind of an eclectic fellow. Maybe that's unfair, but he was an interesting dude, and very committed to his goal of building the world's fastest computers. When you look back on the industry, how do you think about its development over the years?

>> So one of the events that stands out in my mind is when I was working for the Naval Research Lab outside Stennis Space Center in Mississippi. We were doing weather modeling, and we got a Cray supercomputer. And there was a party when we were able to run a two-week prediction in under two weeks. The scientists and engineers had the math to solve the problem, but the computers of the day would take longer than just sitting, waiting, and looking out the window to see what the weather was like. So when we could make a two-week prediction in under two weeks, there was a celebration. That was in the eighties, early nineties. And now you see that we get weather predictions in eight hours, four hours, and your morning forecasters will get you down to an hour.

>> I mean, if you think about the history of supercomputing, it's really striking to consider the challenges and the efforts, as we were just talking about, a decade-plus to get to the next level. And you see this coming to fruition now, and we're saying exascale likely in 2021. So what are some of the innovations in science, in medicine, or other areas, you mentioned weather, that will be introduced as exascale computing is ushered in? What should people expect?

>> So we kind of alluded to one, and weather affects everybody, everywhere. We can get better weather predictions, which help everybody every morning before you get ready to go to work or travel. And again, storm predictions, hurricane predictions, flood predictions, forest fire predictions, those types of things affect everybody, every day, and those will get improved with exascale. In terms of medicine, we're able to take genetic information and attempt to map that to more drugs more quickly than we have in the past, so we'll have drug discovery happening much faster with an exascale system out there. To some extent that's happening now with COVID and all the work that we're doing; we realize that we're struggling with these current computers to find those solutions as fast as everyone wants them, and exascale computers will help us get there much faster in the future in terms of medicine.

>> Well, and of course, as you apply machine intelligence, AI and machine learning, to the applications running on these supercomputers, that just takes it to another level. People used to joke that you can't predict the weather, and clearly we've seen that get much, much better. Now it's going to be interesting to see with climate change. That's another wildcard variable, but I'm assuming the scientists are taking that into consideration. They've actually been pretty accurate about the impacts of climate change, haven't they?

>> Yeah, absolutely. And the climate change models will get better with exascale computers too. Hopefully we'll be able to build confidence in the public and the politicians in those results with these better, more powerful computers.

>> Yeah, let's hope so. Now let's talk about the Spaceborne Computer and your involvement in that project. The original Spaceborne Computer went up on a SpaceX reusable rocket; the destination, of course, was the International Space Station. So what was the genesis of that project, and what was the outcome?

>> So we were approached by a long-time customer, NASA Ames. NASA Ames says its mission is to model rocket launches, space missions, and the return to Earth, and they had the foresight to realize that their supercomputers here on Earth could not do that mission when we got to Mars. So they wanted to plan ahead, and they said, "Can you take a small part of our supercomputer today and just prove that it can work in space? And if it can't, figure out what we need to do to make it work, et cetera." So that's what we did. We took hardware identical to what's present at NASA Ames and put it on a SpaceX rocket, with no special preparations for it in terms of hardware or anything of that sort, no special hardening, because we want to take the latest technology just before we head to Mars. I tell people you wouldn't want to get in a rocket headed to Mars with a flip phone; you want to take the latest iPhone, right? And all the computers on board current spacecraft are of about that 2007 era we were talking about, so we want to take something new with us. We got the Spaceborne Computer on board, it was installed in the ceiling, because in space there's no gravity and you can put computers in the ceiling, and we immediately got the computer running. We produced a trillion calculations a second, which got us into the teraflop range. The first teraflop in space was pretty exciting.

>> Well, that's awesome. I mean, this is the ultimate example of edge computing.

>> Yes.

>> You mentioned you wanted to see if it could work, and it sounds like it did. There was obviously a long elapsed time to get it up and running, because you have to get it up there, but it sounds like once you did, it was up and running very quickly, so it did work. What were some of the challenges that you encountered, maybe some of the learnings, in terms of getting it up and running?

>> So it's really fascinating. Astronauts are really cool people, but they're not computer scientists, right? So they see a cord, they see a place to plug it in, they plug it in, and of course we're watching live on the video: you plugged it in the wrong spot. So, Mr. Astronaut, can we back up, follow the procedure more carefully, and get this thing plugged in correctly? They're not computer technicians used to installing a supercomputer. But we were able to get the system packaged for the shake, rattle, and roll and G-forces of launch on the SpaceX rocket, we were able to give the astronauts instructions on how to install it and get it going, and we were able to operate it here from Earth and get some pretty exciting results.

>> So your supercomputers are so easy to install, even an astronaut can do it.

>> That's right. Here on Earth we have what we call customer-replaceable units, and we had to replace a component. We looked at our instructions, which are tried and true here on Earth for the average customer to do that, and realized that without gravity we were going to have to update the procedure. So we renamed it an astronaut-replaceable unit, and it worked just fine.

>> Yeah, you can't really send an SE out to space to fix it, can you?

>> No, sir. You have to have very careful instructions for these guys, but they're great. It worked out wonderfully.

>> That's awesome. Let's talk about Spaceborne-2. That's on schedule to go back to the ISS next year. What are you trying to accomplish this time?

>> So, in retrospect, Spaceborne-1 was a proof of concept. Can we package it up to fit on SpaceX? Can we get the astronauts to install it? Can we operate it from Earth? And if so, how long will it last, and do we get the right answers? 100% mission success on that. Now, with Spaceborne-2, we're going to release it to the community of scientists, engineers, and space explorers and say, "Hey, this thing is rock solid, it's proven. Come use it to improve your edge computing." We'd like to preserve the network downlink bandwidth for all that imagery, all that genetic data, all that other data, and process it on the edge, as the whole world is moving to now. Don't move the data; let's compute at the edge. That's what we're going to do with Spaceborne-2.
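A simple, purely hypothetical calculation of the trade-off Mark describes: downlinking raw experiment data versus processing it on board and sending only the results. The data sizes and link speed below are made-up illustrative numbers, not ISS specifications.

```python
# Hypothetical illustration of "don't move the data, compute at the edge."
# All numbers are made up for illustration and are not ISS specifications.
RAW_DATA_GB = 500.0     # e.g. a batch of raw imagery from one experiment
RESULTS_GB = 0.5        # summarized results after on-board processing
DOWNLINK_MBPS = 300.0   # assumed shared downlink rate in megabits per second

def downlink_hours(gigabytes: float, mbps: float) -> float:
    """Hours needed to move a payload over a link of the given rate."""
    bits = gigabytes * 8e9
    return bits / (mbps * 1e6) / 3600

raw = downlink_hours(RAW_DATA_GB, DOWNLINK_MBPS)
processed = downlink_hours(RESULTS_GB, DOWNLINK_MBPS)
print(f"ship raw data down:     {raw:6.2f} hours of downlink time")
print(f"process on board first: {processed:6.2f} hours of downlink time")
print(f"bandwidth freed for other experiments: {raw - processed:6.2f} hours")
```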
>> And so what's your expectation for how long the project is going to last? What does success look like in your mind?

>> So Spaceborne-1 was given a one-year mission, just to see if we could do it, but the idea then was planted: it's going to take about three years to get to Mars and back, so if you're successful, let's see if this computer can last three years. We go up February 1st, if we go on schedule, and we'll be up two to three years, and as long as it works, we'll keep computing, and computing on the edge.

>> That's amazing. I feel like when I started in the industry, it was almost like there was a renaissance in supercomputing. You certainly had Cray, and you had all these other companies; you remember Thinking Machines, and Convex spun out and tried to do a mini supercomputer. There was really a lot of venture capital, and then things got quiet for a while. I feel like now, with all this big data and AI, and all the use cases that you talked about, we're seeing another renaissance in supercomputing. I wonder if you could give us your final thoughts.

>> Yeah, absolutely. So we've got the generic, as you said, floating point operations. We've now got specialized image processing processors, and we have specialized graphics processing units, GPUs. All of the scientists and engineers are looking at these specialized components and bringing them together to solve their missions at the edge faster than ever before. So this heterogeneity of computing is coming together to make humanity a better place.

>> And how are you going to celebrate Exascale Day? Have you got a special cocktail you're going to shake up, or what are you going to do?

>> It's five o'clock somewhere on 10/18, and I'm a Parrothead fan, so I'll probably have a margarita.

>> There you go. All right, well, Mark, thanks so much for sharing your thoughts on Exascale Day. Congratulations on your next project, Spaceborne-2. Really appreciate you coming on theCUBE.

>> Thank you very much. I've enjoyed it.

>> All right, you're really welcome. And thank you for watching, everybody. Keep it right there. This is Dave Vellante for theCUBE. We're celebrating Exascale Day. We'll be right back. (upbeat music)
Computer Science & Space Exploration | Exascale Day
>> From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise.
>> We're back at the celebration of Exascale Day. This is Dave Vellante, and I'm pleased to welcome two great guests: Brian Dansberry is here — he's with the ISS Program Science Office at the Johnson Space Center — and Dr. Mark Fernandez is back, the Americas HPC technology officer at Hewlett Packard Enterprise. Gentlemen, welcome.
>> Thank you.
>> Well, thanks for coming on. And Mark, good to see you again. Brian, I wonder if we could start with you and talk a little bit about your role at the ISS Program Science Office. As a scientist, what's happening these days? What are you working on?
>> Well, it's been my privilege the last few years to be working in the research integration area of the space station office. That's where we're looking at all of the different sponsors — NASA, the other international partners, all the sponsors within NASA — and prioritizing what research gets to go up to station and what research gets conducted. To give you a feel for the magnitude of the task: we're coming up, on November 2nd, on the 20th anniversary of continuous human presence on station. We've been a spacefaring society for coming up on 20 years, and I like to point out, because as an old guy myself it impresses me, that that's 25% of the US population — everybody under the age of 20 has never had a moment when they were alive and we didn't have people living and working in space. Okay, I got off on a tangent there. In those 20 years we've done 3,000 experiments on station, and the station has made a remarkable evolution from a basic platform to what is now a fully functioning national lab, with commercially run research facilities operating all the time. You can think of it as the world's largest satellite bus. We have four or five instruments looking down, measuring all kinds of things in the atmosphere and gathering Earth-observation data, and instruments looking out doing astrophysics research — measuring cosmic rays, an X-ray observatory, all kinds of things. Plus, inside the station you've got racks and racks of experiments going on — typically scores, if not more than 50, at any one time. So the topic of this event is really important to us: data transmission up and down, all of the cameras going on station, the experiments. One of those astrophysics observatories has collected over 15 billion cosmic-ray impact records. The massive amount of data that needs to be collected and transferred for all of these experiments really hits at the core, and I'm glad I'm able to be here and speak with you today on this topic.
>> Well, thank you for that, Brian. As a baby boomer, I grew up with the national pride of the moon landing, and of course we saw the space shuttle, we've seen international collaboration — it's just always been part of our lives. So thank you for the great work you guys are doing there. Mark, you and I had a great discussion about exascale and what it means for society and some of the innovations we can maybe expect over the coming years.
Now I wonder if you could talk about some of the collaboration between what you guys are doing and Brian's team.
>> Yeah, indeed. Thank you for having me — I appreciate it, and that was a great introduction, Brian. I'm the principal investigator on Spaceborne Computer-2, and as the "2" implies, there was one before it. We've worked with Brian and his team extensively over the past few years on high-performance computing on board the International Space Station. Brian mentioned the thousands of experiments that have been done to date and that there are currently 50 or more going on at any one time. Those experiments collect data, and until recently you've had to transmit that data down to Earth for processing — that's a significant amount of bandwidth. So with Spaceborne Computer-2 we're inviting developers and others to take advantage of that onboard computational capability. You mentioned exascale: we plan to get to exascale next year. We're currently in the era called petascale — we've been there since 2007 — so it's taken us a while to make that next leap. Ten years after Earth had a petascale system, in 2017, we were able to put a teraflop system on the International Space Station to prove that we could do a trillion calculations a second in space. That's where the data is originating, and that's where it might be best to process it. So we want to be able to take those capabilities with us, and with HPE acting as a wonderful partner with Brian, NASA and the space station, we think we can do that for many of these experiments.
>> It's mind-boggling. I was talking about the moon landing earlier and the limited computing power of that era, and now we've got water-cooled supercomputers in space. I'd love to explore this notion of private industry developing space-capable computers. It's an interesting model, where computer companies can repurpose technology they're already selling — obviously at greater scale — for space exploration, and apply that supercomputing technology instead of having governments fund proprietary, purpose-built systems that are essentially one use case, if you will. Brian, what are the benefits of that model that you perhaps wouldn't achieve with governments, or contractors, building these proprietary systems?
>> Well, first of all, any tool or new technology that has multiple users is going to mature quicker. You're going to have greater features and greater capabilities — and that's not even just about computers; it's true of anything you're doing. So moving from government as a single user to off-the-shelf products gives you the opportunity to have things that have been proven, where the technology is fully matured.
Now, what had to happen is that we had to mature the space station so that we had a platform where we could test these things and make sure they'll work in the high-radiation environment and be reliable — because first you've got to make sure that safety and reliability are taken care of. That's why, in the space program, you're going to be behind the times in terms of the computing power of the equipment up there: first and foremost, you need to make sure it's reliable and safe. Now, my undergraduate degree was in aerospace engineering, and what we care about as aerospace engineers is how heavy it is and how big and bulky it is, because it's expensive — every pound counts. I once visited Gulfstream Aerospace, and they would pay their employees $1,000 if they could come up with a way to save a pound in building an aircraft, because that means more capacity for flying. It's orders of magnitude more important when you're taking payloads to space. So particularly with Spaceborne Computer, the opportunity to use software to ensure reliability, without having to make the computer radiation-resistant with heavy, bulky protective packaging, is a really important thing, and it's going to be a huge advantage moving forward as we go to the moon and on to Mars.
>> Yeah, that's interesting — your point about COTS, commercial off-the-shelf technology. That's something governments have wanted to leverage for many, many decades. But, Mark, the issue was always, as Brian was just saying, the very stringent and difficult requirements of space. With Spaceborne-1 you got to the point where you had visibility that the economics made sense — it made commercial sense for companies like Hewlett Packard Enterprise — and now we've closed that gap to the point where you're on that innovation curve. Can you talk about that a little bit?
>> Yeah, absolutely. Brian made some excellent points. As he said, anything we do today requires computers, and that's absolutely correct. So I tell people that when you go to the moon, and when you go to Mars, you probably want to go with the iPhone 10 or 11 and not a flip phone. Before Spaceborne went up, you went with early-2000s computing technology — and, like you said, many of the people working on this today weren't even around when the space station began to be occupied, so they don't even know how to program or use that type of computing power. With Spaceborne-1 we sent the exact same products we were shipping to customers that day, so they were current state of the art, and we had a mandate: don't touch the hardware — have all the protection you can via software. That's what we've done. We have several philosophical ways of doing that, we've implemented them in software, and they were proven successful in Spaceborne-1. Now with Spaceborne-2 we're going to begin the experiments so that the rest of the community can figure out that it is economically viable and that it will accelerate their research and progress in space. I'm most excited about that.
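Mark's mandate — "don't touch the hardware, do all the protection via software" — isn't spelled out in any technical detail in this conversation. As a generic, hedged illustration of the idea of software-level hardening (not HPE's actual implementation), one common pattern is to run a computation redundantly and accept only a majority answer, so that a transient fault shows up as disagreement:

```python
# Generic software fault-tolerance sketch (illustrative only; not HPE's method).
from collections import Counter

def vote(run, attempts=3):
    """Run a computation several times and return the majority answer."""
    results = [run() for _ in range(attempts)]
    answer, count = Counter(results).most_common(1)[0]
    if count <= attempts // 2:
        raise RuntimeError("no majority - possible transient fault; retry or fail over")
    return answer

# Example: cross-check a result before trusting it.
checksum = vote(lambda: sum(range(1_000_000)))
print(checksum)
```

The appeal of any such software approach, as Brian notes, is that it adds no launch weight, unlike radiation-hardened packaging.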
Every venture into space, as Brian mentioned, will require some computational capability, and HPE has figured out that the economics are there. We need to bring customers through Spaceborne-2 so they can learn that we are reliable yet current state of the art, and that we can benefit them and all of humanity.
>> Guys, I want to ask you kind of a two-part question, and Brian, I'll start with you — it's somewhat philosophical. My understanding was, and I want to say this was probably around the time of the second Bush administration, maybe before that, that as technology progressed there was a debate: should we put our resources on the moon, because of its proximity to Earth, or should we go where no man — or woman — has gone before and get to Mars? What's the thinking today, Brian, on that balance between moon and Mars?
>> Well, our plans today are to get back to the moon by 2024 — that's the Artemis program. It's exciting, and it makes sense from an engineering standpoint: you take baby steps as you continue to move forward. You have the opportunity to learn while you're still relatively close to home — you can get there in days, not months. If you're going to Mars, to have everything line up properly you're looking at a multi-year mission. It may take you nine months to get there, then you have to wait for the Earth and Mars to get back into the right positions to come back on that same kind of trajectory, so you have to be there for more than a year before you can turn around and come back. And Mark was talking about the computing power — right now the beautiful thing about the space station is that it's right there, orbiting above us, only 250 miles away. You can test out all of these technologies, and you can rely on the ground to keep track of systems; there's not that much delay in the telemetry coming back. But as you get to the moon, and definitely as you get out to Mars, there are enough minutes of delay that you've got to take the computing power with you. You've got to take everything you need to make the decisions you need to make, because there isn't time to get that information back to Earth, have people analyze the situation, and then tell you the next step — that may be too late. So you've got to bring the computing power with you.
>> So exascale brings some new possibilities, both for the moon and for Mars. I know Spaceborne-1 did some simulations relevant to Mars — we'll talk about that. But Brian, what are the things you hope to get out of exascale computing that maybe you couldn't do with previous generations?
>> Well, Mark hit on a key point: bandwidth up and down is, of course, always a limitation, and the more computing and data analysis you can do on site, the more efficient you can be with parceling out that bandwidth. To give you a feel for that, think about those Earth-observing and astronomical observatories I mentioned collecting data, and think about the hours of video being recorded daily as the astronauts document what they're doing. For many of the biological experiments, one of the key pieces of data coming back
is the video of the microbes growing, or the plants growing, or whatever fluid physics experiment is going on. We do a lot of colloids research — suspended particles inside a liquid — and high-speed video is key to that kind of research. Right now we've got something called the ISS Experience going on up there, which is recording and will eventually put out a series — basically a movie — in virtual reality. That kind of data is so huge, with a 360-degree camera up there recording, that a lot of the time it still comes back on hard drives when the SpaceX vehicles return to Earth. We record video all the time — a tremendous amount of bandwidth. And as you get to the moon and further out, you can imagine how much more limiting that bandwidth is.
>> Yeah, we used to joke in the old mainframe days that the fastest way to get data from point A to point B was called CTAM — the Chevy Truck Access Method. Just load up a truck with whatever it was, tapes or hard drives. So, Mark, of course Spaceborne-2 is coming on. Spaceborne-1 really was a pilot, but it proved that commercial computers could actually work for long durations in space and that the economics were feasible. Thinking about future missions and Spaceborne-2, what are you hoping to accomplish?
>> I'm hoping to bring that success from Spaceborne-1 to the rest of the community with Spaceborne-2, so that they can realize they can do their processing at the edge. The purpose of exploration is insight, not data collection. All of these experiments begin with data collection — whether that's video, or samples, or mold growing, et cetera — but having collected that data, we must process it to turn it into information and insight, and the faster we can do that, the faster we get our results and the better things are. I often talk to college, high school, and sometimes grammar school students about this need to process at the edge and how communication issues can prevent you from doing that. For example, many of us remember the communications with the moon. The moon is about 250,000 miles away, if I remember correctly, and the speed of light is 186,000 miles a second, so even at the speed of light it takes more than a second for communications to get to the moon and back. I can remember being stressed out when Houston would make a statement and we were wondering whether the astronauts could answer. They answered as soon as possible, but that one-to-two-second delay, natural as it was, drove us crazy and made us nervous — we were worried about them and the success of the mission. Mars is millions of miles away. So flip it around: if you're a Mars explorer and you look out the window and there's a big red cloud coming at you that looks like a tornado, you might want to do some Mars dust-storm modeling right then and there to figure out the safest thing to do. You literally don't have the time to get that data back to Earth, have it processed, and get the answer back. You've got to take those computational capabilities with you. And we're hoping that the 50-plus experiments on board the ISS can show that, in order to better accomplish their missions on the moon
and on Mars.
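Mark's latency point is easy to make concrete. Using round figures — the Moon at about 239,000 miles, Mars between roughly 35 million and 250 million miles depending on orbital positions, light at 186,000 miles per second — the one-way delays work out as follows (a quick editorial calculation, not from the interview):

```python
# One-way signal delay at the speed of light (approximate distances in miles).
SPEED_OF_LIGHT_MPS = 186_000          # miles per second

destinations = {
    "Moon": 239_000,
    "Mars (closest approach)": 35_000_000,
    "Mars (farthest)": 250_000_000,
}

for name, miles in destinations.items():
    seconds = miles / SPEED_OF_LIGHT_MPS
    print(f"{name:25s} {seconds / 60:6.1f} min ({seconds:8.1f} s) one way")
```

A Mars explorer facing a round-trip delay of roughly six to forty-five minutes plainly cannot phone home for dust-storm modeling, which is exactly the argument for carrying the compute along.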
I mean, are there other, like, wish list items, bucket bucket list items that people are talking about? >>Yeah, two of them. And these air kind of hypothetical. And Brian kind of alluded to them. Uh, one is having the data on board. So an example that halo developers talk to us about is Hey, I'm on Mars and I see this mold growing on my potatoes. That's not good. So let me let me sample that mold, do a gene sequencing, and then I've got stored all the historical data on space borne computer of all the bad molds out there and let me do a comparison right then and there before I have dinner with my fried potato. So that's that's one. That's very interesting. A second one closely related to it is we have offered up the storage on space borne computer to for all of your raw data that we process. So, Mr Scientist, if if you need the raw data and you need it now, of course, you can have it sent down. But if you don't let us just hold it there as long as they have space. And when we returned to Earth like you mentioned, Patrick will ship that solid state disk back to them so they could have a new person, but again, reserving that network bandwidth, uh, keeping all that raw data available for the entire duration of the mission so that it may have value later on. >>Great. Thank you for that. I want to end on just sort of talking about come back to the collaboration between I S s National Labs and Hewlett Packard Enterprise, and you've got your inviting project ideas using space Bourne to during the upcoming mission. Maybe you could talk about what that's about, and we have A We have a graphic we're gonna put up on DSM information that you can you can access. But please, mark share with us what you're planning there. >>So again, the collaboration has been outstanding. There. There's been a mention off How much savings is, uh, if you can reduce the weight by a pound. Well, our partners ice s national lab and NASA have taken on that cost of delivering baseball in computer to the international space station as part of their collaboration and powering and cooling us and giving us the technical support in return on our side, we're offering up space borne computer to for all the onboard experiments and all those that think they might be wanting doing experiments on space born on the S s in the future to take advantage of that. So we're very, very excited about that. >>Yeah, and you could go toe just email space born at hp dot com on just float some ideas. I'm sure at some point there'll be a website so you can email them or you can email me david dot volonte at at silicon angle dot com and I'll shoot you that that email one or that website once we get it. But, Brian, I wanna end with you. You've been so gracious with your time. Uh, yeah. Give us your final thoughts on on exa scale. Maybe how you're celebrating exa scale day? I was joking with Mark. Maybe we got a special exa scale drink for 10. 18 but, uh, what's your final thoughts, Brian? >>Uh, I'm going to digress just a little bit. I think I think I have a unique perspective to celebrate eggs a scale day because as an undergraduate student, I was interning at Langley Research Center in the wind tunnels and the wind tunnel. I was then, um, they they were very excited that they had a new state of the art giant room size computer to take that data we way worked on unsteady, um, aerodynamic forces. So you need a lot of computation, and you need to be ableto take data at a high bandwidth. 
To be able to do that, they'd always, you know, run their their wind tunnel for four or five hours. Almost the whole shift. Like that data and maybe a week later, been ableto look at the data to decide if they got what they were looking for? Well, at the time in the in the early eighties, this is definitely the before times that I got there. They had they had that computer in place. Yes, it was a punchcard computer. It was the one time in my life I got to put my hands on the punch cards and was told not to drop them there. Any trouble if I did that. But I was able thio immediately after, uh, actually, during their run, take that data, reduce it down, grabbed my colored pencils and graph paper and graph out coefficient lift coefficient of drag. Other things that they were measuring. Take it back to them. And they were so excited to have data two hours after they had taken it analyzed and looked at it just pickled them. Think that they could make decisions now on what they wanted to do for their next run. Well, we've come a long way since then. You know, extra scale day really, really emphasizes that point, you know? So it really brings it home to me. Yeah. >>Please, no, please carry on. >>Well, I was just gonna say, you know, you talked about the opportunities that that space borne computer provides and and Mark mentioned our colleagues at the I S s national lab. You know, um, the space station has been declared a national laboratory, and so about half of the, uh, capabilities we have for doing research is a portion to the national lab so that commercial entities so that HP can can do these sorts of projects and universities can access station and and other government agencies. And then NASA can focus in on those things we want to do purely to push our exploration programs. So the opportunities to take advantage of that are there marks opening up the door for a lot of opportunities. But others can just Google S s national laboratory and find some information on how to get in the way. Mark did originally using s national lab to maybe get a good experiment up there. >>Well, it's just astounding to see the progress that this industry is made when you go back and look, you know, the early days of supercomputing to imagine that they actually can be space born is just tremendous. Not only the impacts that it can have on Space six exploration, but also society in general. Mark Wayne talked about that. Guys, thanks so much for coming on the Cube and celebrating Exa scale day and helping expand the community. Great work. And, uh, thank you very much for all that you guys dio >>Thank you very much for having me on and everybody out there. Let's get the XO scale as quick as we can. Appreciate everything you all are >>doing. Let's do it. >>I've got a I've got a similar story. Humanity saw the first trillion calculations per second. Like I said in 1997. And it was over 100 racks of computer equipment. Well, space borne one is less than fourth of Iraq in only 20 years. So I'm gonna be celebrating exa scale day in anticipation off exa scale computers on earth and soon following within the national lab that exists in 20 plus years And being on Mars. >>That's awesome. That mark. Thank you for that. And and thank you for watching everybody. We're celebrating Exa scale day with the community. The supercomputing community on the Cube Right back
The Impact of Exascale on Business | Exascale Day
>> From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise.
>> Welcome, everyone, to theCUBE's celebration of Exascale Day. Shaheen Khan is here. He's a founding partner and analyst at OrionX, and, among other things, he is the co-host of Radio Free HPC. Shaheen, welcome. Thanks for coming on.
>> Thanks for having me, Dave. Great to be here. How are you doing?
>> Well, thanks. It's crazy doing these things — COVID, remote interviews. I wish we were face to face at a supercomputing show, but hey, this thing is working; we can still have great conversations. And I love talking to analysts like you, because you bring an independent perspective and a very wide observation space. So, like many analysts, you probably have a mental model or a market model that you look at. Maybe talk about your work and how you look at the market, and we can get into some of the megatrends that you see.
>> Very well. Let me quickly set the scene. We fundamentally track the megatrends of the information age, and, because we're in the information age, digital transformation falls out of that. The megatrends that drive it, in our mind, are IoT, because that's the fountain of data; 5G, because that's how it's going to get communicated; AI and HPC, because that's how we're going to make sense of it; blockchain and cryptocurrencies, because that's how it's going to get transacted — how value is going to get transferred from place to place; and finally quantum computing, because that exemplifies how things are going to get accelerated.
>> So let me ask you: I spent a lot of time at IDC, and I had the pleasure of having the high-performance computing group report in to me. I wasn't an HPC analyst, but over time you listen to those guys and you learn, and as I recall, HPC was everywhere. It sounds like we're still seeing that trend — whether it was the Internet itself, certainly big data coming into play, defense, obviously. Is your background more HPC — are you a high-performance computing expert and market watcher who then sees it permeating into all these trends? Is that a fair statement?
>> That's a fair statement. I did grow up in HPC. My first job out of school was working for an IBM fellow doing payroll processing in the old days, and it went from there. I worked for Cray Research, I worked for Floating Point Systems, so I grew up in HPC. But over time we had experiences outside of HPC: for a number of years I went and did commercial enterprise computing and learned about transaction processing, business intelligence, data warehousing and things like that, then e-commerce, then web technology. So over time it expanded. But HPC is like a bug: you get it and you can't get rid of it, because it's just so inspiring. So supercomputing has always been my home, so to say.
>> Well, the reason I ask is that I wanted to touch on a little history of the industry. There was kind of a renaissance many years ago, and you had all these startups — Kendall Square Research, Danny Hillis' Thinking Machines, Convex trying to make mini supercomputers.
There was just tons of money flowing in, and then things consolidated a bit and got very, very specialized. And then with the big data craze we've seen HPC really at the heart of all that. So what's your take on the ebb and flow of the HPC business and how it's evolved?
>> Well, HPC was always trying to make sense of the world — to make sense of nature. And as much as we do know about nature, there's a lot we don't know, and you can classify problems in nature into basically linear and nonlinear problems. The linear ones are easy; they've already been solved. Of the nonlinear ones, some are easy and many are hard, and the nonlinear, hard, chaotic problems are the ones you really need to solve the closer you get. So HPC was marching along trying to solve these things. It had a whole process — the scientific method, going back to Galileo — with experimentation as part of it: between theory and experiment you look at the data, you theorize, and then you experiment to prove the theories. Then simulation, using computers to validate things, eventually became a third pillar of science, so you had theory, experiment and simulation. All of that was going on until the rest of the world, thanks to digitization, started needing some of those same techniques. Why? Because you've got too much data. Simply put, there's too much data to ship to the cloud and too much data to make sense of without math and science. So now enterprise computing problems are starting to look like scientific problems, enterprise data centers are starting to look like national lab data centers, and there's a convergence that has been taking place gradually over the past three or four decades. It's really showing now.
>> Interesting. I want to ask you about competition — I always like to talk to analysts about the competitive landscape. Is the competition in HPC between vendors, or between countries?
>> Well, that's a very interesting question, because our other thesis is that we are moving a bit beyond geopolitics to techno-politics. There are now imperatives at the political level driving some of these decisions. Obviously 5G is very visible as a piece of technology that is now in the middle of political discussions. COVID-19, as you mentioned, is a global challenge that needs to be solved at that level. AI — who has access to how much data and what sort of algorithms — and it turns out, as we all know, that for AI you need a lot more data than you thought you did. So suddenly data superiority is more important than ever; it can lead to information superiority. So yes, that's all happening. But the actors, of course, continue to be the vendors, which are the embodiment of the algorithms, the data, and the systems and infrastructure that feed the applications, so to say.
>> So let's get into some of these megatrends, and maybe I'll ask you some Columbo questions and we can geek out a little bit. Let's start with AI. Again, when I started in the industry it was all AI and expert systems — it was all the rage — and then we had this long AI winter, even though the technology never went away.
But there were at least two things that happened: you had all this data, and the cost of computing came down so rapidly over the years. So now AI is back, and we're seeing all kinds of applications getting infused into virtually every part of our lives — people trying to advertise to us, et cetera. So talk about the intersection of AI and HPC. What are you seeing there?
>> Yeah, definitely. Like you said, AI has a long history. It came out of the MIT Media Lab and the AI Lab they had back then, and it was really, as you mentioned, all focused on expert systems. It was about logical processing — a lot of if-then-else. Then it morphed into search: how do I find the right answer, the needle in the haystack? But at some point it became computational. Neural nets are not a new idea. I remember we had a researcher in our lab doing neural networks years ago, and he kept saying how he was running out of computational power; we wondered what was taking all that time, and it turns out that it really is computational. So when deep neural nets showed up a decade or so ago, it finally started working, and it was a confluence of a few things: the algorithms were there, the data sets were there, and the technology was there, in the form of GPUs and accelerators, which finally made it tractable. So you really could say — and I do say — that AI was languishing for decades before HPC technologies reignited it. And when you look at deep learning, which is really the only part of AI that has been prominent and has made all this stuff work, it's all HPC: it's all matrix algebra, it's all signal processing, the algorithms are computational, the infrastructure is similar to HPC, and the skill set you need is the skill set of HPC. I see a lot of interest in HPC talent right now, in part motivated by AI.
>> Awesome, thank you. Then I want to talk about blockchain, and I can't talk about blockchain without talking about crypto — you've written about that. Obviously supercomputers play a role; I think you had written that 50 of the top crypto supercomputers actually reside in China. A lot of times the vendor community doesn't like to talk about crypto because of the fraud and everything else, but it's one of the more interesting use cases — actually the primary use case for blockchain, even though blockchain has so much other potential. What do you see in blockchain and the potential of that technology? And maybe we can work in a little crypto talk as well.
>> Yeah, I think one simple way to think of blockchain is in terms of so-called permissioned and permissionless. Permissioned blockchains are where everybody kind of knows everybody: you don't really get to participate without people knowing who you are and, as a result, having some basis to trust your behavior and your transactions. Things are a lot calmer and easier there — you don't really need all the supercomputing activity. Whereas, just as for AI the assertion was that intelligence is computable — and with some of these exascale technologies we're getting to that point — for permissionless blockchains the assertion is that trust is computable. And it turns out that for trust to be computable,
it's really computationally intensive, because you want to provide an incentive basis such that good actors are rewarded and bad actors are punished, and it's worth their while to put all their effort toward good behavior. That's really what you see embodied in something like the Bitcoin system, where the chain has been safe over many years — no attacks, no breaches. Now, people have lost money because they forgot a password or the custody of their accounts wasn't trustworthy, but the chain itself has managed to deliver that. So that's an example of computational intensity yielding trust, and that suddenly becomes really interesting: intelligence, trust — what else is computable, that we could do if we had enough power?
>> Well, that's really interesting the way you described it — essentially the confluence of cryptography, software engineering and game theory, where the bad actors are incentivized to mine Bitcoin rather than rip people off, because they're better off that way. Okay, so make the connection — I mean, you sort of did — but I want to better understand the connection between supercomputing, HPC and blockchain. We get crypto for sure, like mining Bitcoin, which gets harder and harder, but you mentioned there are other things we can potentially compute, like trust. What else are you thinking of there?
>> Well, I think the next big thing we're really seeing is in communication. It turns out, as I was saying earlier, that these highly computationally intensive algorithms and models show up in all sorts of places. In 5G communication there's something called MIMO — multiple-in, multiple-out — and optimally managing that traffic, so that you know exactly what beam it's going to and which antenna it's coming from, turns out to be a non-trivial partial differential equation. So the next thing you know, you've got HPC in there, and you didn't expect it. Because there's so much data to be sent, you really have to do some data reduction and data processing almost at the point of inception, if not at the point of aggregation, and that has led to edge computing and edge data centers. There, too, people want some level of computational capability in place: you're building a microcontroller, which traditionally would just be a small, low-power, low-cost thing, and people want vector instructions there, they want matrix algebra there, because it makes sense to process the data before you have to ship it. So HPC is cropping up really everywhere. And then finally, when you're trying to accelerate things: GPUs have obviously been a great example of that, mixed-signal technologies are coming to do analog and digital at the same time, and quantum technologies are coming. So you can do the usual analyst two-by-two — analog, digital, classical, quantum — and see what lies where. All of that is coming, and all of it is essentially resting on HPC.
>> That's interesting — I didn't realize HPC had that position in 5G with MIMO. That's a great example. And then IoT — I want to ask you about that, because there's a lot of discussion about real-time inferencing, AI inferencing at the edge, and you're seeing new computing architectures potentially emerging.
NVIDIA's acquisition of Arm, perhaps — a more efficient, maybe lower-cost way of doing specialized computing at the edge. But it sounds like you're envisioning actual supercomputing at the edge. Of course, we've talked to Dr. Mark Fernandez about the Spaceborne computers — that's the ultimate edge; you have supercomputers hanging from the ceiling of the International Space Station. But how far away are we from this sort of edge? Maybe space is an extreme example, but do you think factories and windmills and all kinds of edge examples are places where supercomputing will play a local role?
>> Well, I think initially you're going to see it on base stations and antenna towers, where you're aggregating data from a large number of endpoints and sensors that are gathering the data, maybe doing some level of local processing and then shipping it to the local antenna, because it's no more than 100 meters away, sort of a thing. But there's enough there that that system can now do the processing, do some level of learning, and decide what data to ship back to the cloud, what data to get rid of, and what data to just hold. Those edge data centers sitting on top of an antenna could have half a dozen GPUs in them — they're pretty powerful things. They could have one, they could have two, depending on what you do. A good case study is surveillance cameras. You don't really need to ship every image back to the cloud, and if you ever need it, the person who needs it is going to be on the scene, not back at the cloud. So there's really no sense in sending it — certainly not every frame. Maybe you do some processing and send an image every five or ten seconds, so you still have a record, but you've reduced your bandwidth by orders of magnitude. Things like that are happening, and making sense of all of that means recognizing when things change: did somebody come into the scene, or did it just become night? That sort of decision can now be automated, and what fundamentally makes it happen may not be supercomputing of exascale class, but it's definitely HPC — definitely numerically oriented technology.
>> Shaheen, what do you see happening in chip architectures? You see the classical Intel approach of trying to put as much function on the real estate as possible; we've seen the emergence of alternative processors, particularly GPUs, but also FPGAs; and I mentioned the Arm acquisition. You're seeing these alternative processors really gain momentum, and you're seeing data processing units emerge — kind of interesting trends going on there. What do you see, and what's the relationship to HPC?
>> Well, I think a few things are going on there. One, of course, is essentially the end of Moore's law: you cannot make the cycle time any faster, so you have to make architectural adjustments. And if you have a killer app that lends itself to large volume, you can build silicon that is especially good for that. Graphics and gaming were an example: people said, my God, I've got all these cores in there, why can't I use them for computation? So everybody got busy making it 64-bit capable and adding the capabilities computation required, and then people said, oh, I know, I can use that for AI — and once you move it to AI you say, well, I don't really need 64 bits, maybe I can do it in 32 or 16.
So you do it for that, and then tensor units, of course, come about. So there's that progression of architecture trumping, basically, cycle time. That's one thing. The second thing is scale-out, decentralization and distributed computing, which means that the inter- and intra-communication among all these nodes becomes a big enough issue that maybe it makes sense to go to a DPU, or to do some level of edge data centers like we were talking about. The third thing, really, is that in many of these cases you have data streaming. What's coming from IoT, especially at the edge, is streaming data, and when data is streaming, suddenly new architectures like FPGAs become really interesting and hold promise. So I do see FPGAs becoming more prominent for that reason. But then, finally, you've got to program all of these things, and that's a real difficulty, because now you need to get three different ecosystems together — mobile programming, embedded programming and cloud programming — and those are really three different developer types. You can't hire somebody who's good at all three; maybe you can, but not many. So all of those are challenges driving this industry.
>> You kind of referred to this distributed network — a lot of people refer to the next-generation cloud as a hyper-distributed system, when you include the edge and multiple clouds, et cetera, even space, though maybe that's too extreme. But to your point, at least as I inferred it, there's an issue of latency — there's the speed of light. So what's the implication for HPC? Does that mean I have to have all the data in one place? Can I move the compute to the data? Architecturally, what are you seeing there?
>> Well, you fundamentally want to optimize when to move data and when to move compute. Is it better to move data to compute, or to bring compute to data, and under what conditions? The answer is going to be different for different use cases. It's really: is it worth my while to make the trip, get my processing done and come back, or should I just develop processing capability right here? Moving data is really expensive, and, relatively speaking, it has become even more expensive: the price of everything has dropped, but its price has dropped less than, say, processing. So it now makes sense to do a lot of local processing, because processing is cheap and moving data is expensive. DPUs are an example of that, and we call this in-situ processing — let's not move data if we don't have to. Except that we live in the age of big data, so data is huge and wants to be moved, and that optimization, I think, is part of what you're referring to.
>> Yeah. A couple of examples might be autonomous vehicles — you've got to make decisions in real time, you can't send data back to the cloud — and the flip side, which we talked about with the Spaceborne computers: you're collecting all this data, and at some point, maybe a year or two after it's lived out its purpose, you ship it back in a bunch of disk drives or flash drives, load it into some kind of HPC system, have at it, and do more modeling and learning from that data corpus, right?
>> Right. Exactly, exactly.
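Shaheen's move-the-data-or-move-the-compute tradeoff can be made concrete with a small break-even check. All numbers below are illustrative assumptions, not figures from the conversation; his surveillance-camera example — one frame every ten seconds instead of thirty per second — is the same logic, roughly a 300x reduction.

```python
# Illustrative "move the data or move the compute" break-even sketch.
def transfer_hours(gigabytes: float, link_mbps: float) -> float:
    return gigabytes * 8000 / link_mbps / 3600      # GB -> megabits -> seconds -> hours

def best_plan(dataset_gb, result_gb, link_mbps, local_compute_hours):
    move_data = transfer_hours(dataset_gb, link_mbps)
    compute_in_place = local_compute_hours + transfer_hours(result_gb, link_mbps)
    return ("compute in place", compute_in_place) if compute_in_place < move_data \
           else ("move the data", move_data)

# Assumed: 2 TB of sensor data, 2 GB of results, a 200 Mbit/s link, 3 h of local compute.
plan, hours = best_plan(2000, 2, 200, 3.0)
print(f"{plan}: ~{hours:.1f} h")
```

With these assumed numbers, shipping the raw data would take roughly 22 hours while computing in place finishes in about three — the kind of arithmetic that, as Shaheen says, comes out differently for every use case.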
I mean, driverless vehicles are a great example, because they're obviously coming fast and furious, no pun intended. And they dovetail nicely with the smart city, which dovetails nicely with IoT, because it's mostly in urban areas — you can afford to have a lot of antennas, so you can get the 5G density you want, and it requires those latencies. There's a notion of: what if my fleet could communicate with each other? What if the car in front of me could let me know what it sees? That sort of thing. So vehicle fleets are going to be another opportunity, and all of that can bring everything we've talked about into one place.
>> Well, that's interesting. Okay, so the fleets talking to each other — kind of a Byzantine fault tolerance problem. That's kind of cool. I want to close on quantum. It's hard to get your head around; sometimes you see the demonstrations of quantum and it's not a one or a zero, it can be both, and you go, what? How can that be? And of course it's not stable, and it looks like it's quite a ways off, but the potential is enormous. It's also scary, because we think all of our passwords are already insecure and every password we know is going to get broken. But give us the quantum 101, and let's talk about the implications.
>> All right, very well. So first off, we don't need to worry about our passwords quite yet — that's still a ways off. It is true that an algorithm came along that showed how quantum computers could factorize numbers relatively fast, and prime factorization is at the core of a lot of cryptography algorithms. So if you can factorize — say you take the number 21; well, that's three times seven, and three and seven are prime numbers — that's an example of a problem that has been solved with quantum computing. But if you have an actual number with, say, 2,000 digits in it, that's much harder to do. It's impossible for existing computers, and even for quantum computers it's ways off. So, as you mentioned, qubits can be somewhere between zero and one, and you're trying to create qubits. There are many different ways of building them — trapped ions, trapped atoms, photons, sometimes supercooled, sometimes not — but fundamentally you're trying to get these quantum-level elements or particles into superposition and entangled states, and there are different ways of doing that, which is why the quantum computers out there are pursuing a lot of different approaches. Somebody said it's apt that quantum computing is simultaneously overhyped and underestimated, and that's true: a lot of the effort is still ways off, but on the other hand it's so exciting that you don't want to miss out if it's going to get somewhere. So it is rapidly progressing, and it has now morphed into three different segments: quantum computing, quantum communication and quantum sensing. Quantum sensing is when you can measure really precise, minute things, because when you perturb them, the quantum effects allow you to measure them. Quantum communication is working its way in, especially in financial services, initially with quantum key distribution, where the key to your cryptography is sent in a quantum way
Then there are efforts to build a quantum Internet, where you actually have a quantum photon going down the fiber-optic lines, and Brookhaven National Labs just demonstrated that a couple of weeks ago, going pretty much across, you know, Long Island, like 87 miles or something. So it's really coming, and, fundamentally, it's going to be brand new algorithms. >>So these examples that you're giving, these are all in the lab, right? They're lab projects, actually? >>Some of them are lab projects. Some of them are out there. Of course, even traditional WiFi has benefited from quantum analysis and, you know, algorithms. But some of them are really real, like quantum key distribution. If you're a bank in New York City, you very well could go to a company and buy quantum key distribution services and ship it across, you know, the waters to New Jersey, and that is happening right now. Some researchers in China and Austria showed a quantum connection from, like, somewhere in China to Vienna, even as far away as that. When you then put in the satellites and the nano-satellites and, you know, the bent-pipe networks that are being talked about out there, that brings another flavor to it. So, yes, some of it is, like, real. Some of it is still kind of in the lab. >>I said I would end with quantum, but I just wanna ask: you mentioned earlier the sort of geopolitical battles that are going on. Who are the ones to watch, the horses on the track? Obviously the United States; China; Japan, still pretty prominent. How is that shaping up in your view? >>Well, without a doubt, it's the US's to lose, because it's got the density and the breadth and depth of all the technologies across the board. On the other hand, the information age is a new age, the information revolution is not trivial, and when revolutions happen, unpredictable things happen, so you gotta get it right. And one of the things that these technologies enforce, that these revolutions enforce, is not just kind of technological and social and governance change, but also culture, right? The example I give is that if you're a farmer, it takes you maybe a couple of seasons before you realize that you better get up at the crack of dawn and you better plant in this particular season, or you're gonna starve six months later. So you do that two, three years in a row, and a culture has now been enforced on you, because that's what it takes. And then when you go to industrialization, you realize that, gosh, I need these factories, and then, you know, I need workers, and then the next thing you know, you've got 9-to-5 jobs, and you didn't have that before. You didn't have a command-and-control system; you had it in the military, but not in business. And some of those cultural shifts take place and change. So I think the winner is going to be whoever shows the most agility in terms of cultural norms and governance and pursuit of actual knowledge, and not being distracted by what you think, but by what actually happens. And gosh, I think these exascale technologies can make the difference. >>Shaheen Khan, great cast. Thank you so much for joining us to celebrate Exascale Day, which is, uh, on 10/18, and so, really, appreciate your insights. >>Likewise. Thank you so much. >>All right. Thank you for watching. Keep it right there. We'll be back with our next guest right here in the Cube. We're celebrating Exascale Day. Right back.
SUMMARY :
Shaheen Khan of OrionX, co-host of Radio Free HPC, joins Dave on theCUBE to celebrate Exascale Day (10/18). They walk through the megatrends that have driven HPC over the past few decades, from the early supercomputing era of Cray Research and Kendall Square Research through the AI winters and today's data-hungry machine learning, crypto and edge workloads, and discuss why architecture now trumps cycle time: DPUs, FPGAs for streaming IoT data, edge data centers, and the choice of moving compute to data versus data to compute. The conversation closes with the state of quantum computing, communication and sensing, and with the geopolitical and cultural agility needed to win the information age.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Shaheen Khan | PERSON | 0.99+ |
China | LOCATION | 0.99+ |
Vienna | LOCATION | 0.99+ |
Austria | LOCATION | 0.99+ |
MIT Media Lab | ORGANIZATION | 0.99+ |
New York City | LOCATION | 0.99+ |
Orion X | ORGANIZATION | 0.99+ |
New Jersey | LOCATION | 0.99+ |
50 | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
9 | QUANTITY | 0.99+ |
Shane | PERSON | 0.99+ |
Long Island | LOCATION | 0.99+ |
AI Lab | ORGANIZATION | 0.99+ |
Cray Research | ORGANIZATION | 0.99+ |
Brookhaven National Labs | ORGANIZATION | 0.99+ |
Japan | LOCATION | 0.99+ |
Kendall Square Research | ORGANIZATION | 0.99+ |
5 jobs | QUANTITY | 0.99+ |
Cove | PERSON | 0.99+ |
2000 digits | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.99+ |
Danny Hillis | PERSON | 0.99+ |
a year | QUANTITY | 0.99+ |
half a dozen | QUANTITY | 0.98+ |
third thing | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
three | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
64 | QUANTITY | 0.98+ |
Exa Scale Day | EVENT | 0.98+ |
32 | QUANTITY | 0.98+ |
six months later | DATE | 0.98+ |
64 bit | QUANTITY | 0.98+ |
third pillar | QUANTITY | 0.98+ |
16 | QUANTITY | 0.97+ |
first | QUANTITY | 0.97+ |
HBC | ORGANIZATION | 0.97+ |
one place | QUANTITY | 0.97+ |
87 miles | QUANTITY | 0.97+ |
tens | QUANTITY | 0.97+ |
Mark Fernandez | PERSON | 0.97+ |
zero | QUANTITY | 0.97+ |
Shaheen | PERSON | 0.97+ |
seven | QUANTITY | 0.96+ |
first job | QUANTITY | 0.96+ |
HPC Technologies | ORGANIZATION | 0.96+ |
two | QUANTITY | 0.94+ |
three different ecosystems | QUANTITY | 0.94+ |
every 10 seconds | QUANTITY | 0.94+ |
every five seconds | QUANTITY | 0.93+ |
Byzantine | PERSON | 0.93+ |
Exa scale day | EVENT | 0.93+ |
second thing | QUANTITY | 0.92+ |
Moore | PERSON | 0.9+ |
years ago | DATE | 0.89+ |
HPC | ORGANIZATION | 0.89+ |
three years | QUANTITY | 0.89+ |
three different developer | QUANTITY | 0.89+ |
Exascale Day | EVENT | 0.88+ |
Galileo | PERSON | 0.88+ |
three times | QUANTITY | 0.88+ |
a couple of weeks ago | DATE | 0.85+ |
exa scale day | EVENT | 0.84+ |
D. C | PERSON | 0.84+ |
many years ago | DATE | 0.81+ |
a decade ago | DATE | 0.81+ |
about | DATE | 0.81+ |
C two | TITLE | 0.81+ |
one thing | QUANTITY | 0.8+ |
10. 18 | DATE | 0.8+ |
Dr | PERSON | 0.79+ |
past 34 decades | DATE | 0.77+ |
two things | QUANTITY | 0.76+ |
Leighton | ORGANIZATION | 0.76+ |
11 simple way | QUANTITY | 0.75+ |
21 place | QUANTITY | 0.74+ |
three different segments | QUANTITY | 0.74+ |
more than 100 m | QUANTITY | 0.73+ |
FPG | ORGANIZATION | 0.73+ |
decades | QUANTITY | 0.71+ |
five | QUANTITY | 0.7+ |
Joe Fernandes, Red Hat | Red Hat Summit 2020
>> From around the globe, it's the CUBE with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >> Hi, I'm Stu Miniman, and this is the CUBE's coverage of Red Hat Summit 2020, happening digitally. We're connecting with Red Hat executives, thought leaders, practitioners, wherever they are around the globe, bringing them remotely into this online event. Happy to welcome back to the program Joe Fernandez, who's the Vice President and General Manager of Core Cloud Platforms with Red Hat. Joe, thanks so much for joining us. >> Yeah, thanks for having me. Glad to be here. >> All right, so, Joe, you know, Cloud, of course, has been a conversation we've been having for a lot of years. When I went to Red Hat Summit last year, and when I went to IBM Think last year, there was discussion of moving from kind of chapter one, if you will, to chapter two. Some of the labels that we put on things back in the early days, like Hybrid Cloud and Multicloud, they're coming into a little bit clearer picture. So, let's just give a high level: what are you seeing from your customers when they talk about a Hybrid and Multicloud environment? What does that mean to your customers? And therefore, how is Red Hat meeting them where they are? >> Yeah, sure. So, Red Hat obviously serves an enterprise customer base, and what we've seen in that customer base, really since the start, and it's really informed our strategy, is the fact that all their applications aren't going to run in one place, right? So they're really employing a hybrid cloud strategy, a Hybrid and Multicloud strategy, that spans from their data centers out to a public cloud, typically then out to multiple public clouds as their cloud investments grow, as they move more applications. And now, even out to the edge for many of those customers. So that's the newest footprint that we're getting asked about. So really we think of that as the open hybrid cloud. And, you know, our goal is really to provide a consistent platform for applications regardless of where they run across all those environments. >> Yeah. Let's dig down a second on that, because we've had consistency for quite a while. You look at the largest cloud provider out there; they said, for a hybrid environment, we'll give you the exact same hardware that we're running in the public cloud, you know, put that in your environment. Of course, Red Hat's a software company. You've lived across lots of platforms for Red Hat's entire existence. So, you know, where is that consistency needed? How do you think about how Red Hat does things, maybe the same and a little different than some of the other players that are now positioning and even repositioning their hybrid story over the last year or so? >> Yeah. So, we're really excited to see a lot of folks in the industry, including all the major public cloud providers, now talking about Hybrid and talking about these types of initiatives that we've been talking about for quite some time. But yeah, it's a little bit different. When we talk about Hybrid Cloud, when we talk about Multicloud, we're talking about being able to run not just in one public cloud and then in an on-premises environment that mirrors that cloud. We're really talking about being able to run across multiple clouds. So having that consistency across, running in, say, Amazon, to Azure, to Google, and then carrying that into your on-premise environments, whether that's on Bare Metal, on VMware, on OpenStack, and then, like I said, out to the edge, right?
So that consistency is important for people who are concerned about how their applications are going to operate in these different environments, because otherwise they'd have to manage those differences themselves. I mean, speaking for Red Hat, right, this is what the company was built on. Twenty years ago, it was all about Linux bringing consistency for enterprise applications running across x86 hardware, right? So regardless of who your OEM vendor was, as long as you were building to the x86 standard and leveraging Linux as a base, Red Hat Enterprise Linux became that same consistent operating environment for applications, which is important for our software vendors, but also, more importantly, for customers themselves as they take those apps into production. >> Yeah, I guess, you know, the last question I have for kind of just the landscape out there. We've been talking for a number of years. When you talk to practitioners, they don't get caught up in the labels that we use in the industry. Do they have a cloud strategy? Yes, most companies have a cloud strategy, and if you ask them is their cloud strategy the same today as it was a quarter ago or a year ago, they say, of course not, everything's changed. We know in today's day and age, what I was doing a month ago is probably very different from what I am doing today. So, I know you've got a survey that was done of enterprise users. I saw it when it came out a month ago, and, you know, there's some good data in there. So, you know, where are we? And what data do you have to share with us on kind of the customer adoption with (mumbles)? >> Yeah, so I think, you know, we put out a survey not too long ago, and we saw that, I think, over 60% of customers were adopting a hybrid cloud strategy exactly as I described, thinking about their applications in terms of an environment that spans multiple cloud infrastructures as well as on-premise footprints. And then, you know, going beyond that, we think that number will grow, based on what we saw in that survey. That just mirrors the conversations that I've had with customers, that many of us here at Red Hat have been having with those same customers over the years, because everybody's in a different spot in terms of their transformation efforts, in terms of their adoption of cloud technologies and what it means for their business. So we need to meet customers where they're at, understand that everybody's at a different spot, and then make sure that we can help them make that transition. And it's really an evolution, as opposed to, I think, what some people in the past might've thought of as a revolution, where all the data centers are going to shut down and everything's going to move all at once. And so helping customers evolve, and that transition, is really what Red Hat is all about. >> Yeah. And so often, Joe, when I talk to some of the vendors out there, when you talk about Hybrid, you talk about Multicloud, it's talking about something you mentioned: it's a box, it's a place, it's, you know, the infrastructure discussion. But when I've been having conversations with a lot of your peers in these interviews for Red Hat Summit, we know that it's the organization and it's the applications that are hugely important as these changes happen. So talk a little bit about that. What's happening to the organization? How are you helping the infrastructure team keep up and the app dev team move forward? >> Yeah, so first, I'll start with that on the technology side, right?
One of the things that has enabled this type of consistency and portability has been sort of the advent of Linux containers as a standard packaging format that can span across all these different (mumbles), right? So we know that Linux runs in all these different footprints, and Linux containers, as a portable packaging format, enable that. And then Kubernetes enables customers to orchestrate containers at scale. So that's really what OpenShift is focused on: delivering an enterprise Kubernetes platform, again, spanning all these environments, that leverages container-based packaging and provides enterprise Kubernetes orchestration and management, to manage in all those environments. What that then also does on the people front is bring infrastructure and operations teams together, right? Because Kubernetes and containers represent agility for both sides. For application developers, it represents the ability to package their application and all their dependencies and know that when they run it in one environment, it will be consistent with how it runs in other environments, so eliminating that problem of, works on my machine, but it doesn't work, you know, in prod or what have you. So it brings consistency for developers. For infrastructure teams, it gives them the ability to basically make decisions around where the best places to run these applications are without having to think about that from a technology perspective, but really from things that should matter more, like cost and convenience to customers and performance and so forth. So, I think we see those teams coming together. That being said, it is an evolution in people and process and culture. So we've done a lot of work. We launched a global transformation office. We had previously launched the Red Hat Open Innovation Labs, and we've done a lot of work with our consulting services and our partners as well, to help with, sort of, the people and process evolutions that need to occur to adopt these types of technologies, as well as to move towards a more cloud-native approach. >> All right. So Joe, one of the announcements made at the show is talking about how OpenShift is working with virtualization. So, I think back to the earliest container days; there was a discussion of, "oh, you know, Docker and containers, it kills VMs." Or, you know, Cloud, of course: some Cloud services run on VMs, others run on containers, they're serverless. So there's a lot of confusion out there as to... >> Yep. >> ...what happened. We know in IT, no technology ever dies; everything's always additive. It's figuring out the right solutions and the right bets. So, help us understand what Red Hat is doing when it comes to virtualization in OpenShift and Kubernetes, and how is your approach different than some of what we've already seen in the marketplace? >> Yeah, so definitely we've seen just explosive adoption of container technology, right, which has driven the OpenShift business and Red Hat's business overall. So, we expect that to continue, right? More applications moving towards that container-based packaging and deployment model and leveraging Kubernetes and OpenShift to manage those environments. That being said, as you mentioned, virtualization has been around for a really long time, right? And, predominantly, most applications today are running virtualized. And so some of them have made the transition to containers, or were built container-native from the start.
But many more are still running in VM-based environments and may never make that switch. So, what we were looking at is, how do we manage this sort of hybrid environment from the application perspective, where you have some applications running in containers and other applications running in VMs? We have platforms like Red Hat OpenStack Platform and Red Hat Virtualization that leverage the KVM hypervisor and Red Hat Enterprise Linux to serve apps running in a VM-based environment. What we did with Kubernetes is, instead, how could we innovate to have convergence on the orchestration and management front? And we leveraged the fact that KVM, you know, our chosen hypervisor, is actually a Linux process that can itself be containerized. And so by running the hypervisor in a container, we can then spawn VMs that can be managed on that same platform as the containers. So what you have in OpenShift Virtualization is the ability to use Kubernetes to manage containerized workloads as well as standard VM-based workloads. And these are full VMs. These aren't micro-VMs or, you know, things like Firecracker or Kata Containers. These are standard VMs that could have, well, Windows guests or Linux guests running inside those VMs. And so it helps you basically manage that type of environment, where you may be moving to containers and a more cloud-native approach, but those containers need to interact or work with applications that are still in a VM-based deployment environment. And we think it's really exciting. We demoed it at the last Red Hat Summit. We're going to talk about it even more here, in terms of how we're going to bring those products to market and enable customers. >> Okay, yeah, Joe, let me make sure I understand this, because, as you said, it is a different approach. So, number one, if I'm moving towards a (mumbles) management solution, this is going to fit natively into what I'm doing. It's not taking some of my traditional management tools and saying, "oh, I also get some visibility into containers." It's more, you know, here's my Kubernetes solution, and just some of those containers happen to be virtualized. Did I get that piece right? >> Yeah, I think it's more like... so we know that Kubernetes is going to be in the environment, because we know that, yeah, people are moving application workloads to standard Linux containers. But we also know that virtual machines are going to still exist in that environment. So you can think about it as, how would we enable Kubernetes to manage a virtual machine in the same way that it manages a Linux container? And what we do there is we actually put the VM inside the container, right? Because the VM, specifically with KVM, is just a Linux process, and that's what a Linux container is: it's a Linux process, right? So you can run the hypervisor and spawn the virtual machines inside of containers. But those virtual machines are just like any other VM that would run in OpenStack or Red Hat Virtualization, or what have you, or, you know, vSphere, for example. So those are traditional virtual machines that are now being managed in a Kubernetes environment. And what we're seeing is sort of this evolution of Kubernetes to take on these new types of workloads. VMs are just one example of something that you can now manage with Kubernetes.
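To make that "VM inside a container, managed by Kubernetes" idea concrete, here is a minimal sketch that asks a cluster running KubeVirt (the upstream project behind OpenShift Virtualization) to create a small VirtualMachine object, using the standard Python Kubernetes client. The manifest fields follow the upstream KubeVirt quick-start examples, but the name, namespace, image and memory values are placeholder assumptions, and the exact schema can vary by KubeVirt version, so treat this as an illustration rather than a supported recipe.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside a pod
api = client.CustomObjectsApi()

# A tiny VirtualMachine definition, modeled on the upstream KubeVirt examples.
# The containerDisk image and the sizes below are placeholder assumptions.
vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},
    "spec": {
        "running": True,  # start the VM as soon as it is created
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    "containerDisk": {
                        "image": "quay.io/kubevirt/cirros-container-disk-demo"
                    },
                }],
            }
        },
    },
}

# VirtualMachine is a custom resource, so it goes through the CustomObjectsApi,
# exactly like any other CRD-backed object in the cluster.
api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm,
)
```

The design point Fernandez describes is visible here: the VM is just another API object alongside Deployments and Pods, so the same kubectl/oc tooling, RBAC and GitOps flows apply to it.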
>> Okay. And help me understand what this means for the app dev and my application portfolio. Because, you know, the original promise of virtualization was, I can just stick my application in a VM and I never need to think about it ever again. And, well, that was super helpful when Windows NT was going end of life. In 2020, we do find that most companies do want to update their applications, and they are talking about, do I refactor them? Do I move them to a microservices architecture? I don't want to have that iceberg of an application that I'm just dragging along slowly into the new world. So... >> Yeah. >> What does this virtualization integration with Kubernetes mean for the app dev and the applications? >> Yeah, sure. So what we see customers doing, what we see the application development teams doing, is modernizing a lot of their existing applications, right? So they're taking traditional monolithic applications, or n-tier applications that may run in a VM-based environment, and they're moving them towards more of a distributed architecture leveraging a microservices-based approach. But that doesn't happen all at once either, right? So oftentimes what you see is your microservices are still connected to VM-based applications. Or maybe you're breaking down a monolithic application: the core is still running in a VM, but some of those business functions have now been carved out and containerized. So you're going to end up in a hybrid environment from the application perspective, in terms of how these applications are packaged and deployed. The question is, what does that mean for your deployment architecture? Does it mean you always have to run a virtualization platform and a container platform together? That's how it's done today, right? OpenShift and Kubernetes run on top of vSphere, they run on top of Amazon and Azure and Google, and on top of OpenStack. But what if you could actually just run Kubernetes directly on Bare Metal and manage those types of workloads? That's really sort of the idea OpenShift Virtualization is based on: let's just manage VMs natively with Kubernetes, in the same way that we manage containers. And then it can facilitate, for the application developer, this evolution of apps that are running in one environment towards apps that are running, essentially, in a hybrid environment, in terms of how they're packaged and deployed. >> Yeah, absolutely. Something I've been hearing for the last year or so is that hybrid deployment, pulling apart an application: sometimes, as you said, the core piece is on premises, and then I might have some of the more transactional pieces happening in the public cloud. So really interesting. So, how long has Red Hat been working on this? KubeVirt is something, you know, I'm familiar with in the CNCF; I believe it has been around for a couple of years. >> Yeah. >> So talk to us about just kind of how long it took to get here, and, with stateful applications now fully supported, what does the overall roadmap look like? >> Yeah, so KubeVirt as an open source project was launched more than two years ago now. As you know, Red Hat really drives all of our development upstream in the open source community. So we launched the KubeVirt project, and we've been collaborating with other vendors and even customers on that. But then, you know, over time, we decided, how do we bring these technologies to market, and which technologies make sense to bring to market? So, KubeVirt is the open source project.
OpenShift, and OpenShift Virtualization, which is what this feature is referred to as commercially, is the product that we then ship and support for running this in production environments. The capabilities, right, so I think those have been evolving as well. Virtual machines have specific requirements in terms of not only how they're deployed and managed, but how they connect to storage, how they connect to networking, how you do things like fencing and live migration and that type of thing. We've been building out those types of capabilities. There's certainly still more to do there. But it's something that we're really excited about, not just from the perspective of running VMs, but even more broadly from the perspective of how Kubernetes is expanding to take on new workloads, right? Because Kubernetes has moved far beyond just running cloud-native applications. Today you can run stateful services in containers; you can run things like AI and machine learning and analytics and IoT-type services. But it hasn't come for free, right? This has come through a lot of hard work in the Kubernetes community, in the various associated communities, the container communities, communities like (mumbles). But it's all kind of trying to leverage that same automation, that same platform, to just do more things. The cool thing is, it'll not just be Red Hat talking about it; you'll see that from a lot of customers that are doing sessions at our summit this year and beyond, talking about what it means to them. >> Yeah, that's great. Always love hearing the practitioner viewpoint. All right, Joe, I want to give you the final word when it comes to this whole space. Things kind of move pretty fast, but also, we remember it as it was when we first saw it. So, tell us what the customers who are kind of walking away from Red Hat Summit 2020 should be looking at and understanding that they might not have thought about if they were looking at Kubernetes a year or two ago. >> Yeah, I think a couple of things. One is, yeah, Kubernetes and this whole container ecosystem is continuing to evolve, continuing to add capabilities and continuing to expand the types of workloads that it can run. Red Hat is right in the center of it. It's all happening in open source, and Red Hat, as a leading contributor to Kubernetes and open source in general, is driving a lot of this innovation. We're working with some great customers and partners, other vendors, who are working side by side with us as well. And I think the most important thing is we understand that it's an evolution for customers, right? This evolution towards moving applications to the public cloud, adopting a hybrid cloud approach; this evolution in terms of expanding the types of workloads and how you run and manage them. And that approach is something that we've always helped customers with, and we're doing that today as they move towards embracing a cloud-native approach. >> All right, well, Joe Fernandez, thank you so much for the updates. Congratulations on the launch of OpenShift Virtualization. I definitely look forward to talking to some of the customers and finding out how it's helping them along their hybrid cloud journey. All right. Lots more coverage from the CUBE at Red Hat Summit. I'm Stu Miniman, and thank you for watching the CUBE.
SUMMARY :
Stu Miniman talks with Joe Fernandes, Vice President and General Manager of Core Cloud Platforms at Red Hat, as part of theCUBE's coverage of Red Hat Summit 2020. Fernandes describes how enterprise customers are adopting an open hybrid cloud strategy, a consistent platform spanning data centers, multiple public clouds and the edge, with Linux containers, Kubernetes and OpenShift providing that consistency, and with over 60% of surveyed customers already pursuing a hybrid approach. The conversation centers on the newly announced OpenShift Virtualization, based on the KubeVirt project, which runs the KVM hypervisor inside containers so that full virtual machines can be managed by Kubernetes alongside containerized workloads, helping customers modernize applications at their own pace.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Joe Fernandez | PERSON | 0.99+ |
Joe Fernandes | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Joe | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Linux | TITLE | 0.99+ |
a month ago | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
a year ago | DATE | 0.99+ |
both sides | QUANTITY | 0.99+ |
one example | QUANTITY | 0.99+ |
vSphere | TITLE | 0.99+ |
Kubernetes | TITLE | 0.99+ |
Red Hat Summit 2020 | EVENT | 0.99+ |
Red Hat Summit | EVENT | 0.98+ |
a year | DATE | 0.98+ |
today | DATE | 0.98+ |
OpenShift | TITLE | 0.98+ |
first | QUANTITY | 0.98+ |
a quarter ago | DATE | 0.98+ |
over 60% | QUANTITY | 0.97+ |
Multicloud | ORGANIZATION | 0.97+ |
Red Hat Virtualization | TITLE | 0.97+ |
Red Hat Enterprise | TITLE | 0.97+ |
one place | QUANTITY | 0.97+ |
Red Hat | TITLE | 0.97+ |
one | QUANTITY | 0.96+ |
Windows | TITLE | 0.96+ |
Firecracker | TITLE | 0.96+ |
this year | DATE | 0.96+ |
One | QUANTITY | 0.95+ |
Red Hat Enterprise Linux | TITLE | 0.95+ |
IBM | ORGANIZATION | 0.93+ |
one environment | QUANTITY | 0.93+ |
two ago | DATE | 0.92+ |
windows NT | TITLE | 0.92+ |
x86 | TITLE | 0.91+ |
Ashesh Badani, Red Hat | Red Hat Summit 2020
>>From around the globe, it's the Cube, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >>Hi, and welcome back to the Cube's coverage of Red Hat Summit 2020. I'm Stu Miniman. This year's event, of course, is happening globally, which means we're talking to Red Hat executives, customers and partners where they are around the globe, and I'm happy to welcome back to the program one of our Cube alumni, Ashesh Badani, who is the Senior Vice President of Cloud Platforms at Red Hat. It's great to see you. >>Yeah, thanks a lot for having me back on. >>Yeah, absolutely. So, you know, the usual wall-to-wall coverage that we do in San Francisco? Well, it's now a global digital event, a little bit of a dispersed architecture for these environments, which reminds me a little bit of your world. So, you know, on the main keynote stage, Paul's up there, the, you know, new CEO, talking about open hybrid cloud, and of course, the big piece of that is, you know, OpenShift and the various products, you know, in the portfolio there. So, ah, we know there's not, you know, big announcements of, you know, launches and the like, but your team and the product portfolio have been going through a lot of changes, a lot of growth, since the last time we connected. So bring us up to speed as to what we should know about. >>Sure, thanks, Stu. Yes, not a huge focus around announcements this summit, especially given everything going on in the world around us today. Ah, but, you know, that being said, we continue our OpenShift journey. We started that, well, you know, many years ago, but in 2015 we had our first release built on Kubernetes, a container-focused platform. Ever since then, you know, we continue to grow and to evolve. We count now over 2,000 customers globally who've trusted the platform, in literally every industry and obviously every geo around the globe. So that's been great to see. And last summit, we actually announced a fairly significant enhancement of the platform with the launch of OpenShift 4, with a big focus around greater manageability, the ability to use operators, which is, you know, a Kubernetes concept to make applications much more manageable, um, you know, when they're being run natively within the platform. We continue to invest there, so there's a new release of the platform, OpenShift 4.4, based on Kubernetes 1.17, being made available to our customers globally. And then, really, there's this notion of over-the-air updates, right, to create a platform that is almost autonomous in nature, you know, that acts more like your mobile phone in the way you can manage and update and upgrade. I think that's a key value proposition that, you know, we're providing to our customers. So we're excited to see that and to be able to share that with you. >>Yeah, so, Ashesh, I want to dig into that a little bit. So one of the discussions we've had in the industry for many years is how much consistency there needs to be across my various environments. We know, you know, Kubernetes is great, but it is not a silver bullet. You know, customers will have clusters, they will have different environments: I have what I do in my data centers, of course, I'm using things in the public clouds, and I might be using different Kubernetes offerings. So, you know, as you said, there's things that Red Hat is doing, but give us a little insight into your customers: how should they be thinking about it? How do they manage it?
One of the new pieces that I want to get into a little bit, of course, from a management standpoint, is ACM, which I know supports OpenShift today but is going to support some of the other Kubernetes options, you know, down the road. So how should customers be thinking about this? How does Red Hat think about managing this ever more complex world? >>Yes, so, Stu, we've been talking about this for several years now, right, with regard to just the kinds of things customers are doing. And let's start with customers, because it's all about, you know, the value for them. So at this year's summit we're announcing some innovation award winners, right? A couple of interesting ones: BMW and Ford. Um, you know, BMW, you know, building its next-generation autonomous driving platform using containers, and then, you know, Ford, with its massive data platform on OpenShift, doing a lot of interesting work with regard to, uh, bringing together its development teams and taking advantage of existing investments in hardware and so on, you know, that are in place, you know, with the platform. But also, increasingly, companies that are, you know, perhaps ones you don't expect. All right, so we've got the Argentine Ministry of Health, we've got a large electricity distribution company, adopting containers, adopting middleware technology, for example, on OpenShift, and getting great value, right? So network alerts when there's an electricity outage going from three minutes to 10 seconds. And so, as you now see more and more customers doing, you know, more and more, if you will, mission-critical activities on these platforms, to your point, your question is a really good one: they've now got clusters running in multiple environments, right, perhaps in their own data center, across multiple clouds, and managing these clusters at scale becomes, you know, more and more critical. And so, you know, we've been doing a bunch of work with regard to this. The team that actually joined us from IBM has been working on this cluster management technology for a while, and it's now part of Red Hat. We're now releasing, in technology preview, Advanced Cluster Management, to address questions around: what does it mean to manage the lifecycle of applications across clusters? How do I monitor and view cluster health, you know, regardless of, you know, where they run? How do I have consistent security and compliance policies across the different clusters? So really excited, right? It is a really interesting technology. It's probably the most advanced cluster management that's on the market. IBM was working on it, you know, well before, you know, the team from there, you know, joined us, and now we're making it much more
What's really important? Um, another piece. Really interesting. Like to dig in a little bit here. Talk about open shift is you know, we talk kubernetes and we're talking container. But there's still a lot of virtualization out there. And then from an application development standpoint, there's You know what? Let's throw everything away and go all serverless on there. So I understand. Open shift. Io is embracing the full world and all of the options out there. So help us walk through how Red Hat maybe is doing things a little bit differently. And of course, we know anything right Does is based on open source. So let's talk about those pieces >>Yes, to super interesting areas for us. Um, one is the work we're doing based on open source project called Kube Vert, and that's part of the CN CF incubating projects. And that that is the notion off bringing virtualization into containers. And what does that mean? Obviously There are huge numbers of workloads running in which machines globally and more more customers want, you know, one control plane, one environment, one abstraction to manage workloads, whether they're running in containers or in IBM, I believe you sort of say, Can we take workloads that are running in these, uh, give, um, based which machines or, uh, VMS running in a VM based environment and then bring them natively on, run them as containers and managed by kubernetes orchestrate across this distributed cluster that we've talked about? I've been extremely powerful, and it's a very modern approach to modernizing existing applications as well as thinking about building new services. And so that's a technology that we're introducing into the platform and trying to see some early customer interest. Um, around. So, >>you know, I've got ah, no, I'm gonna have a breakout with Joe Fernandez toe talk about this a little bit, but you know what a note is you're working on. That is, you're bringing a VM into the container world and what red hat does Well, because you know your background and what red hat does is, you know, from an operating system you're really close to the application. So one of my concerns, you know, from early days of virtualization was well, let's shut things in a VM and leave it there and not make any changes as opposed to What you're describing is let's help modernize things. You know, I saw one of the announcements talking about How do I take job of workloads and bring them into the cloud? There's a project called Marcus. So once again, do I hear you right? You're bringing V M's into the container world with help to move towards that journey, to modernize everything so that we were doing a modern platform, not just saying, Hey, I can manage it with the tool that I was doing before. But that application, that's the important piece of it. >>Yeah, and it's a really good point, you know, We've you know, so much to govern, probably too little time to do it right, because the one that you touched on is really interesting. Project called caucuses right again. As you rightly pointed out, everything that is open source up, and so that's a way for us to say, Look, if we were to think about Java and be able to run that in a cloud native way, right? And be able to run, um, that natively within a container and be orchestrated again by kubernetes. What would that look like? Right, How much could be reduced density? 
How much could be improved performance around those existing job applications taking advantage off all the investments that companies have made but make that available in kubernetes and cloud native world. Right? And so that's what the corpus project is about. I'm seeing a lot of interest, you know, and again, because the open source model right, You don't really have companies that are adopting this, right? So there's I think there's a telecom company based out of Europe that's talking about the work that they're already doing with this. And I already blogged about it, talking about, you know, the value from a performance and use of usability perspective that they're getting with that. And then you got So you couple this idea off. How do I take BMC? Bring them into contempt? Right? Right. Existing workloads. Move that in. Run that native check. Right? Uh, the next one. How do I take existing java workloads and bring them into this modern cloud native Kubernetes space world, you know, making progress with that orchestra check. And then the third area is this notion off several lists, right? Which is, you know, I've got new applications, new services. I want to make sure that they're taking advantage, appropriate resources, but only the exact number of resources that require We do that in a way that's native to kubernetes. Right? So we're been working on implementing a K native based technologies as the foundation as the building blocks, um, off the work we're doing around serving and eventing towards leading. Ah, more confortable several institution, regardless of where you run it across any off your platform prints up. And that will also bring the ability to have functions that made available by really any provider in that same platform. So So if you haven't already to put all the pieces together right that we were thinking about this is the center of gravity is a community space platform that we make fully automated, that we make it very operational, make it easy for different. You know, third party pieces to plug in, writes to sort of make sure that it's in trouble in modular and at the same time that start layering on additional Kim. >>Yeah, I'm a lot of topics. As you said, it's Siachin. I'm glad on the serverless piece we're teasing out because it is complicated. You know, there are some that were just like, Well, from my application developer standpoint, I don't >>need to >>think about all that kubernetes and containers pieces because that's why I love it. Serverless. I just developed to it, and the platform takes care of it. And we would look at this year to go and say, Well, underneath that What is it? Is it containers? And the enter was Well, it could be containers. It depends what the platform is doing. So, you know, from from Red Hat's standpoint, you're saying open shift server lists, you know? Yes, it's kubernetes underneath there. But then I heard you talk about, you know, live aware of it is so, um, I saw there's, you know, a partner of Red Hat. It's in the open source community trigger mesh, which was entering one of the questions I had. You know, when I talk to people about serverless most of the time, it's AWS based stuff, not just lambda lots of other services. You know, I didn't interview with Andy Jassy a few years ago, and he said if I was to rebuild AWS today, everything would be built on serverless. So might some of those have containers and kubernetes under it? Maybe, but Amazon might do their own thing, so they're doing really a connection between that. 
So how does that plug in with what you're doing? Open shift out. All these various open sourced pieces go together. >>Yes, I would expect for us to have partnerships with several startups, right? You know you name, you know, one in our ecosystem. You know, you can imagine as your functions, you know, running on our serverless platform as well as functions provided by any third party, including those that are built and by red hat itself, Uh, you know, for the portal within this platform. Because ultimately, you know, we're building the platform to be operational, to be managed at scale to create greater productively for developments. Right? So for example, one of things we've been working on we are in the area of developer tools. Give the customers ability. Do you have you know, the product that we have is called cordon Ready workspaces. But essentially this notion off, you know, how can we take containers and give work spaces that are easy for remote developers to work with? Great example. Off customer, actually, in India that's been able to rapidly cut down time to go from Dev Productions weeks, you know, introduced because they're using, you know, things like these remote workspaces running in containers. You know, this is based on the eclipse. Ah, Apache, the the CI Project, You know, for this. So this this notion that you know, we're building a platform that can be used by ops teams? Absolutely true, but the same time the idea is, how can we now start thinking about making sure these abstractions are providing are extremely productive for development teams. >>Yeah, it's such an important piece. Last year I got the chance to go to Answerable Fest for the first time, and it was that kind of discussion that was really important, you know, can tools actually help me? Bridge between was traditionally some of those silos that they talked about, You know, the product developer that the Infrastructure and Ops team and the AB Dev teams all get things in their terminology and where they need but common platforms that cut between them. So sounds like similar methodology. We're seeing other piece of the platforms Any other, you know, guidance. You talked about all your customers there. How are they working through? You know, all of these modernizations adopting so many new technologies. Boy, you talked about like Dev ops tooling it still makes my heads. Then when I look at it, some of these charts is all the various tools and pieces that organizations are supposed to help choose and pick. Ah, out of there, they have. So how how is your team helping customers on kind of the organizational side? >>Yes. So we'll do this glass picture. So one is How do you make sure that the platform is working to help these teams? You know, by that? What I mean is, you know, we are introducing this idea and working very closely with our partners globally and on this notion of operators, right, which is every time I want to run data bases. And you know, there's so many different databases. There are, you know, up there, right? No sequel, no sequel. and in a variety of different ones for different use cases. How can you make sure that we make it easy for customers trial and then be able to to deploy them and manage them? Right? So this notion of an operator lifecycle because application much more manageable when they run with data s O. So you make you make it easier for folks to be able to use them. And then the question is, Well, what other? 
If you will advise to help me get that right So off late, you probably heard, you know, be hired a bunch of industry experts and brought them into red hat around this notion of a global transformation and be able to bring that expertise to know whether you know, it's the So you know, Our Deep in Dev Ops and the Dev Ops Handbook are you know, some of the things that industry is a lot like the Phoenix project and, you know, just just in various different you know what's your business and be able to start saying looking at these are told, music and share ideas with you on a couple that with things like open innovation labs that come from red hat as well as you know, similar kinds of offerings from our various partners around the world to help, you know, ease their transition into the >>All right. So final question I have for you, let's go a little bit high level. You know, as you've mentioned you and I have been having this conversation for a number of years last year or so, I've been hearing some of the really big players out there, ones that are, of course, partners of Red Hat. But they say similar things. So you know, whether it's, you know, Microsoft Azure releasing arc. If it's, you know, VM ware, which much of your open ship customers sit on top of it. But now they have, you know, the Project Pacific piece and and do so many of them talk about this, you know, heterogeneous, multi cloud environment. So how should customers be thinking about red hat? Of course. You partner with everyone, but you know, you do tend to do things a little bit different than everybody else. >>Uh, yeah. I hope we do things differently than everyone else. You know, to deliver value to customers, right? So, for example, all the things that we talk about open ship or really is about industry leading. And I think there's a bit of a transformation that's going on a swell right within the way. How Red Hat approaches things. So Sam customers have known Red Hat in the past in many ways for saying, Look, they're giving me an operating system that's, you know, democratizing, if you will. You know what the provider provides, Why I've been given me for all these years. They provided me an application server, right that, you know, uh, it's giving me a better value than what proprietary price. Increasingly, what we're doing with, you know, the work they're doing around, Let's say whether it's open shift or, you know, the next generation which ization that we talked about so on is about how can we help customers fundamentally transform how it is that they were building deploy applications, both in a new cloud native way. That's one of the existing once and what I really want to 0.2 is now. We've got it least a five year history on the open shift platform to look back at you will point out and say here are customers that are running directly on bare metal shears. Why they find, you know, this virtualization solution that you know that we're providing so interesting Here we have customers running in multiple different environments running on open stack running in these multiple private clouds are sorry public clouds on why they want distribute cluster management across all of them. You know, here's the examples that you know we could provide right? You know, here's the work we've done with, you know, whether it's these, you know, government agencies with private enterprises that we've talked to write, you know, receiving innovation awards for the world been doing together. 
And so I think our approach really has been more about, you know, we want to work on innovation that is fundamentally impacting customers, transforming them, meeting them where they are moving the four into the world we're going into. But they're also ensuring that we're taking advantage of all the existing investments that they've made in their skills. Right? So the advantage of, for example, the years off limits expertise that they have and saying How can we use that? Don't move you forward. >>Well, a chef's Thank you so much Absolutely. I know the customers I've talked to at Red Hat talking about not only how they're ready for today, but feel confident that they're ready to tackle the challenges of tomorrow. So thanks so much. Congratulations on all the progress and definitely look forward to seeing you again in the future. >>Likewise. Thanks, Ian Stewart. >>All right, I'm still Minuteman. And much more coverage from Red Hat Summit 2020 as always. Thanks for watching the Cube. >>Yeah, Yeah, yeah, yeah, yeah, yeah.
SUMMARY :
Stu Miniman talks with Ashesh Badani, Senior Vice President of Cloud Platforms at Red Hat, during theCUBE's digital coverage of Red Hat Summit 2020. Badani reviews OpenShift's growth past 2,000 customers, the OpenShift 4.4 release on Kubernetes 1.17 with over-the-air updates, and the technology-preview launch of Advanced Cluster Management for running fleets of clusters across data centers, public clouds and the edge, with innovation award winners such as BMW, Ford and the Argentine Ministry of Health as examples. He also walks through OpenShift Virtualization (KubeVirt) for bringing VMs onto Kubernetes, Quarkus for cloud-native Java, Knative-based serverless, operators, and developer tooling like CodeReady Workspaces, plus the people-and-process side with Open Innovation Labs and Red Hat's global transformation experts.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ford | ORGANIZATION | 0.99+ |
BMW | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Ashesh Badani | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
Vodafone | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Ian Stewart | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Joe Fernandez | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
Last year | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
BMC | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Badani | PERSON | 0.99+ |
tomorrow | DATE | 0.99+ |
Apache | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Java | TITLE | 0.99+ |
three minutes | QUANTITY | 0.99+ |
last year | DATE | 0.98+ |
five year | QUANTITY | 0.98+ |
Red Hat Summit 2020 | EVENT | 0.98+ |
Sam | PERSON | 0.98+ |
Argentine Ministry of Health | ORGANIZATION | 0.98+ |
First | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
Summit 2020 | EVENT | 0.98+ |
10 seconds | QUANTITY | 0.97+ |
One | QUANTITY | 0.97+ |
four | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
first time | QUANTITY | 0.96+ |
over 2000 customers | QUANTITY | 0.96+ |
Ministry of Health | ORGANIZATION | 0.95+ |
one environment | QUANTITY | 0.95+ |
ACM | ORGANIZATION | 0.94+ |
first release | QUANTITY | 0.94+ |
Atlassian | ORGANIZATION | 0.92+ |
Answerable Fest | EVENT | 0.92+ |
few years ago | DATE | 0.92+ |
Paul | PERSON | 0.91+ |
third area | QUANTITY | 0.91+ |
Dev Ops Handbook | TITLE | 0.91+ |
Cube | ORGANIZATION | 0.89+ |
this year | DATE | 0.89+ |
many years ago | DATE | 0.88+ |
Kubernetes | TITLE | 0.87+ |
Kim | PERSON | 0.86+ |
CI Project | ORGANIZATION | 0.84+ |
red hat | TITLE | 0.83+ |
Ops | ORGANIZATION | 0.81+ |
kubernetes | OTHER | 0.8+ |
java | TITLE | 0.79+ |
Red Hat | TITLE | 0.79+ |
1.17 | TITLE | 0.72+ |
2000 | QUANTITY | 0.71+ |
Dev Ops | TITLE | 0.66+ |
Team LPSN, Spain | Technovation World Pitch Summit 2019
>> From Santa Clara, California, it's the Cube, covering Technovation World Pitch Summit 2019, brought to you by SiliconANGLE Media. Now here's Sonia Tagare. >> Hi, and welcome to the Cube. I'm your host, Sonia Tagare, and we're here at Oracle's Agnews campus covering Technovation World Pitch Summit 2019, a pitch competition in which girls from around the world develop mobile apps in order to create positive change in the world. With us today we have team LPSN from Spain. Welcome, and the team members are Paulo Fernandez Rosa's Sandra Cho Manual Gomez, Nouria, Peoria, the CIA, Fernandez and with the beyond Tovar. Welcome to the Cube. >> Thank you. >> So your app is called One and Where. Tell us more about that. >> Well, it's an app that detects anomalies when you go out to walk or run. It's to ensure women's safety, and it obtains your location in real time. And if something happens, for example, if you stop or if you aren't getting nearer to your destination, it calls the emergency contact or the emergency services. >> Wow. And so can you tell us how a user would go through it, step by step? >> Yes. First of all, you need to establish a contact. Then you have two different modes. There's the sport mode, which is for when you, for example, go running, and when you stop, the app detects that anomaly, so it sends you a message, and in case of emergency it goes to the emergency contact. And the other mode is the "take me to" mode. That's for when you, for example, want to go home, and if you don't follow your route, the app sends you an alert, and in case of emergency, it sends a message to your contact. >> Wow, I feel like that could be really useful. Is that a big problem in Spain? >> Yes, it is, actually. >> We saw this problem in our community, and when they gave us the opportunity to try to help in some way, we thought, well, we can try to create this application. And in our country there have been a lot of women murdered and kidnapped, and we thought that it was something very >> very important. >> That's amazing. So how did you all come up with this idea? >> Well, it all began when we heard about the murder of Laura Luelmo. That made us become aware of the magnitude of the problem, so we wanted to do something that would be helpful for us. So we made this >> application. >> Wow. And, um, what problems or struggles did you go through creating this app? >> Well, I think that the worst thing was the time, because we had, like, a really short time to develop this application, because we started in February and we had a deadline in April. So for us, the time was the most difficult part. Also the programming, the coding, but that was because we had to learn coding. So yeah, the time was our most difficult >> part. >> If you get funding, where do you see this app in five years? >> Well, we want to continue developing this app and improving it, because we really need this app. We want to add new languages and also introduce it on iOS, so iPhone users can use it too. And in five years, we would like this app to continue working, but hopefully, maybe, this problem will have disappeared. >> That's great. Um, so tell us more about your experience at Technovation. How did you all meet? And why did you decide to join Technovation? >> So we discovered Technovation in high school.
Our technology teacher showed us the contest, and we decided to join. And we're old friends, so it was, like, easy to work together because we already know each other. So, um, that's the best part. And we really wanted to do something that could be useful for us, so we decided to get started with that idea. >> That's awesome. What's been, like, the best part of the experience so far? >> This trip, actually. Yeah, it is being amazing. Um, it's actually one of the best trips of my life, and we're all having a great time here. >> That's awesome. Um, so, uh, thanks so much for coming on. We really appreciate it. And good luck for tonight. >> Thank you. >> This is team LPSN from Spain. Thanks so much for watching. Stay tuned for more.
SUMMARY :
Host Sonia Tagare talks with team LPSN from Spain at the Technovation World Pitch Summit 2019, held at Oracle's Agnews campus in Santa Clara. The team describes their women's-safety app, which tracks your location in real time while you walk or run, detects anomalies like stopping or straying from your route, and alerts an emergency contact or the emergency services. They discuss building the app in just a few months while learning to code, their plans to add more languages and an iOS version, and what joining Technovation and traveling to the pitch summit has meant to them.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Sonia | PERSON | 0.99+ |
Spain | LOCATION | 0.99+ |
April | DATE | 0.99+ |
February | DATE | 0.99+ |
Cindy | PERSON | 0.99+ |
Gari | PERSON | 0.99+ |
Tech Novation | ORGANIZATION | 0.99+ |
Fernandez | PERSON | 0.99+ |
Santa Clara, California | LOCATION | 0.99+ |
50 years | QUANTITY | 0.99+ |
Wilma | PERSON | 0.99+ |
World Pitch Summit 2019 | EVENT | 0.99+ |
CIA | ORGANIZATION | 0.99+ |
Silicon Angle Media | ORGANIZATION | 0.99+ |
five years | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.98+ |
iPhone | COMMERCIAL_ITEM | 0.98+ |
tonight | DATE | 0.98+ |
Technovation World Pitch Summit 2019 | EVENT | 0.98+ |
Nouria | PERSON | 0.97+ |
Today | DATE | 0.97+ |
Tovar | PERSON | 0.96+ |
Laurel | PERSON | 0.95+ |
Paulo Fernandez Rosa | PERSON | 0.93+ |
iose | TITLE | 0.89+ |
one | QUANTITY | 0.89+ |
Peoria | PERSON | 0.82+ |
Cube | ORGANIZATION | 0.76+ |
first | QUANTITY | 0.75+ |
Sandra Cho | PERSON | 0.68+ |
Agnew | ORGANIZATION | 0.59+ |
Italians | PERSON | 0.49+ |
LPs | ORGANIZATION | 0.49+ |
Gomez | PERSON | 0.4+ |
LPSN | ORGANIZATION | 0.34+ |