
Breaking Analysis: Google's PoV on Confidential Computing


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data, isolating data and apps in a fenced-off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space, where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology and a marketing ploy by cloud providers aimed at calming customers who are cloud phobic. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show. But before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing; it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year, as shown here. And this data is pretty much across the board by industry, by region, by size of company. We dug into it, and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics, are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data in transit has long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system. ARM, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now, the argument against confidential computing is that it narrowly focuses on memory encryption and it doesn't solve the biggest problems in security. Multiple system images, updates, different services and the entire code flow aren't directly addressed by memory encryption. Rather, to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables, and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Branco sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign from memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free.
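As an illustrative aside, not from the broadcast: the gap the whole argument turns on is "data in use." Even when data is encrypted at rest and in transit, an application has to decrypt it into ordinary memory to do anything with it. The minimal Python sketch below shows that window; the key and record value are made-up stand-ins.

```python
# Minimal sketch (illustrative only): data protected at rest and in transit
# still has to exist as plaintext in process memory while it is being used.
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()
f = Fernet(key)

record_at_rest = f.encrypt(b"account=1234, balance=987")  # encrypted on disk
record_in_transit = record_at_rest                         # stays encrypted over TLS

# To index it, analyze it, or detect fraud, the app must decrypt it,
# and the decrypted bytes now sit in ordinary RAM on the host:
record_in_use = f.decrypt(record_in_transit)
print(record_in_use)  # plaintext in memory -- the window confidential computing targets
```

Confidential computing aims to close that last window by encrypting memory against the host itself, which is the subject of the rest of the conversation.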
There has been a lack of standardization and interoperability between different confidential computing approaches. But the Confidential Computing Consortium was established in 2019, ostensibly to accelerate the market and influence standards. Notably, AWS is not part of the consortium, likely because the politics of the consortium were a conundrum for AWS: the base technology defined by the consortium is seen as limiting by AWS. This is my guess, not AWS' words. But I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got a lead with its Annapurna acquisition. It was way ahead with ARM integration, and so it probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the Confidential Computing Consortium is Google, along with many high-profile names, including Aem, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic. Nelly Porter is Head of Product for GCP Confidential Computing and Encryption, and Dr. Patricia Florissi is the Technical Director for the Office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start, and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start. I'm owning a lot of interesting activities in Google, and again, it's security, or infrastructure security, that I usually own. We are talking about encryption, end-to-end encryption, and confidential computing is a part of that portfolio. An additional area that I contribute to, together with my team, for Google and our customers is secure software supply chain, because you need to trust your software. Is it operating in your confidential environment? Having the end-to-end story of whether you believe that your software and your environment are doing what you expect, that's my role. >> Got it. Okay, Patricia? >> Well, I am a Technical Director in the Office of the CTO, OCTO for short, in Google Cloud. And we are a global team; we include former CTOs like myself and senior technologists from large corporations, institutions, and a lot of successful startups as well. And we have two main goals. First, we work side by side with some of our largest, most strategic customers, and we help them solve complex engineering and technical problems. And second, we advise Google and Google Cloud engineering and product management on emerging trends and technologies to guide the trajectory of our business. We are a unique group, I think, because we have created this collaborative culture with our customers. And within OCTO I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that, both of you. Let's get into it. So Nelly, what is confidential computing from Google's perspective? How do you define it? >> Confidential computing is a tool, and it's one of the tools in our toolbox. And confidential computing is a way we help our customers complete this very interesting end-to-end lifecycle of their data. When customers bring their data to the cloud and want to protect it, they protect it as they ingest it into the cloud, and they protect it at rest when they store data in the cloud.
But what was missing for many, many years is the ability for us to continue protecting the data and workloads of our customers when they run them. And again, because data is not brought to the cloud to sit in a huge graveyard, we need to ensure that this data is actually indexed, that there are insights driven and drawn from this data. You have to process this data, and confidential computing is here to help. Now we have end-to-end protection of our customers' data when they bring their workloads and data to the cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain? Do you think it's transformative for customers, and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential computing matters: because at the end of the day, it reduces more and more the customer's trust boundaries and the attack surface. It's about reducing that periphery, the boundary, within which the customer needs to mind trust and safety. And in a way it's a natural progression: you're using encryption to secure and protect data in the same way that we are encrypting data in transit and at rest, and now we are also encrypting data while in use. And among other benefits, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industries. Even though it's, I wouldn't say highly focused on, but very beneficial for, highly regulated industries, it applies to all industries. If you look at financing, for example, where bankers are trying to detect fraud, and specifically double financing, where a customer is trying to get financing on an asset, let's say a boat or a house, and then goes to another bank and gets another loan on that same asset. Now bankers would be able to collaborate and detect fraud while preserving the confidentiality and privacy of the data. >> Interesting, and I want to understand that a little bit more, but I've got to push you a little bit on this, Nelly, if I can, because there's a narrative out there that says confidential computing is a marketing ploy, I talked about this up front, by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption, it doesn't address many other problems, and it is overhyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree with this statement, as you can imagine, Dave. But most importantly, we are mixing multiple concepts, I guess. And exactly as Patricia said, we need to look at the end-to-end story, not just the mechanism of how confidential computing tries to execute on and protect customers' data, and why it's so critically important. Because what confidential computing was able to do, in addition to isolating our tenants in the multi-tenant environments the cloud offers, is to provide additional, stronger isolation; we call it cryptographic isolation. It's why customers will have more trust toward other customers, the tenants running on the same host, but also toward us, because they don't need to worry about threats and malicious attempts to penetrate the environment.
So what confidential computing is helping us offer our customers is stronger isolation between tenants in this multi-tenant environment, but also, incredibly important, stronger isolation of our customers, the tenants, from us. We also write code, we are also software providers, and we also make mistakes or have some zero days, sometimes introduced by us, sometimes introduced by our adversaries. But what I'm trying to say is that by creating this cryptographic layer of isolation between us and our tenants, and among those tenants, we are really providing meaningful security to our customers and eliminating some of the worries that they have running in multi-tenant spaces, or even collaborating together on very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. You know, operator access. Yeah, maybe I trust my cloud provider, but if I can fence off your access, even better; I'll sleep better at night separating the code from the data. Everybody, ARM, Intel, AMD, Nvidia and others, they're all doing it. I wonder, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely. And Dave, the whole idea, for Google and now the industry's way of dealing with confidential computing, is to ensure that three main properties are actually preserved. Customers don't need to change their code. They can operate in those VMs exactly as they would with normal, non-confidential VMs. But to give them this opportunity of lift and shift, of not changing their apps, while performing with very, very low latency and scaling as any cloud can, something that Google actually pioneered in confidential computing, I think we need to open up and explain how this magic was actually done. And as I said, the whole entire system had to change to be able to provide this magic. I would start with this concept of root of trust, where we ensure that this machine, the whole entire host, has an integrity guarantee, meaning nobody changed my code at the lowest level of the system. We introduced this in 2017; it's called Titan. It's our own specific ASIC on every single motherboard that we have, and it ensures that your low-level firmware, your actual system code, your kernel, the most powerful parts of the system, are properly configured and not changed, not tampered with. We do it for everybody, confidential computing included. But for confidential computing, what we had to change is that we bring in AMD, or future silicon vendors, and we have to trust their firmware, their way of dealing with our confidential environments. And that's why we have an obligation to validate the integrity of not only our software and our firmware but also the firmware and software of our vendors, the silicon vendors. So when we boot this machine, as you can see, we validate that the integrity of all of this system is in place. It means nobody touched it, nobody changed it, nobody modified it. But then we have this concept of the AMD Secure Processor. It's a special ASIC-based component that generates a key for every single VM that our customers will run, or every single node in Kubernetes, or every single worker thread in our Hadoop/Spark capability.
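A hedged illustration of the "no code changes" point Nelly is making: requesting a Confidential VM looks like requesting any other Compute Engine VM plus one configuration block, since confidentiality is a property of the VM rather than of the application. The sketch below uses the public Compute Engine REST API field names and the google-api-python-client wiring as I understand them (AMD SEV on an N2D machine type, which is what Google's Confidential VMs used at launch); treat the exact names, image, and machine type as assumptions rather than a verified recipe.

```python
# Illustrative sketch (not from the interview): the only confidential-computing-
# specific piece of this instance definition is confidentialInstanceConfig.
# Everything else is an ordinary VM, and the workload inside it is unchanged.
from googleapiclient import discovery  # pip install google-api-python-client

PROJECT, ZONE = "my-project", "us-central1-a"  # hypothetical values

instance_body = {
    "name": "confidential-demo",
    # Confidential VMs run on AMD EPYC based N2D machine types.
    "machineType": f"zones/{ZONE}/machineTypes/n2d-standard-2",
    # The confidential-computing switch: enable per-VM memory encryption.
    "confidentialInstanceConfig": {"enableConfidentialCompute": True},
    # Live migration isn't supported for these VMs, so terminate on maintenance.
    "scheduling": {"onHostMaintenance": "TERMINATE"},
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-11"
        },
    }],
    # Confidential VMs use the gVNIC virtual NIC.
    "networkInterfaces": [{"network": "global/networks/default", "nicType": "GVNIC"}],
}

compute = discovery.build("compute", "v1")  # assumes application default credentials
operation = (
    compute.instances()
    .insert(project=PROJECT, zone=ZONE, body=instance_body)
    .execute()
)
print("Insert operation:", operation.get("name"))
```

The application running inside the VM is untouched; the lift-and-shift property comes from the platform configuration and the hardware underneath it.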
We offer all of that, and those per-VM keys are not available to us. It's the best case ever in the encryption space, because when we are talking about encryption, the first question that I receive all the time is, "Where's the key? Who will have access to the key?" Because if you have access to the key, then it doesn't matter whether you encrypted it or not. But in the case of confidential computing, and why it's such revolutionary technology, we cloud providers don't have access to the keys. They're sitting in the hardware, and they're fed to the memory controller. And it means that when the hypervisor, which also knows about these wonderful things, says, "I need to get access to the memory of this particular VM," it cannot decrypt the data; it doesn't have access to the key, because those keys are random, ephemeral, and per VM, and most importantly, not exportable from the hardware. And it means you now have this very interesting world where other customers, and we cloud providers, will not be able to get access to your memory. And what we do, again, as you can see, our customers don't need to change their applications. Their VMs run exactly as they should run. And for what you're running in the VM, you actually see your memory in the clear, it's not encrypted. But God forbid somebody tries to do that from outside of my confidential box, no, no, no, you will not be able to do it; now you'll see ciphertext. And that's exactly what the combination of these multiple hardware pieces and software pieces has to do. So the OS is also modified, and the OS is modified in such a way as to provide integrity. It means even the OS that you're running in your VM box is not modifiable, and you as the customer can verify that. But the most interesting thing, I guess, is how to ensure the performance of this environment, because you can imagine, Dave, that this adds overhead, additional time, additional latency. We're able to mitigate all of that by providing incredibly interesting capabilities in the OS itself. So our customers get no changes needed, fantastic performance, and scale, as they would expect from a cloud provider like Google. >> Okay, thank you. Excellent, appreciate that explanation. So you know, again, the narrative on this is, well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance. Key management, as they say, is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is, in addition to, let's go pre-confidential computing days, what are the sort of new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have a full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, the customer cares, and they want to know whether their systems are protected from outside or unauthorized access, and we covered with Nelly that they are. Confidential computing actually ensures that the application and data internals remain secret. The code is actually looking at the data; the data is only decrypted in memory, with a key that is ephemeral, per VM, and generated on demand. Then you have the second point, where you have code and data integrity, and now customers want to know whether their data was corrupted, tampered with, or impacted by outside actors. And what confidential computing ensures is that application internals are not tampered with.
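A hedged aside on the key-handling property Nelly described a moment ago: a toy model, nothing like the real silicon, in which per-VM keys live only inside a secure processor, the guest sees plaintext through it, and anything reading raw memory from outside sees only ciphertext. All class and method names here are invented for illustration.

```python
# Toy model (illustrative only, not the real AMD SEV mechanism): each VM's
# memory is encrypted with a key the "secure processor" never exposes, so the
# hypervisor can read raw memory but only ever sees ciphertext.
from cryptography.fernet import Fernet  # pip install cryptography

class ToySecureProcessor:
    """Generates and holds per-VM keys; there is no call that returns a key."""
    def __init__(self):
        self._keys = {}  # vm_id -> Fernet cipher, kept private

    def create_vm_key(self, vm_id: str) -> None:
        self._keys[vm_id] = Fernet(Fernet.generate_key())

    def write_memory(self, vm_id: str, plaintext: bytes) -> bytes:
        return self._keys[vm_id].encrypt(plaintext)   # what lands in DRAM

    def read_memory(self, vm_id: str, ciphertext: bytes) -> bytes:
        return self._keys[vm_id].decrypt(ciphertext)  # only within the VM's context

sp = ToySecureProcessor()
sp.create_vm_key("vm-1")
dram_contents = sp.write_memory("vm-1", b"customer secrets")

# Inside the VM (going through the secure processor) the memory reads in the clear...
assert sp.read_memory("vm-1", dram_contents) == b"customer secrets"
# ...while anything scraping raw memory from outside only ever sees ciphertext,
# and the key itself is ephemeral, per VM, and never exported.
print(dram_contents[:16], b"...")
```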
So the application, the workload as we call it, that is processing the data has also not been tampered with and preserves its integrity. I would also say that this is all verifiable, so you have attestation, and this attestation actually generates a log trail, and the log trail provides proof that integrity was preserved. And I think it also offers a guarantee of what we call sealing, this idea that the secrets have been preserved and not tampered with: confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned, I think I heard you say, that for applications it's transparent; you don't have to change the application, it just comes for free, essentially. And we showed some various parts of the stack before. I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this ecosystem? Or, maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> And a fantastic question, by the way. And it's a very difficult and definitely complicated world, because to be able to provide these guarantees, a lot of work was actually done by the community. Google very much operates in the open. So again, for our operating system, we are working in the operating-system repositories with the OS vendors to ensure that all the capabilities we need are part of their kernels, are part of their releases, and are available for customers to understand and even explore, if they find it fun to explore a lot of code. We have also modified, together with our silicon vendors, the kernel, the host kernel, to support this capability, and that means working with this community to ensure that all of those patches are there. We also worked with every single silicon vendor, as you've seen, and that's where I feel Google probably contributed quite a bit in this world. We moved our industry, our community, our vendors to understand the value of easy-to-use confidential computing and of removing barriers. And now, I don't know if you noticed, Intel is following the lead and also announcing Trust Domain Extensions, a very similar architecture, and no surprise, it's again a lot of work done with our partners to convince them, work with them, and make this capability available. The same with ARM: this year, actually last year, ARM announced its future design for confidential computing; it's called the Confidential Compute Architecture. And it's also influenced very heavily by similar ideas from Google and the industry overall. So there's a lot of work in the Confidential Computing Consortium that we are doing, for example, simply to mention, to ensure interop, as you mentioned, between the different confidential environments of cloud providers. We want to ensure that they can attest to each other, because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you are sharing your sensitive data, workloads, or secrets with them. So we are coming together as a community, and we have this Attestation SIG, the community-based system that we want to build and influence, and we work with ARM and every other cloud provider to ensure that they can interop. And it means it doesn't matter where confidential workloads are hosted; they can exchange data in a way that is secure, verifiable, and controlled by customers.
And to do it, we need to continue what we are doing: working in the open and contributing our ideas, and the ideas of our partners, toward what we see confidential computing has to become. It has to become a utility. It doesn't need to be so special, but that's what we want it to become. >> Let's talk about, thank you for that explanation, let's talk about data sovereignty, because when you think about data sharing, you think about data sharing across the ecosystem and different regions, and then of course data sovereignty comes up. Typically public policy lags the technology industry, and sometimes that's problematic. I know there's a lot of discussion about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment maybe with the pace of technology. One of the frequent examples is when you delete data, can you actually prove the data is deleted with a hundred percent certainty? You've got to prove that, and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty, and I don't want to give the impression that confidential computing addresses all of it. That's why we want to step back and say, hey, digital sovereignty includes data sovereignty, where we are giving you full control and ownership of the location, encryption, and access to your data. Operational sovereignty, where the goal is to give our Google Cloud customers full visibility and control over the provider's operations, right? So if there are any updates on the hardware or software stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. This is sometimes referred to as survivability: that you can actually survive if you are untethered from the cloud, and that you can use open source. Now, let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. We typically focus on saying, hey, we need to care about data residency. We care where the data resides, because data at rest or in processing typically abides by the regulations of the jurisdiction where the data resides. And others say, hey, let's focus on data protection: we want to ensure the confidentiality, integrity, and availability of the data, and confidential computing is at the heart of that data protection. But there is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here, Dave, it's about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting in firewall protections and logging accesses. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data, and the code.
And that's similar, because with data sovereignty we care about where the data resides and who is operating on it, but the moment that the data is being processed, I need to trust that the processing of the data will abide by user control, by the policies that I put in place for how my data is going to be used. And if you look at a lot of the regulation today, and a lot of the initiatives around the International Data Space Association, IDSA, and Gaia-X, there is a movement of saying the two parties, the provider of the data and the receiver of the data, are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, the data will be used for the purposes that were intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is cryptographically verified to be the workload that was meant to process the data, and that the data will only be used while abiding by the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. I mean, it was a deep dive, I mean brief, but really detailed, so I appreciate that, especially the verification of the enforcement. Last question: I met you two because, as part of my year-end prediction post, you guys sent in some predictions, and I wasn't able to get to them in the predictions post, so I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23, and what does the maturity curve look like this decade, in your opinion? Maybe each of you could give us a brief answer. >> So my prediction, in five to seven years, as I said when I started, it will become a utility. It will become TLS. Again, 10 years ago we couldn't believe that websites would have certificates and we would support encrypted traffic. Now we do, and it's become ubiquitous. That's exactly where confidential computing is heading. I don't know if we are there yet; it will take a few years of maturity for us, but we'll do that. >> Thank you. And Patricia, what's your prediction? >> I would double down on that and say, hey, in the very near future you will not be able to afford not having it. I believe, as digital sovereignty becomes ever more top of mind with sovereign states, and also for multinational organizations and for organizations that want to collaborate with each other, confidential computing will become the norm; it will become the default, if I may say, mode of operation. I like to compare it to today: it is inconceivable, if we talk to young technologists, to think that at some point in history, and I happen to have been alive then, we had data at rest that was not encrypted, data in transit that was not encrypted. And I think it will be inconceivable at some point in the near future to have unencrypted data while in use. >> You know, and plus, I think the beauty of this industry is that because there's so much competition, this essentially comes for free.
I want to thank you both for spending some time on Breaking Analysis; there's so much more we could cover. I hope you'll come back to share the progress that you're making in this area, and we can double-click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much, yeah. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those, as we said, who remain skeptical. The truth probably lies somewhere in between, and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look, as with any new tech, it's important to carefully evaluate the potential benefits and drawbacks, and make informed decisions based on the specific requirements of the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to include confidential computing in their architectures. Competition, in our view, will moderate price hikes, and at the end of the day, this is under-the-covers technology that essentially will come for free, so we'll take it. I want to thank our guests today, Nelly and Patricia from Google. And thanks to Alex Myerson, who's on production and manages the podcast, and Ken Schiffman as well, out of our Boston studio. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hof is our editor-in-chief over at siliconangle.com; he does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts; wherever you listen, just search "Breaking Analysis podcast." I publish each week on wikibon.com and siliconangle.com, where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or DM me at @DVellante, and you can also comment on my LinkedIn posts. Definitely check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data, and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis. (subtle music)
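A closing illustrative aside on Patricia's point about pairing confidential computing with policy enforcement for user control: the usual pattern is that the data owner's key-release service hands out a decryption key only after the requesting workload proves, through attestation, that it is the contract-approved workload running in a confidential environment. The sketch below is schematic; the class, field names, and measurement values are hypothetical and are not drawn from any real Google API.

```python
# Schematic sketch (not from the interview): a data owner's key-release policy
# that gives the dataset decryption key only to an attested, contract-approved
# workload with an approved purpose. All names here are illustrative.
import os
from typing import NamedTuple

class AttestationClaims(NamedTuple):
    verified: bool        # signature and freshness checks on the attestation passed
    measurement: bytes    # digest of the workload actually running in the enclave/VM

class KeyReleasePolicy:
    """Encodes the data-sharing contract: right environment, right workload,
    right purpose -- otherwise no key, and therefore no plaintext data."""

    def __init__(self, approved_measurements: set, allowed_purpose: str):
        self.approved_measurements = approved_measurements
        self.allowed_purpose = allowed_purpose     # e.g. "fraud-detection"
        self._data_key = os.urandom(32)            # key protecting the shared dataset

    def release_key(self, claims: AttestationClaims, requested_purpose: str) -> bytes:
        if requested_purpose != self.allowed_purpose:
            raise PermissionError("purpose not covered by the data-sharing contract")
        if not claims.verified:
            raise PermissionError("workload failed attestation")
        if claims.measurement not in self.approved_measurements:
            raise PermissionError("workload is not the one named in the contract")
        return self._data_key

# Hypothetical usage: only the approved workload, with verified attestation
# claims and the agreed purpose, ever receives the key.
approved = {bytes.fromhex("ab" * 32)}  # placeholder digest pinned in the contract
policy = KeyReleasePolicy(approved, "fraud-detection")
claims = AttestationClaims(verified=True, measurement=bytes.fromhex("ab" * 32))
key = policy.release_key(claims, "fraud-detection")
assert len(key) == 32
```

In this model the contract Patricia mentions becomes executable: if the attested measurement or the stated purpose doesn't match what the data provider agreed to, the key, and therefore the plaintext, is never released.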

Published Date : Feb 10 2023


Show Wrap | CloudNativeSecurityCon 23


 

>> Hey everyone. Welcome back to theCUBE's coverage, day two of CloudNative SecurityCon 23. Lisa Martin here in studio in Palo Alto with John Furrier. John, we've had some great conversations. This was a global event. We had Germany on yesterday. We had the Boston studio. We had folks on the ground in Seattle. A lot of great conversations, a lot of great momentum at this event. What is your number one takeaway from this inaugural event? >> Well, first of all, our coverage with our CUBE alumni experts coming in remotely, this remote event for us. I think this event, as an inaugural event, stood out because, one, it was done very carefully and methodically by the CNCF. I think they didn't want to overplay their hand relative to breaking out from KubeCon. So Kubernetes and CloudNative development have been such a success, and that event and ecosystem are booming, right? So that's the big story: they have the breakout event, and the question was, was it a good call? Was it successful? Would the dog hunt, as they say? In this case, I think the big takeaway is that it was successful by all measures. One, people were enthusiastic and confident that this has the ability to stand on its own and still contribute without taking away from the benefits and growth of Kubernetes, KubeCon, and CloudNativeCon. So that was the key. Hallway conversations, the sessions all curated and developed properly to be different and focused for that reason. So I think the big takeaway is that the CNCF did a good job on how they rolled this out. Again, it was a very intimate event, small; it reminds me of the first KubeCon in Seattle, kind of let's test it out, let's see how it goes. Again, clearly it was successful, and people understood why they're doing it. And as we commented in our earlier segments, this is not something new. Amazon Web Services has re:Invent and re:Inforce, so a lot of parallels there, I see. So I think good call. CNCF did the right thing. I think this has legs. And then as Dave pointed out, Dave Vellante, on our last keynote analysis, the business model of the hackers is better than the business model of the industry. They're making more money, it costs less, so, you know, they're playing offense and the industry is playing defense. That has to change. And as Dave pointed out, we have to make the cost of hacking and breaches and cybersecurity higher so that the business model crashes. And I think that's the strategic imperative. So I think the combination of the realities of the market globally and open source has to go faster. It's good to kind of decouple and be highly cohesive in the focus. So to me that's the big takeaway. And then the other one is that there are a lot more security problems still unresolved. The emphasis on developer productivity is at risk here, if not solved. You saw supply chain software, again, front and center, and then down in the weeds outside of Kubernetes, things like BIND and DNS were brought up. You're seeing the Linux kernel. Really important things have got to be paid attention to. So I think very good call, very good focus. >> I would love for us to be able to, as the months go on, talk to some of the practitioners that actually got to attend. There were 72 sessions, that's a lot of content for a small event. Obviously to your point, very well curated. We did hear from some folks yesterday who were just excited to get the community back together in person. 
To your point, having this dedicated focus on CloudNative security is incredibly important. You talked about, you know, the offense-defense dynamic, the fact that right now the industry needs to be able to pivot from being on defense to being on offense. This is a challenging thing because it is so lucrative for hackers. But this seems to be, from what we've heard in the last couple days, the right community with the right focus to be able to make that pivot. >> Yeah, and I think if you look at the success of Kubernetes, 'cause again we were there with theCUBE at the first KubeCon, the end user stories really drove end user participation. Drove the birth of Kubernetes. Some of these CloudNative early adopters, early pioneers that were using cloud at hyperscale, really set the table for CloudNativeCon. I think you're seeing that here with this CloudNative SecurityCon, where I think we'll see a lot more end user stories because of the security hair-on-fire situations, as we heard from Madrona Ventures. You know, as an investor you have a lot of use cases out there where customers are leaning in, rolling up their sleeves, working with open source. This has to be the driver. So I'm expecting the next level of SecurityCon to be end user focused, much more than vendor focused. Where KubeCon was very end user focused and then attracted all the vendors in, and that grew the industry, I expect a similar pattern here, where end user action will be very high at the beginning and that will essentially be the rising tide for the vendors to then participate. So I expect almost a similar trajectory to KubeCon. >> That's a good path, that it needs to be all about the end users. One of the things I'm curious about is what you heard: what are some of the key factors that are going to move CloudNative security forward? What did you hear the last two days? >> I heard that there's a lot of security problems, and no one wants to kind of brag about this, but there's a lot of under-the-hood stuff that needs to get taken care of. So if automation scales, and we heard that from one of the startups we've just interviewed, if automation and scale continue to happen, and with the business model of the hackers still booming, security has to be refactored quickly, and there's going to be an opportunity structurally to use the cloud for that. So I think it's a good opportunity now to get dedicated focus on fixing things like the DNS stuff, the old-school under-the-hood plumbing, the networking protocols. You're going to start to see this super cloud-like environment emerge where data's involved, everything's happening, and so security has to be reimagined. And I think there's a do-over opportunity for the security industry with CloudNative driving that. And I think this is the big thing that I see as an opportunity, from a story standpoint, from a coverage standpoint: it's a do-over for security. >> One of the things that we heard yesterday is that a pretty high percentage of organizations either don't have a SOC or have a very primitive SOC, which kind of surprised me, because at this day and age the risks are there. We talked about how today's focus and the keynote were a lot about the software supply chain and what's going on there. What did you hear in terms of the appetite for organizations, through the voice of the practitioner, to say, you know what guys, we've got to get going because the hackers are here. 
>> I didn't hear much about that in the coverage 'cause we weren't in the hallways. But from reading the tea leaves and talking to the folks on the ground, I think there's an implication that there's almost unlimited money from customers. So it's very robust. From the data infrastructure stack building that we covered with the angel investor Kane, you're seeing data infrastructure is going to be part of the solution here, 'cause data and security go hand in hand. So everyone's basically got the checkbook wide open; everyone wants to have the answer. And we commented that the co-founder of Palo Alto Networks, who you had on our coverage yesterday, was saying that, you know, there's no real platform, there's a lot of tools out there. People will buy anything. So there's still a huge appetite and spend in security, but the answer is not more tool sprawl. It's more platform, something that enables automation, fixes some of the underlying mechanisms involved, and fixes it fast. So to me I think it's going to be a robust monetary opportunity because of the demand on the business side. So I don't see that changing at all, and I think it's going to accelerate. >> It's a great point in terms of the demand on the business side because, as we said yesterday, the next Log4j is out there. It's not a matter of if this happens again, it's when, it's the extent, it's how frequent; we know that. So organizations all the way up to the board have to be concerned about brand reputation. Nobody wants to be the next big headline in terms of breaches and customer data being given to hackers, and hackers making all this money on that. That has to go all the way up to the board, and there needs to be alignment between the board and the executives at the organization in terms of how they're going to deal with security, and now. This is not a conversation that can wait. >> Yeah, I mean I think the five C's we talked about yesterday: the culture of companies, the cloud as an enabler, you've got clusters of servers and capabilities, Kubernetes clusters, you've got code, and you've got all kinds of, you know, things going on there. Each one has elements that are at risk for hacking, right? So that to me is something that's super important. I think that's why the focus on security is different and important, but it's not going to fork the main event. So that's why I think the spinout, or the new event, is a good call by the CNCF. >> One of the things today that struck me is they're talking a lot about software supply chain, and that's been in the headlines for quite a while now. And a stat that was shared this morning during the keynote just blew my mind: that there was a 742% increase in software supply chain attacks over the last three years. That's during Covid times; that is a massive increase. The threat landscape is just growing so amorphously, but organizations need to help dial that down because their success and the health of the individuals and the end users is at risk. >> Well, Covid is an environment where everyone's kind of working at home. So there was some disruption to infrastructure. Also, when you have change like that, there's opportunities for hackers; they'll arbitrage that big time. But I think in general the landscape is changing. There's no perimeter anymore. It's CloudNative, this is where it is, and people who are moving from old IT to CloudNative, they're at risk. That's why there's tons of ransomware. That's why there's tons of risk. 
There's just hygiene, from hygiene to architecture, and like Nick said from Palo Alto, the co-founder, there's not a lot of architecture in security. So yeah, people have bulked up their security teams, but you're going to start to see much more holistic thinking around redoing security. I think that's the opportunity to propel CloudNative, and I think you'll see a lot more coming out of this. >> Did you hear any specific information on some of the CloudNative projects going on that really excite you, in terms of these are the right people going after the right challenges to solve, in the right direction? >> Well, I saw the sessions, and what jumped out to me at the sessions was that a lot of it is extensions of what we heard at KubeCon, and I think what they want to do is take out the big items and break them out in security. Kubescape was one we just covered. They want to get more sandbox-type stuff into the security side that's very security focused but also plays well with KubeCon. So we'll hear more about how this plays out when we're in Amsterdam, coming up in April for KubeCon, to hear how that ecosystem develops, because I think it'll be kind of a relief to decouple security, 'cause that gives more focus to the stakeholders in KubeCon. There's a lot of issues going on there, and, you know, service meshes and whatnot. So it's a lot of good stuff happening. >> A lot of good stuff happening. One of the things that'll be great about KubeCon is that we always get the voice of the customer. We get vendors coming on with the voice of the customer, talking about, you know, in that case, how they're using Kubernetes to drive the business forward. But it'll be great to be able to pull in some of the security conversations that spin out of CloudNative SecurityCon to understand how those end users are embracing the technology. You brought up, I think, Nir Zuk from Palo Alto Networks. One of the themes there, when Dave and I did their Ignite event in December of '22, was really consolidation. There are so many tools out there that organizations have to wrap their heads around, and they need to be able to have the right enablement content, which this event probably delivered, to figure out how do we consolidate security tools effectively, efficiently, in a way that helps dial down our risk profile, because the risks just seem to keep growing. >> Yeah, and I love the technical nature of all that, and I think this is going to be the continued focus. Chris Aniszczyk, who's the CTO, listed eBPF, which we covered with Liz Rice, as one of the three most important points of the conference, and it's just, it's very nerdy, and that's what's needed. I mean, it's technical. And again, there's no real standards bodies anymore like the old days; developers, I think, are super important to be the arbiters here. And again, what I love about the CNCF is that they're developer focused, and we heard developer first even in security. So you know, this is a sea change, and I think, you know, developers' choice will be the standards body. >> Lisa: Yeah, yeah. >> They decide the future. >> Yeah. >> And I think having the sandboxing and bringing this out will hopefully accelerate more developer choice and self-service. >> You've been talking about kind of putting the developers in the driver's seat, as really being the key decision makers, for a while. Did you hear information over the last couple of days that validates that? >> Yeah, absolutely. It's clearly the fact that they did this was one. 
The other one is that engineering teams and dev teams and script teams, they're blending together. It's not just separate silos, and the ones that are changing their team dynamics, again, back to the culture, are winning. And I think this has to happen. Security has to be embedded everywhere, making it frictionless and providing kind of the guardrails so developers don't slow down. And I think where security has become a drag or an anchor or a blocker, it has been just the configuration of how the organization's handling it. So I think when people recognize that the developers are in charge and they should be driving the application development, you've got to make sure that's secure. And so that's always going to be friction, and I think whoever does it, whoever unlocks that for the developer to go faster, will win. >> Right. Oh, I'm sure that's magic to a developer's ear, the ability to go faster and be able to focus on co-development in a secure fashion. What are some of the things that you're excited about for KubeCon? Here we are in February 2023, and KubeCon is just around the corner in April. What are some of the things that you're excited about based on the groundswell momentum that this first inaugural CloudNative SecurityCon is generating from a community, a culture perspective? >> I think this year's going to be very interesting 'cause we have an economic challenge globally. There's all kinds of geopolitical things happening. I think there's going to be a lot of entrepreneurial activity this year, more than ever. I think you're going to see a lot more innovative projects and ideas hitting the table. I think it's going to be a lot more entrepreneurial just because of the cycle we're in. And also I think the acceleration of mainstream deployments out of the CNCF's main event, KubeCon, will happen. You'll see a lot more successes, scale, more clarity on where the security holes are or aren't, where the benefits are. I think containers and microservices are continuing to surge. I think the cloud-scale hyperscalers, Amazon, Azure, Google, will be more aggressive. I think AI will be a big theme this year. I think you can see how data is going to affect some of the innovation thinking. I'm really excited about the data infrastructure because it powers a lot of things in the Cloud. So I think the Amazon Web Services and Azure next-level gen clouds will impact what happens in the CloudNative foundation. >> Did you have any conversations yesterday or today with respect to AI and security? Was that a focus of anybody's? Talk to me about that. >> Well, I didn't hear any sessions on AI, but we saw some demos on stage. But they're teasing out that this is an augmentation to their mission, right? So I think a lot of people are looking at AI as, again, like I always said, there's the naysayers who think it's kind of a gimmick or nothing to see here, and then some are just blown away. I think the people who are alpha geeks in the industry connect the dots and understand that AI is going to be an accelerant to a lot of heavy lifting that was either manual, you know, hard-to-do things that were boring, or muck, as they say. I think that's going to be where you'll see the AI stories, where it's going to accelerate either ways to make security better or make developers more confident and productive. >> Or both. >> Yeah. So definitely AI will be part of it. Yeah, definitely. >> One of the things too that I'm wondering, you know, we talk about CloudNative and the goal of it, the importance of it. 
Do you think that this event, in terms of what we were able to see, obviously being remote, the event going on in Seattle, us being here in Palo Alto and Boston, and guests on from Seattle and Germany and all over, did you hear the validation for why CloudNative security, why CloudNative, is important for organizations, whether it's a bank or a hospital or a retailer? Is that validation clear and present? >> Yeah, absolutely. I think it was implied. I don't think anyone was trying to debate that. I think this conference was more of, it's assumed, and they were really trying to push the ability to make security less defensive, more offensive, and more accelerated into solving the problems of the businesses that are out there. So clearly the CloudNative community understands where the security challenges are and where they're emerging. So having a dedicated event will help address that. And they've got great co-chairs too that put it together. So I think that's very positive. >> Yeah. Do you think, is it possible, I mean, like you said several times today so eloquently, the industry's on the defense when it comes to security and the hackers are on the offense. Is it really possible to make that switch, or at least get some balance? As technology advances and industry gets to take advantage of that, so do the hackers. Is that balance achievable? >> Absolutely. I mean, I think it's totally achievable. The question's going to be, what's the environment going to be like? And as context to understanding whether it's viable or not, just go back 13 years. I remember in 2010 Amazon was viewed as an unsecure environment. Everyone's saying, "Oh, the cloud is not secure." And I remember interviewing Steve Schmidt at AWS, and we discussed specifically how Amazon Cloud was being leveraged by hackers. They made it more complex for the hackers. And he said, "This is just the beginning." It's kind of like barbed wire on a fence: it doesn't make it impossible to climb, but far fewer people get over it. And so since then, what's happened is the Cloud has become more secure than on-premises, for a lot of reasons, you know, personnel reasons, culture reasons, not keeping up with patches, on-premises just becoming more and more insecure. So that to me means that the script can be flipped. >> Yeah. And I think with CloudNative they can build in automation and code to solve some of these problems and make it more complex for the hacker. >> Lisa: Yes. >> And increase the cost. >> Yeah, exactly. Make it more complex, increase the cost. That'll be an interesting journey to follow. So John, here we are, early February 2023, theCUBE starting out strong as always. What year are we in, 12? Year 12? >> 13th year. >> 13! What's next for theCUBE? What's coming up that excites you? >> Well, we're going to do a lot more events. We've got theCUBE in studio, what I call theCUBE Center as kind of an internal code word, but this is more about getting the word out that we can cover events remotely as events are starting to change with hybrid; digital is going to be a big part of that. So I think you're going to see a lot more CUBE on location. We're going to still do theCUBE and have theCUBE cover events from the studio to get deeper perspective, because we can then bring people in remotely through our studio team. We can bring our CUBE alumni in. We have a corpus of content and experts to bring to the table. So I think the coverage will be increased. 
The expertise and data will be flowing through theCUBE, and so theCUBE Center, theCUBE Studio >> Lisa: Love it. >> will be an integral part of our coverage. >> I love that. And we have such great conversations with guests in person, but also virtually, digitally as well. We still get the voices of the practitioners and the customers and the vendors and the partner ecosystem, loud and clear through theCUBE megaphone, as I would say. >> And of course getting the clips out there, getting the highlights. >> Yeah. >> Getting more stories. No story's too small for theCUBE. We can make it easy to get the best content. >> The best content. John, it's been fun covering CloudNative SecurityCon with you. And Dave and our guests, thank you so much for the opportunity, and looking forward to the next event. >> John: All right. We'll see you in Amsterdam. >> Yeah, I'll be there. We want to thank you so much for watching theCUBE's two-day coverage of CloudNative SecurityCon 23. We're live in Palo Alto. You are live wherever you are, and we appreciate your time and your view of this event. For John Furrier, Dave Vellante, I'm Lisa Martin. Thanks for watching, guys. We'll see you at the next show.

Published Date : Feb 3 2023


Day 1 Keynote Analysis | CloudNativeSecurityCon 23


 

(upbeat music) >> Hey everyone and welcome to theCUBE's coverage day one of CloudNativeSecurityCon '23. Lisa Martin here with John Furrier and Dave Vellante. Dave and John, great to have you guys on the program. This is interesting. This is the first inaugural CloudNativeSecurityCon. Formally part of KubeCon, now a separate event here happening in Seattle over the next couple of days. John, I wanted to get your take on, your thoughts on this being a standalone event, the community, the impact. >> Well, this inaugural event, which is great, we love it, we want to cover all inaugural events because you never know, there might not be one next year. So we were here if it happens, we're here at creation. But I think this is a good move for the CNCF and the Linux Foundation as security becomes so important and there's so many issues to resolve that will influence many other things. Developers, machine learning, data as code, supply chain codes. So I think KubeCon, Kubernetes conference and CloudNativeCon, is all about cloud native developers. And it's a huge event and there's so much there. There's containers, there's microservices, all that infrastructure's code, the DevSecOps on that side, there's enough there and it's a huge ecosystem. Pulling it as a separate event is a first move for them. And I think there's a toe in the water kind of vibe here. Testing the waters a little bit on, does this have legs? How is it organized? Looks like they took their time, thought it out extremely well about how to craft it. And so I think this is the beginning of what will probably be a seminal event for the open source community. So let's listen to the clip from Priyanka Sharma who's a CUBE alumni and executive director of the CNCF. This is kind of a teaser- >> We will tackle issues of security together here and further on. We'll share our experiences, successes, perhaps more importantly, failures, and help with the collecting of understanding. We'll create solutions. That's right. The practitioners are leading the way. Having conversations that you need to have. That's all of you. This conference today and tomorrow is packed with 72 sessions for all levels of technologists to reflect the bottoms up, developer first nature of the conference. The co-chairs have selected these sessions and they are true blue practitioners. >> And that's a great clip right there. If you read between the lines, what she's saying there, let's unpack this. Solutions, we're going to fail, we're going to get better. Linux, the culture of iterating. But practitioners, the mention of practitioners, that was very key. Global community, 72 sessions, co-chairs, Liz Rice and experts that are crafting this program. It seems like very similar to what AWS has done with re:Invent as their core show. And then they have re:Inforce which is their cloud native security, Amazon security show. There's enough there, so to me, practitioners, that speaks to the urgency of cloud native security. So to me, I think this is the first move, and again, testing the water. I like the vibe. I think the practitioner angle is relevant. It's very nerdy, so I think this is going to have some legs. >> Yeah, the other key phrase Priyanka mentioned is bottoms up. And John, at our predictions breaking analysis, I asked you to make a prediction about events. And I think you've nailed it. You said, "Look, we're going to have many more events, but they're going to be smaller." Most large events are going to get smaller. 
AWS is obviously the exception, but a lot of events like this, 500, 700, 1,000 people, that is really targeted. So instead of taking a big, giant event where there's events within the event, this is going to be really targeted, really intimate and focused. And that's exactly what this is. I think your prediction nailed it. >> Well, Dave, we call it the event operating system: really cohesive events connected together but decoupled, and I think the Linux Foundation does an amazing job of stringing these events together to have community as the focus. And I think the key to these events in the future is having, again, targeted content for distinct user groups in these communities so they can be highly cohesive, because they've got to be productive. And again, if you try to have a broad, big event, no one's happy. Everyone's underserved. So I think there's an industry concept and then there's pieces tied together. And I think this is going to be a very focused event, but I think it's going to grow very fast. >> 72 sessions, that's a lot of content for this small event that the practitioners are going to have a lot of opportunity to learn from. Do you guys, John, start with you and then Dave, do you think it's about time? You mentioned, John, they're dipping their toe in the water. We'll see how this goes. Do you think it's about time that we have this dedicated focus out of this community on cloud native security?
So we hear a lot about shift left, we're hearing about protecting the runtime and the ops getting much more involved and helping them do their jobs, because the cloud itself has brought a lot to the table. It's like the first line of defense, but then you've really got a lot to worry about from a software-defined perspective. And it's a complicated situation. Yes, there's less hardware; yes, we can rely on the cloud; but culturally you've got a lot more people that have to work together, have to share data. And you want to remove the blockers, to use an Amazon term. And the way you do that is, as we've talked about many times on theCUBE, a do-over: you've got to really rethink the way in which you approach security, and it starts with culture and team. >> Well, I would call it the five C's of security. Culture, you mentioned that, that's a good C. You've got cloud, tons of issues involved in cloud. You've got access issues, identity. You've got clusters, you've got Kubernetes clusters. And then you've got containers, the fourth C. And then finally there's the code itself, the supply chain. So all areas of cloud native, if you take out culture, it's cloud, cluster, container, and code, all have levels of security risks and new things in there that need to be addressed. So there's plenty of work to get done, for sure. And again, this is developer first, bottoms up, but that's where the change comes in, Dave, from a security standpoint, you always point this out. Bottoms up and then middle out for change. But absolutely, the imperative is there today: the business impact is real and it's urgent, and you've got to pedal as fast as you can here, so I think this is going to have legs. We'll see how it goes.
I took some notes here from some of the keynote you heard. Security and education, training and team structure. Detection, incidents that are happening, and how do you respond to that architecture. Identity, isolation, supply chain, and governance and compliance. These are all real things. This is not like hand-waving issues. They're mainstream and they're urgent. Literally the houses are on fire here with the enterprise, so this is going to be very, very important. >> Lisa: That's a great point. >> Some of the other things Priyanka mentioned, exposed edges and nodes. So just when you think we're starting to solve the problem, you got IOT, security's not a one and done task. We've been talking about culture. No person is an island. It's $188 billion business. Cloud native is growing at 27% a year, which just underscores the challenges, and bottom line, practitioners are leading the way. >> Last question for you guys. What are you hoping those practitioners get out of this event, this inaugural event, John? >> Well first of all, I think this inaugural event's going to be for them, but also we at theCUBE are going to be doing a lot more security events. RSA's coming up, we're going to be at re:Inforce, we're obviously going to be covering this event. We've got Black Hat, a variety of other events. We'll probably have our own security events really focused on some key areas. So I think the thing that people are going to walk away from this event is that paying attention to these security events are going to be more than just an industry thing. I think you're going to start to see group gatherings or groups convening virtually and physically around core issues. And I think you're going to start to see a community accelerate around cloud native and open source specifically to help teams get faster and better at what they do. So I think the big walkaway for the customers and the practitioners here is that there's a call to arms happening and this is, again, another signal that it's worth breaking out from the core event, but being tied to it, I think that's a good call and I think it's a well good architecture from a CNCF standpoint and a worthy effort, so I give it a thumbs up. We still don't know what it's going to look like. We'll see what day two looks like, but it seems to be experts, practitioners, deep tech, enabling technologies. These are things that tend to be good things to hear when you're at an event. I'll say the business imperative is obvious. >> The purpose of an event like this, and it aligns with theCUBE's mission, is to educate and inspire business technology pros to action. We do it in theCUBE with free content. Obviously this event is a for-pay event, but they are delivering some real value to the community that they can take back to their organizations to make change. And that's what it's all about. >> Yep, that is what it's all about. I'm looking forward to seeing over as the months unfold, the impact that this event has on the community and the impact the community has on this event going forward, and really the adoption of cloud native security. Guys, great to have you during this keynote analysis. Looking forward to hearing the conversations that we have on theCUBE today. Thanks so much for joining. And for my guests, for my co-hosts, John Furrier and Dave Vellante. I'm Lisa Martin. You're watching theCUBE's day one coverage of CloudNativeSecurityCon '23. Stick around, we got great content on theCUBE coming up. (upbeat music)

Published Date : Feb 2 2023


Liz Rice, Isovalent | CloudNativeSecurityCon 23


 

(upbeat music) >> Hello, everyone, from Palo Alto, Lisa Martin here. This is The Cube's coverage of CloudNativeSecurityCon, the inaugural event. I'm here with John Furrier in studio. In Boston, Dave Vellante joins us, and our guest, Liz Rice, one of our alumni, is joining us from Seattle. Great to have everyone here. Liz is the Chief Open Source officer at Isovalent. She's also the Emeritus Chair Technical Oversight Committee at CNCF, and a co-chair of this new event. Everyone, welcome Liz. Great to have you back on theCUBE. Thanks so much for joining us today. >> Thanks so much for having me, pleasure. >> So CloudNativeSecurityCon. This is the inaugural event, Liz, this used to be part of KubeCon, it's now its own event in its first year. Talk to us about the importance of having it as its own event from a security perspective, what's going on? Give us your opinions there. >> Yeah, I think security was becoming so- at such an important part of the conversation at KubeCon, CloudNativeCon, and the TAG security, who were organizing the co-located Cloud Native Security Day which then turned into a two day event. They were doing this amazing job, and there was so much content and so much activity and so much interest that it made sense to say "Actually this could stand alone as a dedicated event and really dedicate, you know, all the time and resources of running a full conference, just thinking about cloud native security." And I think that's proven to be true. There's plenty of really interesting talks that we're going to see. Things like a capture the flag. There's all sorts of really good things going on this week. >> Liz, great to see you, and Dave, great to see you in Boston Lisa, great intro. Liz, you've been a CUBE alumni. You've been a great contributor to our program, and being part of our team, kind of extracting that signal from the CNCF cloud native world KubeCon. This event really kind of to me is a watershed moment, because it highlights not only security as a standalone discussion event, but it's also synergistic with KubeCon. And, as co-chair, take us through the thought process on the sessions, the experts, it's got a practitioner vibe there. So we heard from Priyanka early on, bottoms up, developer first. You know KubeCon's shift left was big momentum. This seems to be a breakout of very focused security. Can you share the rationale and the thoughts behind how this is emerging, and how you see this developing? I know it's kind of a small event, kind of testing the waters it seems, but this is really a directional shift. Can you share your thoughts? >> Yeah I'm just, there's just so many different angles that you can consider security. You know, we are seeing a lot of conversations about supply chain security, but there's also runtime security. I'm really excited about eBPF tooling. There's also this opportunity to talk about how do we educate people about security, and how do security practitioners get involved in cloud native, and how do cloud native folks learn about the security concepts that they need to keep their deployments secure. So there's lots of different groups of people who I think maybe at a KubeCon, KubeCon is so wide, it's such a diverse range of topics. If you really just want to focus in, drill down on what do I need to do to run Kubernetes and cloud native applications securely, let's have a really focused event, and just drill down into all the different aspects of that. And I think that's great. 
It brings the right people together, the practitioners, the experts, the vendors; you know, everyone can be here, and we can find each other at a smaller event. We are not spread out amongst the thousands of people that would attend a KubeCon. >> It's interesting, Dave, you know, when we were talking, you know, we're going to bring you in real quick, because AWS, which I think is the bellwether for, you know, cloud computing, now has two main shows, AWS re:Invent and re:Inforce. Security, again, broken out there. You see the classic security events, RSA, Black Hat, you know, those are the, kind of, the industry kind of mainstream security, very wide. But you're starting to see the cloud native developer first with both security and cloud native, kind of, really growing so fast. This is a major trend for a lot of the ecosystem. >> You know, and you hear, when you mention those other conferences, John, you hear a lot about, you know, shift left. There's a little bit of lip service there, and we heard today way more than lip service. I mean deep practitioner-level conversations, and of course the runtime as well. Liz, you spent a lot of time obviously in your keynote on eBPF, and I wonder if you could share with the audience, you know, why you're so excited about that. What makes it a more effective tool compared to other traditional methods? I mean, it sounds like it simplifies things. You talked about instrumenting nodes versus workloads. Can you explain that in a little bit more detail? >> Yeah, so with eBPF programs, we can load programs dynamically into the kernel, and we can attach them to all kinds of different events that could be happening anywhere on that virtual machine. And if you have the right knowledge about where to hook into, you can observe network events, you can observe file access events, you can observe pretty much anything that's interesting from a security perspective. And because eBPF programs are living in the kernel, there's only one kernel shared amongst all of the applications that are running on that particular machine. So you no longer have to instrument each individual application, or each individual pod. There's no more need to inject sidecars. We can apply eBPF-based tooling on a per-node basis, which just makes things operationally more straightforward, but it's also extremely performant. We can hook these programs into events; they're typically very lightweight, small programs, kind of emitting an event, making a decision about whether to drop a packet, making a decision about whether to allow file access, things of that nature. They're super fast, there's no need to transition between kernel space and user space, which is usually quite a costly operation from a performance perspective. So eBPF makes it really, you know, it's taking the security tooling, and other forms of tooling, networking and observability. We can take these tools into the kernel, and it's really efficient there. >> So Liz- >> So, if I may, one, just one quick follow up. You gave kind of a space age example (laughs) in your keynote. When, do you think a year from now we'll be able to see, sort of, real world examples in action? How far away are we? >> Well, some of that is already pretty widely deployed. I mean, in my keynote I was talking about Cilium. Cilium is adopted by hundreds of really big scale deployments. You know, the USERS file is full of household names who've been using Cilium. And as part of that they will be using network policies.
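(For readers who have not written a Kubernetes network policy, the sketch below is a minimal, illustrative example, not taken from the interview, of applying one programmatically. It assumes the official kubernetes Python client, a cluster reachable from the local kubeconfig, and a hypothetical namespace named "demo".)

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes a reachable cluster).
config.load_kube_config()

# A default-deny ingress policy: the empty pod selector matches every pod
# in the namespace, and listing "Ingress" with no ingress rules means no
# inbound traffic is allowed until more specific policies open it up.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),
        policy_types=["Ingress"],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="demo", body=policy
)
print("Applied default-deny ingress policy to namespace 'demo'")
```

Projects like Cilium enforce policies of this shape with eBPF programs in the kernel rather than with traditional iptables rules, which is part of the efficiency argument made above.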
And I showed some visualizations this morning of network policy, but again, network policy has been around, pretty much since the early days of Kubernetes. It can be quite fiddly to get it right, but there are plenty of people who are using it at scale today. And then we were also looking at some runtime security detections, seeing things like, in my example, exfiltrating the plans to the Death Star, you know, looking for suspicious executables. And again, that's a little bit, it's a bit newer, but we do have people running that in production today, proving that it really does work, and that eBPF is a scalable technology. It's, I've been fascinated by eBPF for years, and it's really amazing to see it being used in the real world now. >> So Liz, you're a maintainer on the Cilium project. Talk about the use of eBPF in the Cilium project. How is it contributing to cloud native security, and really helping to change the dials on that from an efficiency, from a performance perspective, as well as a, what's in it for me as a business perspective? >> So Cilium is probably best known as a networking plugin for Kubernetes. It, when you are running Kubernetes, you have to make a decision about some networking plugin that you're going to use. And Cilium is, it's an incubating project in the CNCF. It's the most mature of the different CNIs that's in the CNCF at the moment. As I say, very widely deployed. And right from day one, it was based on eBPF. And in fact some of the people who contribute to the eBPF platform within the kernel, are also working on the Cilium project. They've been kind of developed hand in hand for the last six, seven years. So really being able to bring some of that networking capability, it required changes in the kernel that have been put in place several years ago, so that now we can build these amazing tools for Kubernetes operators. So we are using eBPF to make the networking stack for Kubernetes and cloud native really efficient. We can bypass some of the parts of the network stack that aren't necessarily required in a cloud native deployment. We can use it to make these incredibly fast decisions about network policy. And we also have a sub-project called Tetragon, which is a newer part of the Cilium family which uses eBPF to observe these runtime events. The things like people opening a file, or changing the permissions on a file, or making a socket connection. All of these things that as a security engineer you are interested in. Who is running executables who is making network connections, who's accessing files, all of these operations are things that we can observe with Cilium Tetragon. >> I mean it's exciting. We've chatted in the past about that eBPF extended Berkeley Packet Filter, which is about the Linux kernel. And I bring that up Liz, because I think this is the trend I'm trying to understand with this event. It's, I hear bottoms up developer, developer first. It feels like it's an under the hood, infrastructure, security geek fest for practitioners, because Brian, in his keynote, mentioned BIND in reference the late Dan Kaminsky, who was, obviously found that error in BIND at the, in DNS. He mentioned DNS. There's a lot of things that's evolving at the silicone, kernel, kind of root levels of our infrastructure. This seems to be a major shift in focus and rightfully so. 
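(As a rough illustration of the kind of kernel-level observation described here, and explicitly not Tetragon itself or anything from the interview, the sketch below uses the BCC toolkit to attach a tiny eBPF program to the execve system call, so every program execution on the node is reported once, regardless of which pod or container triggered it. It assumes a Linux host with BCC installed and root privileges.)

```python
from bcc import BPF  # BCC toolkit; requires root and kernel headers

# Kernel-side eBPF program: fires on every execve syscall on this node.
prog = r"""
int trace_execve(void *ctx) {
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=prog)
# Attach the program to the architecture-specific execve syscall symbol.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_execve")

print("Tracing process executions node-wide... Ctrl-C to stop")
# Stream the kernel trace pipe; each line corresponds to one new process.
b.trace_print()
```

One small program, loaded once per node, sees every workload, which is the operational simplification (no sidecars, no per-pod agents) described above.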
Is that something that you guys talk about, or is that coincidence, or am I just overthinking this point in terms of how nerdy it's getting in terms of the importance of, you know, getting down to the low level aspects of protecting everything. And as we heard also the quote was no software secure. (Liz chuckles) So that's up and down the stack of the, kind of the old model. What's your thoughts and reaction to that? >> Yeah, I mean I think a lot of folks who get into security really are interested in these kind of details. You know, you see write-ups of exploits and they, you know, they're quite often really involved, and really require understanding these very deep detailed technical levels. So a lot of us can really geek out about the details of that. The flip side of that is that as an application developer, you know, as- if you are working for a bank, working for a media company, you're writing applications, you shouldn't have to be worried about what's happening at the kernel level. This might be kind of geeky interesting stuff, but really, operationally, it should be taken care of for you. You've got your work cut out building business value in applications. So I think there's this interesting, kind of dual track going on almost, if you like, of the people who really want to get involved in those nitty gritty details, and understand how the underlying, you know, kernel level exploits maybe working. But then how do we make that really easy for people who are running clusters to, I mean like you said, nothing is ever secure, but trying to make things as secure as they can be easily, and make things visual, make things accessible, make things, make it easy to check whether or not you are compliant with whatever regulations you need to be compliant with. That kind of focus on making things usable for the platform team, for the application developers who deliver apps on the platform, that's the important (indistinct)- >> I noticed that the word expert was mentioned, I mentioned earlier with Priyanka. Was there a rationale on the 72 sessions, was there thinking around it or was it kind of like, these are urgent areas, they're obvious low hanging fruit. Was there, take us through the selection process of, or was it just, let's get 72 sessions going to get this (Liz laughs) thing moving? >> No, we did think quite carefully about how we wanted to, what the different focus areas we wanted to include. So we wanted to make sure that we were including things like governance and compliance, and that we talk about not just supply chain, which is clearly a very hot topic at the moment, but also to talk about, you know, threat detection, runtime security. And also really importantly, we wanted to have space to talk about education, to talk about how people can get involved. Because maybe when we talk about all these details, and we get really technical, maybe that's, you know, a bit scary for people who are new into the cloud native security space. We want to make sure that there are tracks and content that are accessible for newcomers to get involved. 'Cause, you know, given time they'll be just as excited about diving into those kind of kernel level details. But everybody needs a place to start, and we wanted to make sure there were conversations about how to get started in security, how to educate other members of your team in your organization about security. So hopefully there's something for everyone. >> That education piece- >> Liz, what's the- >> Oh sorry, Dave. >> What the buzz on on AI? 
We heard Dan talk about, you know, chatGPT, using it to automate spear phishing. There's always been this tension between security and speed to market, but CISOs are saying, "Hey we're going to a zero trust architecture and that's helping us move faster." Will, in your, is the talk on the floor, AI is going to slow us down a little bit until we figure it out? Or is it actually going to be used as an offensive defensive tool if I can use that angle? >> Yeah, I think all of the above. I actually had an interesting chat this morning. I was talking with Andy Martin from Control Plane, and we were talking about the risk of AI generated code that attempts to replicate what open source libraries already do. So rather than using an existing open source package, an organization might think, "Well, I'll just have my own version, and I'll have an AI write it for me." And I don't, you know, I'm not a lawyer so I dunno what the intellectual property implications of this will be, but imagine companies are just going, "Well you know, write me an SSL library." And that seems terrifying from a security perspective, 'cause there could be all sorts of very slightly different AI generated libraries that pick up the same vulnerabilities that exist in open source code. So, I think we're going to go through a pretty interesting period of vulnerabilities being found in AI generated code that look familiar, and we'll be thinking "Haven't we seen these vulnerabilities before? Yeah, we did, but they were previously in handcrafted code and now we'll see the same things being generated by AI." I mean, in the same way that if you look at an AI generated picture and it's got I don't know, extra fingers, or, you know, extra ears or something that, (Dave laughs) AI does make mistakes. >> So Liz, you talked about the education, the enablement, the 72 sessions, the importance of CloudNativeSecurityCon being its own event this year. What are your hopes and dreams for the practitioners to be able to learn from this event? How do you see the event as really supporting the growth, the development of the cloud native security community as a whole? >> Yeah, I think it's really important that we think of it as a Cloud Native Security community. You know, there are lots of interesting sort of hacker community security related community. Cloud native has been very community focused for a long time, and we really saw, particularly through the tag, the security tag, that there was this growing group of people who were, really wanted to work at that intersection between security and cloud native. And yeah, I think things are going really well this week so far, So I hope this is, you know, the first of many additions of this conference. I think it will also be interesting to see how the balance between a smaller, more focused event, compared to the giant KubeCon and cloud native cons. I, you know, I think there's space for both things, but whether or not there will be other smaller focus areas that want to stand alone and justify being able to stand alone as their own separate conferences, it speaks to the growth of cloud native in general that this is worthwhile doing. >> Yeah. >> It is, and what also speaks to, it reminds me of our tagline here at theCUBE, being able to extract the signal from the noise. 
Having this event as a standalone, being able to extract the value in it from a security perspective, that those practitioners and the community at large is going to be able to glean from these conversations is something that will be important, that we'll be keeping our eyes on. >> Absolutely. Makes sense for me, yes. >> Yeah, and I think, you know, one of the things, Lisa, that I want to get in, and if you don't mind asking Dave his thoughts, because he just did a breaking analysis on the security landscape. And Dave, you know, as Liz talking about some of these root level things, we talk about silicon advances, powering machine learning, we've been covering a lot of that. You've been covering the general security industry. We got RSA coming up reinforced with AWS, and as you see the cloud native developer first, really driving the standards of the super cloud, the multicloud, you're starting to see a lot more application focus around latency and kind of controlling that, These abstraction layer's starting to see a lot more growth. What's your take, Dave, on what Liz and- is talking about because, you know, you're analyzing the horses on the track, and there's sometimes the old guard security folks, and you got open source continuing to kick butt. And even on the ML side, we've been covering some of these foundation models, you're seeing a real technical growth in open source at all levels and, you know, you still got some proprietary machine learning stuff going on, but security's integrating all that. What's your take and your- what's your breaking analysis on the security piece here? >> I mean, to me the two biggest problems in cyber are just the lack of talent. I mean, it's just really hard to find super, you know, deep expertise and get it quickly. And I think the second is it's just, it's so many tools to deal with. And so the architecture of security is just this mosaic and a mess. That's why I'm excited about initiatives like eBPF because it does simplify things, and developers are being asked to do a lot. And I think one of the other things that's emerging is when you- when we talk about Industry 4.0, and IIoT, you- I'm seeing a lot of tools that are dedicated just to that, you know, slice of the world. And I don't think that's the right approach. I think that there needs to be a more comprehensive view. We're seeing, you know, zero trust architectures come together, and it's going to take some time, but I think that you're going to definitely see, you know, some rethinking of how to architect security. It's a game of whack-a-mole, but I think the industry is just- the technology industry is doing a really really good job of, you know, working hard to solve these problems. And I think the answer is not just another bespoke tool, it's a broader thinking around architectures and consolidating some of those tools, you know, with an end game of really addressing the problem in a more comprehensive fashion. >> Liz, in the last minute or so we have your thoughts on how automation and scale are driving some of these forcing functions around, you know, taking away the toil and the muck around developers, who just want stuff to be code, right? So infrastructure as code. Is that the dynamic here? Is this kind of like new, or is it kind of the same game, different kind of thing? (chuckles) 'Cause you're seeing a lot more machine learning, a lot more automation going on. What's, is that having an impact? What's your thoughts? 
>> Automation is one of the kind of fundamental underpinnings of cloud native. You know, we're expecting infrastructure to be written as code. We're expecting the platform to be defined in YAML, essentially. You know, we are expecting the Kubernetes and surrounding tools to self-heal and to automatically scale and to do things like automated security. If we think about supply chain, you know, automated dependency scanning; think about runtime, network policy is automated firewalling, if you like, for a cloud native era. So, I think it's all about making that platform predictable. Automation gives us some level of predictability, even if the underlying hardware changes or the scale changes, so that the application developers have something consistent and standardized that they can write to. And you know, at the end of the day, it's all about the business applications that run on top of this infrastructure. >> Business applications and the business outcomes. Liz, we so appreciate your time talking to us about this inaugural event, CloudNativeSecurityCon 23. The value in it for those practitioners, all of the content that's going to be discussed and learned, and the growth of the community. Thank you so much, Liz, for sharing your insights with us today. >> Thanks for having me. >> For Liz Rice, John Furrier and Dave Vellante, I'm Lisa Martin. You're watching theCUBE's coverage of CloudNativeSecurityCon 23. (electronic music)
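Liz's point above, that the platform is declared in YAML and that network policy acts as automated firewalling for the cloud native era, can be made concrete with a small sketch. The manifest below is a hypothetical example, not drawn from the interview: the app names, namespace, and port are invented, and it assumes the third-party PyYAML package. It only illustrates a Kubernetes NetworkPolicy being produced as code by automation rather than hand-edited on a cluster.

```python
# Minimal sketch: describe which pods may talk to an example "payments" service
# and emit the NetworkPolicy manifest that automation (not a human on the box)
# would apply. Assumes PyYAML is installed; all names here are made up.
import yaml

network_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "payments-allow-frontend", "namespace": "shop"},
    "spec": {
        # Select the pods this "firewall rule" protects.
        "podSelector": {"matchLabels": {"app": "payments"}},
        "policyTypes": ["Ingress"],
        # Only frontend pods may reach payments, and only on port 8443.
        "ingress": [
            {
                "from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}],
                "ports": [{"protocol": "TCP", "port": 8443}],
            }
        ],
    },
}

if __name__ == "__main__":
    # In a real pipeline this output would be committed to version control and
    # applied by automation (for example with `kubectl apply -f`), giving the
    # predictable, repeatable platform behaviour discussed above.
    print(yaml.safe_dump(network_policy, sort_keys=False))
```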

Published Date : Feb 2 2023


Michael Ferranti, Teleport | KubeCon + CloudNativeCon Europe 2022


 

>>theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >>Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, along with Paul Gillin, senior editor, enterprise architecture at SiliconANGLE. We are talking to some incredible folks this week, continuing the conversation around enabling developers to do their work. Paul, you've said that this conference is about developers. What are you finding key as a theme running throughout the show? >>That developers really need a whole set of special tools. You know, it's not the end user, the end user tools, the end user access controls, the authentication; it's developers, they need their own, to live in their own environment. They need their own workflow tools, their own collaboration and their own security. And that's where Teleport comes in. >>So speaking of Teleport, we have Michael Ferranti, chief marketing officer at Teleport. A new role for you. First, tell me, how long have you been at Teleport now? >>Going on seven or eight months now. >>Seven or eight months in this fast moving market. I'm, I'm going to tell you a painful experience I've had in this new world. We've built applications. We've moved fast. Audits come in. The auditors have come in and they said, you know what, who authorized this change to the cluster? And we'll go into the change ticket and say, this person authorized the changes in the change ticket. And then they'll ask for traceback. Okay, show me the change. What do you mean, show you the changes? It just happened. >>Yeah. Check, check GitHub. >>Yeah, check Git. See, we, we said we were gonna make the changes, the change happened. That's not enough. So how are you helping customers solve this access control and audit problem? >>Yeah, that's a great question. There're kind of, there're kind of two, two sides to the puzzle. And actually I think that the intro hits it well. You, you've talked about kind of developer experience, needing tools to more efficiently do the job as a practitioner. And you're coming at it from kind of a security and compliance angle. And there's a tension between both of those teams. It's like, you know, there's, there's a tension between dev and ops before we created DevOps. There's also a tension between kind of security teams and developers. So we've created DevSecOps. What that means is you need an easy way for developers to get access, access to the resources they need to do their jobs. That's, you know, Linux hosts and databases and Kubernetes clusters and, you know, monitoring dashboards, and managing all of those credentials is quite cumbersome. If I need to access a dozen systems, then you know, I'm using SSH keys to access this. >>I have admin credentials for my database. I, I'm going through a VPN to access an internal dashboard. Teleport consolidates all of that access into a single login via your identity provider, Okta, Active Directory, but then on the security and compliance side, we make it really easy for that compliance officer. When they say, show me that change, we have all of the audit logs that show exactly what changes Keith made when he logged into, into that system. And in fact, one of the booths behind here is talking about eBPF, a modern way to get that kind of kernel-level granularity. We build all of that observability into Teleport to make the security and compliance teams happy.
And the engineering teams a lot more productive. >>Where do the, the access control tools like Okta, you mentioned fall short. I mean, why, why is there a need for your level of, of control at the control plane? >>Yeah. When you, when you start to talk about authorization, authentication, audit at the infrastructure level, each of these technologies has its own way of managing what kind of in, in the jargon often and Ze, right? Authentication authorization. So you have SSH for, for Linux. Kubernetes has its own way of doing authorization. All of the database providers have their own way and it's quite complicated, right? It's, it's much different. So, you know, if I'm gonna access office 365 or I'm gonna a access Salesforce, right. I'm really talking about the HTTP protocol. It's relatively trivial to implement single sign on for web-based applications. But when we start talking about things that are happening at the Linux kernel level, or with Kubernetes, it's quite complicated to build those integrations. And that's where teleport extends what you have with your IDP. So for instance, Okta, lots of our customers use Okta as their identity provider, but then teleport takes those roles and applies them and enforces them at the actual infrastructure level. >>So if I'm a lay developer, I'm looking at this thinking, you know, I, I have service mesh, I've implemented link D SEO or something to that level. And I also have Ansible and Ansible has security, etcetera. What, what role, or how does that integrate to all together from a big picture perspective? >>Yeah. So >>What, one of the, kind of the meta themes at teleport is we, we like to, we like to say that we are fighting complexity cuz as we build new technologies, we tend to run the new tech on top of the old tech. Whereas for instance, when you buy a new car, you typically don't, you know, hook the old car to the back and then pull it around with you. Right? We, we replace old technology with new technology, but in infrastructure that doesn't happen as often. And so you end up with kind of layers of complexity with one protocol sitting on top of another protocol on top of another protocol. And what teleport does is for the access control plane, we, we kind of replace the legacy ways of doing authentication authorization and audit with a new modern experience. But we allow you to continue to use the existing tools. >>So we don't replace, for instance, you know, your configuration management system, you can keep using Ansible or, or salt or Jenkins, but teleport now is gonna give those, those scripts or those pipelines in identity that you can define. What, what should Ansible be able to do? Right? If, cuz people are worried about supply chain attacks, if a, if a vulnerable dependency gets introduced into your supply chain pipeline and your kind of Ansible playbook goes crazy and starts deploying that vulnerability everywhere, that's probably something you wanna limit with teleport. You can limit that with an identity, but you can still use the tools that you're, that you're used to. >>So how do I guarantee something like an ex-employee doesn't come in and, and initiate Ansible script that was sitting in the background just waiting to happen until, you know, they left. >>Yeah. Great question. It's there's kind of the, the, the great resignation that's happening. We did a survey where actually we asked the question kind of, you know, can you guarantee that X employees can no longer access your infrastructure? 
And shockingly, like 89% of companies could not guarantee that. It's like, wow, that's like, that should, that should be a headline somewhere. And we actually just learned that there are, on the dark web, there are people that are targeting current employees of Netflix and Uber and trying to buy credentials of those employees to the infrastructure. So it's a big problem. With Teleport, we solve this in a really easy, transparent way for developers. Everything that we do is based on short-lived certificates. So unlike an SSH key, which exists until you decommission it, short-lived certificates by, by default expire. And if you don't reissue them based on a new login, based on the identity, then, then you can't do anything. So even a stolen credential, kind of, its value decreases dramatically over time. >>So that statistic, where four out of five companies can't guarantee ex-employees can't access infrastructure. Why is simply removing the employee from the, you know, from the LDAP or directory, decommissioning their login credentials, why is that not sufficient? >>Well, it, it depends on if everything is integrated into your identity provider, and because of the complexities of accessing infrastructure, we know that developers are creative people. And by, by kind of by definition, they're able to create systems to make their lives easier. So one thing that we see developers doing is kind of copying an SSH key to a local notepad on, on their computer. So they essentially can take that credential out of a vault. They can put it somewhere that's easier for them to access. And if you're not rotating that credential, then I can also, you know, copy it to a, to a personal device as well. Same thing for shared admin credentials. So the, the, the issue is that those credentials are not completely managed in a unified way that enables the developer to not go around the system in order to make their lives easier. >>But rather to actually use the system, there's a, there's a market called privileged access management that a lot of enterprises are using to kind of manage credentials for their developers, but it's notoriously disruptive to developer workflows. And so developers kind of go around the system in order to make their jobs easier. What Teleport does is we obviate the need to go around the system, cuz the simplest thing is just to come in in the morning, log in one time to my identity provider, and now I have access to all of my servers, all of my databases, all of my Kubernetes clusters with a short-lived certificate that's completely transparent. >>And does this apply to, to your, both your local and your cloud accounts? >>Yes. Yes, exactly. >>So as a security company, what's driving the increase in security breaches? Is it the lack of developer hygiene? Is it this ex-employee, great resignation thing? Is it external intruders? What's driving security breaches today? >>Yes. >>It's, you know, it's, it's all of those things. I think if I had to put, give you a one word answer, I would say complexity. The systems that we are building are just massively complex, right? Look at how many vendors there are at this show in order to make Kubernetes easy to use, to do what it promises. It's just, we're building very complex systems. When you build complex systems, there's a lot of back doors; we call it kind of an attack surface.
And that's why for every new thing that we introduce, we also need to think about how do we remove old layers of the stack so that we can simplify so that we can consolidate and take advantage of the power of something like Kubernetes without introducing security vulnerabilities. >>One of the problems or challenges with security solutions is, you know, you there's this complexity versus flexibility knob that you, you need to be careful of. What's the deployment experience in integration experience for deploying teleport. >>Yeah, it's it, we built it to be cloud native to feel like any other kind of cloud native or Kubernetes like solution. So you basically, you deploy it using helm chart, you deploy it using containers and we take care of all of the auto configuration and auto update. So that it's just, it's, it's part of your stack and you manage it using the same automation that you use to manage everything else. That's a, that's a big kind of installation and developer experience. Part of it. If it's complex to use, then not only are developers not gonna use it. Operations teams are not gonna want to have to deal with it. And then you're left with doing things the old way, which is very unsatisfactory for everybody. >>How does Kubernetes change the security equation? Are there vulnerabilities? It introduces to the, to the stack that maybe companies aren't aware of >>Almost by definition. Yes. Kind of any new technology is gonna introduce new security vulnerabilities. That's the that's that is the result of the complexity, which is, there are things that you just don't know when you introduce new components. I think kind of all of the supply chain vulnerabilities are our way of looking at that, which is we have, you know, Kubernetes is itself built on a lot of dependencies. Those dependencies themselves could have security vulnerabilities. You might have a package that's maintained by one kind of hobbyist developer, but that's actually deployed across hundreds of thousands of applications across, across the internet. So again, it's about one understanding that that complexity exists and then saying, is there a way that we can kind of layer on a solution that provides a common layer to let us kind of avoid that complexity and say, okay, every critical action needs to be authorized with an identity that way if it's automated or if it's human, I have that level of assurance that a hacked Ansible pipeline is not going to be able to introduce vulnerabilities across my entire infrastructure. >>So one of the challenges for CIOs and CTOs, it's the lack of developer resources and another resulting pain point that compounds that issue is rework due to security audits is teleport a source of truth that when a auditor comes in to audit a, a, a, a C I C D pipeline that the developer or, or operations team can just say, Hey, here's, self-service get what you need. And come back to us with any questions or is there a second set of tools we have to use to get that audit and compliance reporting? >>Yeah, it's teleport can be that single source of truth. We can also integrate with your other systems so you can export all of the, what we call access logs. So every, every behavior that took place, every query that was run on a database, every, you know, curl command that was run on a Lennox, host, teleport is creating a log of that. And so you can go in and you can filter and you can view those, those actions within teleport. 
But we also integrate with other systems that, that people are using, whether it's Splunk or Datadog or whatever other toolchain; it's really important that we integrate, but you can also use Teleport as that single source. >>So you can work with the observability suites that are now being installed. >>Yeah, the, the wonderful thing about kind of an ecosystem like Kubernetes is there's a lot of standardization. You can pick your preferred tool, but under the hood, the protocols for taking a log and putting it in another system are standardized. And so we can integrate with any of the tools that developers are already using. >>So how big is Teleport? And I'm thinking about a, a couple of things: big as in what's the footprint, and then from a developer and operations team overhead, is this kind of a set it and forget it, how much care, feeding and maintenance does it need? >>So it's very lightweight. We basically have kind of two components. There's the, the access proxy that sits in front of your infrastructure. And that's what enables us to, you know, regardless of the complexity that sits across your multi data center footprint, your traditional applications running on Windows, your, your, your modern applications running on, you know, Linux and Kubernetes, we provide seamless access to all of that. And then there's an agent that runs on all of your hosts. And this is the part that can be deployed using Helm or any other kind of cloud native deployment methodology, and that enables us to do the, the granular application-level audit. For instance, what queries are actually being run on CockroachDB or on, on Postgres, you know, what, what syscalls are running on the Linux kernel. Very lightweight automation can be used to install, manage, upgrade all of it. And so from an operations perspective, kind of bringing in Teleport shouldn't be any more complicated than running any application in a container. That's, that's the design goal and what we built for our customers. >>If I'm in a hybrid environment, I'm transitioning, I'm making the migration to Teleport. Is this a, is this a solution that sits only on the Kubernetes cloud native side? Or is this something that I can transition to initially, and then migrate all of my applications to, as I transition to cloud native? >>Yeah. We, there are kind of, no, there are no cloud native dependencies for Teleport. Meaning if you are, you're a hundred percent Windows shop, then we support, for instance, RDP. That's the way in which Windows handles remote access. If you have some applications that are running on Linux, we can support that as well. If you've got kind of the, you know, the complete opposite end of the spectrum, you're doing everything cloud native, containers, Kubernetes, everything, we also support that. >>Well, Michael, I really appreciate you stopping by and sharing the Teleport story. Security is becoming an obvious pain point for cloud native and container management, and Teleport has a really good story around ensuring compliance and security. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillin, and you're watching theCUBE, the leader in high-tech coverage.
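To make the short-lived-credential idea from this interview concrete, here is a minimal, stdlib-only Python sketch. It is not how Teleport actually works (Teleport issues real X.509 and SSH certificates tied to your identity provider); the signing key, identity, roles, and TTL below are all invented for illustration. The point is only that a credential which expires on its own loses its value to an attacker, unlike a long-lived SSH key copied to a notepad.

```python
# Conceptual sketch of a short-lived, signed credential. Not Teleport's
# implementation; every name and value here is hypothetical.
from __future__ import annotations

import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"        # stand-in for the auth service's key
CREDENTIAL_TTL_SECONDS = 8 * 60 * 60     # e.g. one working day

def issue_credential(identity: str, roles: list[str]) -> str:
    """Mint a signed credential that expires on its own, unlike a static SSH key."""
    claims = {
        "sub": identity,
        "roles": roles,
        "exp": int(time.time()) + CREDENTIAL_TTL_SECONDS,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload.hex() + "." + sig

def verify_credential(token: str) -> dict | None:
    """Return the claims if the signature is valid and the credential has not expired."""
    payload_hex, sig = token.split(".")
    payload = bytes.fromhex(payload_hex)
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                      # tampered or forged
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        return None                      # a stolen but expired credential is worthless
    return claims

if __name__ == "__main__":
    token = issue_credential("keith@example.com", ["db-read", "k8s-view"])
    print(verify_credential(token))      # valid now, None once the TTL has passed
```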

Published Date : May 19 2022


Paul Cormier, Red Hat | Red Hat Summit 2022


 

>>To the Seaport in Boston, Massachusetts, everybody's buzzing. The Bruins are playing tonight. They tied it up. The Celtics tied it up last night. We're excited. We don't talk about the Red Sox. The Red Sox are struggling, but you know, we have good distractions. Paul Cormier is here. He's the president and chief executive officer at Red Hat, and also a Boston fan. Great to see you, of course, you too. >>Nice to see you guys, you know, it's been a, it's been a while. >><laugh> Yeah, we saw you, you know, online and virtually for a couple of years there, but, uh, you know, we've been doing Red Hat Summit for a long, long time. Yeah, of course we were talking earlier. It's just much more intimate, kind of a VIP event, a few more suit jackets here. You know, I got my tie on, so I don't get too much grief. I usually get grief when I wear a tie at Red Hat Summit, but it's a different format this year. Compressed keynotes. Your keynote was great. The new normal, sometimes we call it the new abnormal <laugh>, uh, but you know, how do you feel? >>I, I, I, I feel great. First of all, you know, combination today, virtual audience and in-house audience here today. I think we're gonna see a lot of that in the future. I mean, we designed the event around that and I, I think it, I think it played pretty well. Kudos, kudos to our team. You're right. It's, it's, it's a bit more intimate, even the way it was set up, but those are the conversations we like having with our customers and our partners, much more partner centric, uh, as well right now, as well. >>You know, we were talking about, you know, hybrid cloud. It was kind of, you know, it was a good marketing term. And, but now it's, it's, it's become the real thing. I've said many times the, the definition of cloud is changing. It's expanding. It's no, the cloud is no longer this remote set of services, you know, somewhere up in the cloud; it's on prem, connecting to a cloud, across clouds, out to the edge, and you need capabilities that work everywhere. And that's what Red Hat did. The market's just swimming toward you. >>Yeah. I mean, you look at it, you know, I was, uh, you know, if you look at it, you know, the clouds are powerful unto themselves, right? The clouds are powerful unto themselves. They're all different. Right? And that, that's, I mean, hardware vendors were, were similar, but different, same thing. You need that connective tissue across, across the whole thing. I mean, as I said in my keynote today, I remember talking to some of our CIOs and customers 10 years ago and they said, we're going 90% of our apps tomorrow to one cloud. And we knew that wasn't practical, because of course the clouds are built from Linux. So we knew it was underneath the hood and, and what's happened. It's taken some time, but as they started to get into that, they started to see, well, maybe one cloud's more suited for one application than the other; these apps you may have to keep on premise, but you know, what really exploded it, the, the, the hybrid thing, the edge. Now they're putting things at the edge, the GM announcement tells you, I know you're gonna talk to Francis, yeah, yeah, later. I mean, that's, that's a mini data center in, in every car, but that's still under the purview of the CIO, you know? So, so, so that's what hybrid's all about, is tying all those pieces together, cuz it got more powerful, but it also got more complex.
>>You mentioned being the connective tissue, but we don't hear as much talk about multi-cloud seems to me, as we used to this conference has been all about hybrid cloud. You don't really talk about multi-cloud. How important is that to the red hat strategy, being that consistent layer? >>It's probably my mistake or our mistake because multi's more prevalent and more important than just hybrid alone. I mean, hybrid hybrid started from on-premise to one part to any one particular cloud. That was the, the first thought of hybrid. But as I said, as, as, as um, some of the cloud providers became so big, um, every, every CIO I talked to, whether they know whether they know it or not most do are in a multi environment for a whole bunch of reasons, right. You know, one cloud provider might be better in a different part of the world. And another one cloud provider might have a better service than another. Some just don't like to be stuck to one it's it's really hybrid multi. We should, we should train ourselves to every time we say hybrid, say multi, because that's really, that's really what it is. It, I think that happened overnight with, with Microsoft, you know, with Microsoft they've, they've, they've really grown over the last few years, so has Amazon for that matter. But Microsoft really coming up is what really made it a, a high, a multi world. >>Microsoft's remarkable what, what they're doing. But I, I, I have a different thinking on this. I, I heard Chuck Whitten last week at, at the Dell conference he used, he said used the phrase a multicloud, uh, by default versus multi-cloud by design. And I thought that was pretty interesting because I've said that multi-cloud is largely multi-vendor, you know? And so hybrid has implications, right? We, we bring and a shesh came up with a new term today. Metacloud I use Supercloud I like Metacloud better because something's happening, Paul. It feels like there's this layer abstraction layer that the underlying complexity is hidden. Think about OpenShift. Yeah. I could buy, I could get OpenShift for free. Yeah. I mean, I could, and I could cobble together and stitch together at 13, 15 dozens of different services and replicate, but I don't, I don't want that complexity. I want you to hide that complexity. I want, I'd rather spend money on your R and D than my engineering. So something's changing. It feels like >>You buy that. I totally buy that. I mean, you know, I, I, I'm gonna try to not make this sound like a marketing thing because it's not, not fair enough. Right. I mean, I'm engineer at heart, you know that, so, >>Okay. >>I really look to what we're trying to do is we're building a hybrid multi cloud. I mean that we, I look at us as a cloud provider spanning the hybrid multi all the way out to the edge world, but we don't have the data centers in the back. Like the cloud providers do in and by that is you're seeing our products being consumed more like cloud services because that's what our customers are demanding. Our, our products now can be bought out of the various marketplaces, et cetera. You're seeing different business models from us. So, uh, you're seeing, uh, committed spend, for example, like the cloud providers where a customer will buy so much up front and sort of just work it down. You're seeing different models on how they're consumed, consumption, based pricing. These, these are all things that came from the cloud providers and customers buying like that. >>They now want that across their entire environment. 
They don't wanna buy differently on premise or in one cloud and they don't wanna develop differently. They don't wanna operate differently. They don't wanna have to secure it differently. Security's the biggest thing with, with our, with our customers, because hybrid's powerful, but you no longer have the, you know, your security per perimeter, no longer the walls of your data center. You know, you're, you're responsible as a CIO. You're responsible for every app. Yeah. No matter where it's running, if that's the break in point, you're responsible for that. So that's why we've done things like, you know, we cried stack rocks. We've, we've built it into the container Kubernetes platform that spans those various footprints because you no longer can just do perimeter security because the perimeter is, is very, very, very large right now >>Diffuse. One of the thing on the multi-cloud hyper skills, I, I, red hat's never been defensive about public cloud. You, I think you look at the a hundred billion dollars a year in CapEx spend that's a gift to the industry. Not only the entire it industry, but, but the financial services companies and healthcare companies, they can build their own hybrid clouds. Metacloud super clouds taking advantage of that, but they still need that connective tissue. And that's where >>We products come in. We welcome our customers to go to, to the public cloud. Um, uh, look, it's it's. I said a long time ago, we said a long time it was gonna be a hybrid. Well, I should have said multi anybody said hybrid, then it's gonna be a hybrid world. It is. And it doesn't matter if it's a 20, 80, 80, 20, 40, 60, 60, 40. It's not gonna be a hundred percent anywhere. Yeah. And, and so in that, in that definition, it's a hybrid multi world. >>I wanna change the tune a little bit because I've been covering IBM for 40 years and seen a lot of acquisitions and see how they work. And usually it follows the same path. There's a commitment to leaving the acquire company alone. And then over time that fades, the company just becomes absorbed. Same thing with red hat. It seems like they're very much committed to, to, to leaving you alone. At least they said that upon the acquisition, have they followed through on that promise? >>I have to tell you IBM has followed through on every commitment they've made, made to us. I mean, I, I owe it, I owe a lot of it to Arvin. Um, he was the architect of the deal, right. Um, we've known each other for a long time. Um, he's a great guy. Um, he, uh, he, he believes in it. It's not, he's not just doing it that way because he thinks, um, something bad will happen if he doesn't, he's doing it that way. Cuz he believes in that our ecosystem is what made us. I mean, I mean, even here it's about the partners in the ecosystem. If you look at what made REL people think what made red hat as a company was support, right. Support's really important. Small piece of the value proposition life cycle supports certainly their life cycle a 10 year life cycle just came out of a, a, a customer conference asking about the life cycle and could we extend it to 15 years? You know? Um, the ecosystem is probably the most important part of, of, of, of the, of the overall value proposition. And Arvin knows in IBM knows that, you know, we have to be neutral to be able to do everything the same for all of our ecosystem partners. Some that are IBM's competitors, even. 
So, >>So we were noticing this morning, I mean, aside from a brief mention of power PC and the IBM logo during, at one point, there was no mention of IBM during the keynote sessions this morning. Is that intentional? Or is that just >>No, no, it it's, it's not intentional. I mean, I think that's part of, we have our strategy to drive and we're, we're driving our, our strategy. We, we, we IBM great partner. We look at them as a partner just as we do our, our many other partners and we won't, you know, we wouldn't, we wouldn't do something with our products, um, for I with IBM that we wouldn't offer to our, our entire ecosystem. >>But there is a difference now, right? I don't know these numbers. Exactly. You would know though, but, but pre 2019 acquisition red hat was just, I think north of 3 billion in revenue growing at maybe 12% a year. Something like that, AR I mean, we hear on the earnings calls, 21% growth. I think he's publicly said you're north of 5 billion or now I don't know how much of that consulting gets thrown in. IBM likes to, you know, IBM math, but still it's a much bigger business. And, and I wonder if you could share with us, obviously you can't dig into the numbers, but have you hired more people? I would imagine. I mean, sure. Like what's been different from that standpoint in terms of the accelerant to your >>Business. Yeah. We've been on the same hiring cycle percentage wise as, as we, we always were. I mean, I think the best way to characterize the relationship and where they've helped is, um, Arvin, Arvin will say, IBM can be opinionated on red hat, but not the other way around <laugh>. So, so what that, what that means is they had a lot of, they had, they had a container based Linux platform. Yeah, right, right. They, they had all their, they were their way of moving to the cloud was that when we came in, they actually stopped that. And they standardized on OpenShift across all of their products. We're now the vehicle that brings the blue software products to the hybrid cloud. We are that vehicle that does it. So I think that's, that's how, that's how they, they look about it. I mean, I know, I mean in IBM consulting, I know, I know they have a great relationship with Microsoft of course. >>Right. And so, so that's, that's how to really look at it. They they're opinionated on us where we not the other way around, but that, but they're a great partner. And even if we're at two separate companies, we'd do be doing all the same things we're doing with them. Now, what they do do for us can do for us is they open a lot of doors in many cases. I mean, IBM's been around for over a hundred years. So in many cases, they're in, in, in the C-suite, we, we may be in the C suite, but we may be one layer down, one, two layers down or something. They, they can, they help us get access. And I think that's been a, a part of the growth as well as is them talking into their, into, into their >>Constituents. Their consulting's one of the FA if not the fastest growing part of their business. So that's kind of the tip of the spear for application modernization, but enough on IBM you said something in your keynote. That was really interesting to me. You said, you, you, you didn't use the word hardware Renaissance, but that my interpretation was you're expecting the next, you know, several years to be a hardware Renaissance. We, we certainly have done relationships with arm. You mentioned Nvidia and Intel. Of course, you've had relationships with Intel for a long time. 
And we're seeing just the spate of new hardware developments, you know, does hardware matter? I'll ask you, >>Oh, oh, I mean the edge, as I said, you're gonna see hardware innovation out in the edge, software innovation as well. You know, the interesting part about the edge is that, you know, obviously remade red hat. What we did with REL was we did a lot of engineering work to make every hardware architecture when, when it was, when, when the world was just standalone servers, we made every hardware architecture just work out of the box. Right? And we did that in such, because with an open source development model. So embedded in our psyche, in our development processes is working upstream, bringing it downstream 10 years, support all of that kind of thing. So we lit up all that hardware. Now we go out to the edge, it's a whole new, different set of hardware innovation out at the edge. We know how to do that. >>We know how to, we know how to make hardware, innovation safe for the customer. And so we're bringing full circle and you have containers embedded in, in Linux and REL right now as well. So we're actually with the edge, bringing it all full circle back to what we've been doing for 20 plus years. Um, on, on the hardware side, even as a big part of the world, goes to containers and hybrid in, in multi-cloud. So that's why we're so excited about, about, about the edge, you know, opportunity here. That's, that's a big part of where hybrid's going. >>And when you guys talk about edge, I mean, I, I know a lot of companies will talk about edge in the context of your retail location. Okay. That's fine. That's cool. That's edge or telco that that's edge. But when you talk about, um, an in vehicle operating system, right. You know, that's to me the far edge, and that's where it gets really interesting, massive volumes, different architectures, both hardware and software. And a lot of the data may stay. Maybe it doesn't even get persisted. May maybe some comes back to the club, but that's a new >>Ballgame. Well, think about it, right? I mean, you, if you listen, I think you, right. My talk this morning, how many changes are made in the Linux kernel? Right? You're running in a car now, right? From a safety perspective. You wanna update that? I mean, look, Francis talked about it. You'll talk to Francis later as well. I mean, you know, how many, how many in, in your iPhone world Francis talked about this this morning, you know, they can, they can bring you a whole new world with software updates, the same in the car, but you have to do it in such a way that you still stay with the safety protocols. You're able to back things out, things like that. So it's open source, but getting raw upstream, open source and managing itself yourself, I just, I'm sorry. It takes a lot of experience to be able to be able to do those kinds of things. So it's secure, that's insecure. And that's what that's, what's exciting about it. You look at E the telco world look where the telco world came from in the telco world. It was a hardware stack from the hardware firmware operating system, every service, whether it was 9 1, 1 or 4, 1, 1 was its own stack. Yep. In the 4g, 3g, >>4g >>Virtualized. Now, now it's all software. Yeah. Now it's all software all the way out to the cell tower. So now, so, so now you see vendors out there, right? As an application, as a container based application, running out, running in the base of a cell tower, >>Cell tower is gonna be a little mini data >>Center. Yeah, exactly. 
Because we're in our time here asking quickly, because you've been at red hat a long time. You, you, you, uh, architected a lot of the reason they're successful is, is your responsibility. A lot of companies have tried to duplicate the red hat model, the, the service and support model. Nobody has succeeded. Do you think anybody ever will or will red hat continue to be a unicorn in that respect? >>No, I, I, I think, I think it will. I think open source is making it into all different parts of technology. Now I have to tell you the, the reason why we were able to do it is we stayed. We stayed true to our roots. We made a decision a long time ago that we weren't gonna put a line, say everything below the line was open and above the line was closed. Sometimes it's hard sometimes to get a differentiation with the competition, it can be hard, but we've stayed true to that. And I, to this day, I think that's the thing that's made us is never a confusion on if it's open or not. So that forces us to build our business models around that as well. But >>Do you have a differentiated strategy? Talk about that. What's your what's your differentiation >>Are, are, well, I mean, with the cloud, a differentiation is that common cloud platform across I differentiate strategy from an open source perspective is to, to sort make open source consumable. And, and it's even more important now because as Linux Linux is the base of everything, there's not enough skills out there. So even, even a container platform like open source op like OpenShift, could you build your own? Certainly. Could you keep it updated? Could you keep it updated without breaking all the applications on top? Do you have an ecosystem around it? It's all of those things. It was, it was the support, the, the, the hardening the 10 year to predictability the ecosystem. That was, that was, that is the secret. I mean, we even put the secret out as open. >>Yeah, <laugh> right. Free, like a puppy, as they say. All right, Paul, thanks so much for coming back in the cubes. Great to see you face to face. Nice to see you guys get it. All right. Keep it right there. Dave Valante for Paul Gill, you're watching the cubes coverage of red hat summit, 2022 from Boston. Be right back.

Published Date : May 10 2022

SUMMARY :

getting struggles, but you know, we have good distractions. The new normal, sometimes we call it the new abnormal <laugh>, uh, but you know, how do you feel? First of all, you know, combination today, virtual audience in, You know, we were talking about, you know, hybrid cloud. You may have to keep on premise, but you know, You mentioned being the connective tissue, but we don't hear as much talk about multi-cloud seems to me, with Microsoft, you know, with Microsoft they've, they've, they've really grown I want you to hide that complexity. I mean, you know, I, I, I'm gonna try to not make this sound like I really look to what we're trying to do is we're building a hybrid multi cloud. you know, your security per perimeter, no longer the walls of your data center. You, I think you look at the a hundred billion dollars a year in CapEx I said a long time ago, to, to leaving you alone. I have to tell you IBM has followed through on every commitment they've made, made to us. So we were noticing this morning, I mean, aside from a brief mention of power PC and the IBM and we won't, you know, we wouldn't, we wouldn't do something with our products, um, IBM likes to, you know, IBM math, but still it's a brings the blue software products to the hybrid cloud. And I think that's been a, So that's kind of the tip of the spear You know, the interesting part about the edge is that, about the edge, you know, opportunity here. And a lot of the data may stay. I mean, you know, how many, So now, so, so now you see vendors out there, right? Do you think anybody ever will or will red hat continue to be a unicorn in Now I have to tell you the, the reason why we were able to do it is we stayed. Do you have a differentiated strategy? I mean, we even put the secret out as open. Great to see you face to face.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

Dave ValantePERSON

0.99+

Red SoxORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

ArvinPERSON

0.99+

NvidiaORGANIZATION

0.99+

red SoxORGANIZATION

0.99+

FrancisPERSON

0.99+

90%QUANTITY

0.99+

Paul GillPERSON

0.99+

PaulPERSON

0.99+

AmazonORGANIZATION

0.99+

15 yearsQUANTITY

0.99+

40 yearsQUANTITY

0.99+

10 yearQUANTITY

0.99+

Paul CormierPERSON

0.99+

firstQUANTITY

0.99+

last weekDATE

0.99+

Chuck WhittenPERSON

0.99+

BostonLOCATION

0.99+

20 plus yearsQUANTITY

0.99+

21%QUANTITY

0.99+

IntelORGANIZATION

0.99+

13QUANTITY

0.99+

LinuxTITLE

0.99+

Boston, MassachusettsLOCATION

0.99+

two separate companiesQUANTITY

0.99+

OpenShiftTITLE

0.99+

10 yearsQUANTITY

0.99+

two layersQUANTITY

0.99+

todayDATE

0.99+

one layerQUANTITY

0.98+

RELTITLE

0.98+

this yearDATE

0.98+

oneQUANTITY

0.98+

Paul goerPERSON

0.98+

CapExORGANIZATION

0.98+

Red HatORGANIZATION

0.98+

last nightDATE

0.98+

10 years agoDATE

0.98+

bothQUANTITY

0.98+

CelticsORGANIZATION

0.98+

one partQUANTITY

0.97+

20QUANTITY

0.97+

tomorrowDATE

0.97+

one applicationQUANTITY

0.96+

iPhoneCOMMERCIAL_ITEM

0.96+

telcoORGANIZATION

0.95+

12% a yearQUANTITY

0.95+

over a hundred yearsQUANTITY

0.94+

Linux kernelTITLE

0.93+

one cloudQUANTITY

0.93+

RELORGANIZATION

0.93+

hundred percentQUANTITY

0.93+

this morningDATE

0.91+

red hat summitEVENT

0.91+

tonightDATE

0.9+

Luke Hinds, Red Hat | KubeCon + CloudNativeCon NA 2021


 

>>Welcome to this cube conversation. I'm Dave Nicholson and we're having this conversation in advance of cube con cloud native con north America, 2021. Uh, we are going to be talking specifically about a subject near and dear to my heart, and that is security. We have a very special guest from red hat, the security lead from the office of the CTO. New kinds. Welcome. Welcome to the cube Luke. >>Oh, it's great to be here. Thank you, David. Really looking forward to this conversation. >>So you have a session, uh, at a CubeCon slash cloud native con this year. And, uh, frankly, I look at the title and based on everything that's going on in the world today, I'm going to accuse you of clickbait because the title of your session is a secure supply chain vision. Sure. What other than supply chain has is in the news today, all of these things going on, but you're talking about the software supply chain. Aren't you tell, tell us about, tell us about this vision, where it came from Phyllis in. >>Yes, very much. So I do agree. It is a bit of a buzzword at the moment, and there is a lot of attention. It is the hot topic, secure supply chains, thanks to things such as the executive order. And we're starting to see an increase in attacks as well. So there's a recent statistic came out that was 620%. I believe increase since last year of supply chain attacks involving the open source ecosystem. So things are certainly ramping up. And so there is a bit of clickbait. You got me there. And um, so supply chains, um, so it's predominantly let's consider what is a supply chain. Okay. And we'll, we'll do this within the context of cloud native technology. Okay. Cause there's many supply chains, you know, many, many different software supply chains. But if we look at a cloud native one predominantly it's a mix of people and machines. >>Okay. So you'll have your developers, uh, they will then write code. They will change code and they'll typically use our, a code revision control system, like get, okay, so they'll make their changes there. Then push those changes up to some sort of repository, typically a get Harbor or get level, something like that. Then another human will then engage and they will review the code. So somebody that's perhaps a maintain will look at the code and they'll improve that a code. And then at the same time, the machine start to get involved. So you have your build servers that run tests and integration tests and they check the code is linted correctly. Okay. And then you have this sort of chain of events that start to happen. These machines, these various actors that start to play their parts in the chain. Okay. So your build system might generate a container image is a very common thing within a cloud native supply chain. >>Okay. And then that image is typically deployed to production or it's hosted on a registry, a container registry, and then somebody else might utilize that container image because it has software that you've packaged within that container. Okay. And then this sort of prolific expansion of use of coasts where people start to rely on other software projects for their own dependencies within their code. Okay. And you've got this kind of a big spaghetti of actors that are dependent on each other and feed him from each other. Okay. And then eventually that is deployed into production. Okay. So these machines are a lot of them non open source code. Okay. Even if there is a commercial vendor that manages that as a service, it's all based on predominantly open source code. 
Okay. And the security aspects with the supply chain is there's many junctures where you can exploit that supply chain. >>So you can exploit the human, or you could be a net ferrous human in the first place you could steal somebody's identity. Okay. And then there's the build systems themselves where they generate these artifacts and they run jobs. Okay. And then there are the production system, which pulls these down. Okay. And then there's the element of which we touched upon around libraries and dependencies. So if you look at a lot of projects, they will have approximately around a hundred, perhaps 500 dependencies that they all pull in from. Okay. So then you have the supply chains within each one of those, they've got their own set of humans and machines. And so it's a very large spaghetti beast of, of, of sort of dependence and actors and various identities that make up. >>Yeah. You're, you're describing a nightmarish, uh, scenario here. So, uh, so, so I definitely appreciate the setup there. It's a chain of custody nightmare. Yeah. >>Yes. Yeah. But it's also a wonderful thing because it's allowed us to develop in the paradigms that we have now very fast, you know, you can, you can, you can prototype and design and build and ship very fast, thanks to these tools. So they're wonderful. It's not to say that they're, you know, that there is a gift there, but security has arguably been left as a bit of an afterthought essentially. Okay. So security is always trying to it's at the back of the race. It's always trying to catch up with you. See what I mean? So >>Well, so is there a specific reason why this is particularly timely? Um, in, you know, when we, when we talk about deployment of cloud native applications, uh, something like 75% of what we think of is it is still on premesis, but definitely moving in the direction of what we loosely call cloud. Um, is why is this particularly timely? >>I think really because of the rampant adoption that we see. So, I mean, as you rightly say, a lot of, uh, it companies are still running on a, sort of a, more of a legacy model okay. Where deployments are more monolithic and statics. I mean, we've both been around for a while when we started, you would, you know, somebody would rack a server, they plug a network cable and you'd spend a week deploying the app, getting it to run, and then you'd walk away and leave it to a degree. Whereas now obviously that's really been turned on its head. So there is a, an element of not everybody has adopted this new paradigm that we have in development, but it is increasing, there is rapid adoption here. And, and many that aren't many that rather haven't made that change yet to, to migrate to a sort of a cloud type infrastructure. >>They certainly intend to, well, they certainly wished to, I mean, there's challenges there in itself, but it, I would say it's a safe bet to say that the prolific use of cloud technologies is certainly increasing as we see in all the time. So that also means the attack vectors are increasing as we're starting to see different verticals come into this landscape that we have. So it's not just your kind of a sort of web developer that are running some sort of web two.site. We have telcos that are starting to utilize cloud technology with virtual network functions. 
We have health, banking, FinTech: all of these large verticals are starting to come into cloud and to utilize the cloud infrastructure model, because it can save them money and make their development more agile; there are many benefits. So I guess the main thing is really that there's a convergence of industries coming into this space, which is starting to increase the security risks as well, because the security risks to a telco are a very different group from those of somebody developing a web platform, for example. >>Yeah. Now, you mentioned the obvious angle from the open source perspective, which is that a lot of this code is open source code. And I also assume it makes a lot of sense for the open source community to attack this problem, because you're talking about so many things in that chain of custody you described that one individual private enterprise is not likely to come up with something that handles all of it. So what's your vision for how we address this issue? I've seen, in some of the content you've produced, an allusion to the idea that it's very similar to the concept of secure HTTP. Imagine a world where HTTP is not secure at any time; it's something we can't imagine, yet we're living in this parallel world where code, which is one of the four Cs in cloud security, isn't secure. So what do we do about that? And as you share that with us, I want to dive in as much as we can on Sigstore: explain exactly what that is and how you came up with it. >>Yes. So the HTTPS story is incredibly apt for where we are. Around the open source ecosystem, we are at the HTTP stage: a majority of code is pulled in untrusted. I'm not talking so much here about somebody like a Red Hat or a large distributor that has their own signing infrastructure, but more about the wide open source ecosystem. The amount of code that's pulled in unverified is the majority. So it is like going to a website which is HTTP. And we sort of use this as a vision related to Sigstore and the other projects operating in this space. What happened, effectively, was that it was very common for sites to run on HTTP; even the likes of Amazon and some of the e-commerce giants used to run on HTTP. >>Obviously they were some of the first to deploy and utilize TLS, but many sites got left behind, because it was cumbersome to get a TLS certificate. I remember doing this myself: you would have to generate some keys and the certificate signing request, you'd have to work out how to run OpenSSL, you would then go to a commercial entity and probably have to scan your passport and send it to them, and there'd be this back and forth. Then you'd have to learn how to configure it on your machine. It was cumbersome, so a majority just didn't bother; they continued to run their websites unprotected. What effectively happened was that Let's Encrypt came along and disrupted that whole paradigm, where they made it free and easy to generate, procure, and set up TLS certificates.
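For contrast, here is roughly what the cumbersome path Luke remembers looked like next to the Let's Encrypt path, as a hedged sketch with placeholder domain names:

```bash
# The old way (sketch): generate a key pair and a certificate signing request with OpenSSL,
# then send the CSR to a commercial CA and wait for the back and forth.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout example.com.key -out example.com.csr \
  -subj "/CN=example.com"

# The Let's Encrypt way (sketch): one certbot run obtains and installs a free certificate.
sudo certbot --nginx -d example.com
```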
So what happened then was a very large change: the zeitgeist changed around TLS and the expectations of TLS, and it became common that most sites would run HTTPS. That allowed the browsers to effectively ring-fence and start to have controls where, if you're not running HTTPS, as it stands today, it's kind of socially unacceptable to run a site on HTTP. If you go to an HTTP site, it feels a bit like, am I going to catch a virus here? It's not accepted anymore, and it needed that disruptor to make it happen. So we want to replicate that sort of change in movement and perception around software signing, where a lot of software and code is not signed. And the reason it's not signed is because of the tools; it's the same story again, they're incredibly cumbersome to use, and adoption is very poor as well. >>So Sigstore specifically: where did this come from, and what's your vision for the future with it? >>Sure. So Sigstore is a lockdown project; it started around July 2020, approximately. A few people had been looking at secure supply chains around that time, and we really started to look at it. There were various people looking at this, so I'd been speaking with people at Purdue University, at Google, and others trying to address this space. And I'd had this idea kicking around for quite a while about a transparency log. Now, transparency logs, going back to HTTPS again, are heavily utilized there. When a root CA signs an HTTPS certificate, that's captured in this thing called a transparency log. A transparency log is effectively what we call an immutable, tamper-proof ledger. It's kind of like a blockchain, but it's different. >>And I had this idea: what if we could leverage this technology for the secure supply chain, so that we could capture the provenance of code and artifacts and containers, all of these actions and actors that I described at the beginning of the supply chain? Could we utilize that to provide a tamper-resistant, publicly auditable record of the supply chain? So I worked on a prototype over a week or two and got something basic happening. And it was a typical open source story there, so I wouldn't feel right taking all of the glory. It was a bit like Linux: when Linus Torvalds created Linux itself, he had an idea and he shared it out, and then others started to jump in and collaborate. It's a similar thing. >>I shared it with an engineer from Google's open source security team called Dan Lorenc, somebody I knew had been prolific in this space as well. He said, I'd love to contribute to this, can I work on it? And I was like, yeah, sure, the more the better. And then Santiago, a professor from Purdue University, also took an interest. So a small group of people started to work on this technology, and we built the project called Rekor, which was effectively the transparency log. We then started to approach projects to see if they would like to utilize this technology. And then we realized there was another problem, which was: we now have storage for signed artifacts.
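As an illustration of what a public, tamper-resistant record looks like in practice, here is a hedged sketch of recording and looking up a signed artifact with the Rekor CLI. The exact flag names are an assumption based on the project's tooling, and the file names are made up:

```bash
# Record an entry in the public transparency log: the artifact, its signature,
# and the public key that produced the signature (flag names assumed).
rekor-cli upload --artifact release.tar.gz \
                 --signature release.tar.gz.sig \
                 --public-key cosign.pub

# Later, anyone can search the log for entries referencing the same artifact.
rekor-cli search --artifact release.tar.gz
```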
A signed record, a provenance record, but nobody was signing anything. So how were we going to get people to sign things, so that we could then leverage this transparency log to fulfill its purpose of providing a public record? >>So then we had to look at the signing tools. That's where we came up with this really clever technology where we managed to create something called ephemeral keys; we're talking about a cryptographic key pair here. What we found was that we could utilize other technologies so that somebody wouldn't have to manage the private key, and they could generate keys almost point-and-click. So it was an incredibly simple user experience. Then we realized: okay, now we've got an approach for getting people to sign things, and we've also got this immutable, publicly auditable record of people signing code and containers and artifacts. And that was the birth of Sigstore. Sigstore was created as an umbrella project of all of these different tools catering towards adoption of signing, and then being able to provide guarantees and protections by having this transparency log, this sort of blockchain-type technology. That was where we really hit the killer application, and things started to really lift off, and adoption started to gather steam. >>So where are we now, and where does this go in the future? One of the wonderful things about the open source community is there's a sense of freedom in the creativity of coming up with a vision and then collaborating with others. Eventually you run headlong into expectations. So, is this going to be available for purchase in Q1? What's the plan? >>Yeah, I will fill you in there. So with Sigstore there are several different models at play; I'll give you the two predominant ones. One: we plan to run a public service. This will be under the Linux Foundation, and it'll be very similar to Let's Encrypt. So if you as a developer want to sign your container and you want to use Sigstore tooling, that will be available to you. It will be nonprofit and free to use; there are no special tiers for anybody, it's there for everybody to use. And that's to get everybody doing the right thing in signing things. The other model for Sigstore is that it can be run behind a firewall as well, so an enterprise can stand up their own Sigstore infrastructure: the transparency log, the code signing certificate system, the client tools. Then they can sign their own artifacts and software bills of materials, all of these sorts of things, and have their own tamper-proof record of everything that's happened. So that if anything untoward happens, such as a key compromise or somebody's identity being stolen, you've got a credible source of truth, because you've got that immutable record. We're seeing adoption around both models. We've seen a lot of open source projects starting to utilize Sigstore; Kubernetes is a key one to mention here, they are now using Sigstore to sign and verify their release images, and there are many other open source projects looking to leverage this as well. And at the same time, various people are starting to consider Sigstore as a sort of enterprise signing solution.
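To make the ephemeral-key, point-and-click idea concrete, here is a hedged sketch of the keyless signing flow with cosign, Sigstore's client tool, roughly as it worked around the time of this conversation. The image name is made up, and exact flags may differ between releases:

```bash
# Keyless signing: cosign authenticates you via OIDC, mints a short-lived certificate
# tied to that identity, signs the image, and records the signature in Rekor.
COSIGN_EXPERIMENTAL=1 cosign sign registry.example.com/team/app:1.0.0

# Verification checks the signature and certificate against the public transparency log,
# with no long-lived private key for the developer to manage.
COSIGN_EXPERIMENTAL=1 cosign verify registry.example.com/team/app:1.0.0
```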
So within Red Hat, our expectation is that we're going to leverage this in OpenShift. OpenShift customers who wish to sign their images, or who want to sign the configs they're using to deploy within Kubernetes and OpenShift, can start to leverage this technology as OpenShift customers. So we're looking to help the open source ecosystem here and also to dogfood this, and make it available and useful to our own customers at Red Hat. >>Fantastic. You know, I noticed the Red Hat in the background, and just a little historical note: Red Hat has been there from the beginning of cloud, before cloud was cloud, before there was anything credible from an enterprise perspective in cloud. I remember in the early two thousands, doing work with AWS, there was a team of Red Hat folks who would work through the night to do kernel-level changes for the Linux that was being used at the time. And so a lot of what you and your collaborators do often falls into the category of toiling in obscurity, to a certain degree. We hope to shine light on the amazing work that you're doing, and I, for one, appreciate it. I've suffered things like identity theft, and we've all had brushes with experiences where compromised security is not a good thing. So this has been a very interesting conversation, and again, thanks for the work that you do. Do you have any other final thoughts, or points we didn't cover on this subject that come to mind? >>There is something you touched upon that I'd like to illustrate. You mentioned identity theft and these things; well, the supply chain is critical infrastructure. So I like to think of it this way: yes, we're solving technical challenges, that aspect of software development, but with the supply chain, we rely on these systems. When we wake up each morning, we rely on them to stay in touch with our loved ones. Our emergency services, our military, our police force, they rely on these supply chains. So I see a bigger vision here: protecting the supply chain is for the good of our society, because a supply chain attack can go very much to the heart of our society. It can be an attack against our democracies. So I see this as something with a humanistic aspect to it as well, and that really gets me fired up to work on this technology. >>It's really important that we always keep that perspective. This isn't just about folks who will be attending KubeCon + CloudNativeCon; this is really something that's relevant to all of us. So with that, fantastic conversation, Luke, it's been a pleasure to meet you. Pleasure to talk to you, David. I look forward to hanging out in person at some point, whenever that may be. And with that, we will sign off from this Cube Conversation in anticipation of KubeCon + CloudNativeCon North America 2021. I'm Dave Nicholson. Thanks for joining us.

Published Date : Oct 14 2021


LIVE Panel: Container First Development: Now and In the Future


 

>>Hello, and welcome. Very excited to see everybody here. DockerCon is going fantastic; everybody's engaging in the chat, it's awesome to see. My name is Peter McKee, I'm the head of developer relations here at Docker. Today we're going to be talking about container first development, now and in the future. But before we do that, a couple of little housekeeping items. First of all, yes, we are live, so if you're in our session you can go ahead and chat and ask us questions; we'd love to get all your questions and answer them. If you come to the main page on the website and you do not see the chat, go ahead and click on the blue button, and that'll deep-dive you into our session where you can interact with the chat. Okay, without further ado, let's jump right into it. Katie, how are you? Welcome. Do you mind telling everybody who you are and a little bit about yourself? >>Absolutely. Hello everyone. My name is Katie, and currently I am the ecosystem advocate at the Cloud Native Computing Foundation, or CNCF. My responsibility is to lead and represent the end-user community. These are all the practitioners within the cloud native space that are vendor-neutral: they use cloud native technologies to build their services, but they don't sell them, which is quite an important characteristic. My responsibility is to close the gap between these practitioners and the project maintainers, to make sure there is a feedback loop there. I have many roles within the community: I am on the advisory board for Keptn, which is a sandbox project; I'm working with OpenUK to make sure that open standards are used fairly across data, hardware, and software; and I have been working with Udacity to distribute a cloud native fundamentals course, to make cloud native education accessible to everyone. So, looking forward to this panel and chatting with everyone. >>Awesome. Welcome, glad to have you here. Johannes, how are you? Can you tell everybody a little bit about yourself and who you are? >>Yeah, sure. Hi everybody, my name is Johannes, I'm one of the co-founders at Gitpod, which, in case you don't know, is an open source and container-based development platform, which is probably also the reason why you, Peter, reached out and invited me here. So, pleasure to be here, looking forward to the discussion. It is already a bit later in Munich, and actually my girlfriend had a remote cocktail class with her colleagues tonight, and it took me some stamina to say no to all the Moscow mules that were prepared just over there in my living room. >>Oh wow, you're way better than me. Well, welcome, thanks for joining us. Jerome, how are you? Good to see you. Can you tell everybody who you are and a little bit about yourself? >>Sure. Yeah, so I used to work at Docker, and some would say I'm a container hipster, because I was running containers in production before it was hype. I worked at Docker before it was even called Docker, and since 2018 I'm a freelancer doing training and consulting around Docker, containers, Kubernetes, all these things. So I used to help folks do stuff with Docker when I was there, and now I still help them with containers more generally speaking. So, how do we say, same team, different company, or something like that. >>Yeah, perfect. Good to see you, I'm glad you're on. Jacob, how are you?
Good to see you. Thanks for joining us. Tell everybody a little bit about yourself and who you are. >>Yeah. So I'm the creator of a tool called Mutagen, which is an open source development tool for doing high-performance file synchronization and network forwarding to enable remote development. I come from a physics background where I was sort of always doing remote development, whether that was on big central clusters or just some local machine that was a bit more powerful. After I graduated, I built this tool called Mutagen for doing remote development, and then, to my surprise, people just started using it with Docker containers, and that's kind of grown into its primary use case now. I've gotten really involved with the Docker community, talked with a lot of great people, and now I'm one of the Docker Captains, so I get to talk with even more people and join these events. But I'm kind of focused on remote development, because I like having all my tools available on my local machine, but I also like being able to pull in a little bit more powerful hardware, or maybe software that I can't run locally. So that's sort of my interest in Docker containers. >>Awesome. We're going to come back to that for sure. But thank you again, I really appreciate you all joining me. So, I've been thinking about container first development for a while, and what does that actually mean? Maybe we can define it in our own little way. I'll just throw it out to the panel: when you think about container first development, what comes to mind? What are you thinking about? Don't be shy. Go ahead, Jerome, you're never at a loss for words. >>To me, if I go back to the first training engagements we did back at Docker, kind of helping folks write Dockerfiles and start developing in containers, often we were replacing a setup with a bunch of Vagrant boxes and other VMs and combinations of local things. Very often they liked it a lot, and very soon they wanted to really develop in containers: run this microservice, this piece of code, whatever, run that in containers, because that meant they didn't have to maintain that thing on their own machine. So that's like five years ago; that's what it meant to me back then. However, today, if you say, okay, developing in containers, I'm thinking of course about things like Gitpod and, I forget what it's called, >>that other thing with VS Code that's going to run in a container. You have this VS Code thing running in your browser; well, obviously not in your browser, but in a container that you control from your browser, and many other things like that. I think that's where we want to go today. And that's really interesting from all kinds of perspectives: pair programming when we're not next to each other but actually thousands of miles away, or having this little environment that I can put aside and come back to later, without it using resources on my machine.
I don't know, having this dev service running somewhere in the cloud without needing anything else on my side; the possibilities are really endless. >>Yeah, perfect. You know, a little while ago I was torn: do I spin up containers, do I develop inside of my containers? There are file-syncing issues that we've been working on at Docker for a while, and Jacob is very familiar with those. Sometimes it becomes hard. And I love developing in the cloud, but I also have this screaming fast machine sitting on my desktop that I think I should take advantage of. So I guess another question is: should we be developing inside of containers? Is that a smart thing to do? I'd love to hear your thoughts around that. >>You know, I think it's one of those things where, for me, container first development is really about considering containers as a first-class citizen in terms of your development toolkit. There's not always that silver bullet that's the one thing you should use for everything; you shouldn't use containers if they're not fitting in or adding value to your workflow. But I think there are a lot of scenarios, super early on in the development process, like as soon as you get the server running and working and you're able to access it on your local system, where the value comes in of adding containers to what you're doing or to your project. For me, they're more of an orchestrational tool: if I don't have to have six different browser tabs open, with an API server running in one tab and a web server running in another tab and a database running in another tab, I can just encapsulate those and use them as an automation thing. So even if you have a super powerful computer, I think there's still value in using containers as an orchestrational mechanism. >>For sure. I think one of my original aha moments with Docker was: oh, I can spin up different versions of a database locally and not have to install it or configure it or anything; it just ran inside of a container, and that was it. Although it might seem simple to some people, that's very, very powerful. So I think being able to spin things up in containers very quickly is one of the super benefits. But I think developing in containers is hard right now, right? How do you do that? Does anybody have any thoughts on how you go about it? Should you use a container as just a development environment, creating an image and then running it with just your dev tools in it, or maybe with an editor all inside of it, so it's this process that's almost like a VM? I'll kick it back to the panel; I'd love to hear your thoughts on how you set up and configure containers to develop in. >>Maybe one step back again, to answer your question: what does container first development mean? I think it doesn't mean, by default, that it has to be in the cloud, right?
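As a concrete illustration of the points about throwaway databases and replacing a pile of terminal or browser tabs, here is a minimal, hypothetical sketch; the service names, images, and ports are made up:

```yaml
# docker-compose.yml: one file instead of several terminal tabs (illustrative only)
version: "3.8"
services:
  api:
    build: ./api              # the API server, built from local source
    ports: ["8000:8000"]
    depends_on: [db]
  web:
    build: ./web              # the front-end dev server
    ports: ["3000:3000"]
  db:
    image: postgres:13        # swap the tag to try a different database version
    environment:
      POSTGRES_PASSWORD: devonly
```

A single throwaway database is even simpler: `docker run --rm -e POSTGRES_PASSWORD=devonly -p 5432:5432 postgres:14` gives you another version to test against and leaves nothing behind when stopped.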
As you said, there are obvious benefits when it comes to the developer experience of containers, such as consistency: we have standardized tools and dependencies for the dev side of things, but it also makes the dev environment more similar to the whole pipeline that happens to the right of it, so CI/CD all the way to production. And there is security, which also somehow comes with standardization; vulnerability scanning tools like Snyk are doing a great job there. And for us at Gitpod, one of the key reasons we created Gitpod was literally creating this peace of mind for devs. From a developer's point of view, you no longer need to take care of all the hassle around setups and the things you need to install locally, based on some outdated README, on three operating systems across your company where everybody has something different, leading to these works-on-my-machine situations that really slow professional software developers down. Back to your point: with Gitpod we obviously have to package everything together in one container, because otherwise exactly the situation happens where you need to have five browser tabs open, so we try to leverage that. And I think a dev environment is not just the editor, right? A dev environment includes your source code, it includes a powerful shell, it includes file systems, it includes essentially all the tools you need in order to be productive, databases and so on. And yeah, we believe that should be encapsulated in a container. >>Yeah, awesome. Katie, you talk to a lot of end users and a lot of developers. What are your thoughts around container first development? What is the community out there screaming for, and screaming might be too grand a word, but I'd love to hear your thoughts. >>Absolutely. So when you're talking about container-driven development, the first thing that crosses my mind is awareness of the infrastructure or the platform you're going to run your application on top of, because usually when you develop your application you'd like to replicate the production or even the staging environment as much as possible, to make sure that when you deploy your application you have as few inconsistencies as possible, and at the same time you minimize the risk of something going wrong. So when talking about the community: when you deploy applications in containers and Kubernetes, you have to be aware of, and probably apply, some of the best practices, like introducing liveness and readiness probes, to make sure that your application can restart in case it actually goes down, or if it's starving for CPU or something like that. >>So I think when it comes to deployment and development of an application, the main thing is to actually improve the end developer experience. There has been a lot of focus in the community on building the right tools to run applications in production, but that doesn't necessarily go back to how the end developer is actually enabling that application to run in that production system. So I think the community has identified this now, and it's more and more trying to build momentum on enhancing the developer experience.
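For readers who have not met the probes Katie mentions, here is a minimal, hypothetical Kubernetes snippet (container name, port, and paths are made up) showing how a workload declares them:

```yaml
# Excerpt from a Deployment's Pod template (illustrative only)
containers:
  - name: api
    image: registry.example.com/team/api:1.0.0
    ports:
      - containerPort: 8000
    livenessProbe:             # repeated failures make the kubelet restart the container
      httpGet:
        path: /healthz
        port: 8000
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:            # failures take the Pod out of Service endpoints until it recovers
      httpGet:
        path: /ready
        port: 8000
      periodSeconds: 5
```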
And we've seen this with the introduction of many tools, Gitpod has been one of them, where we can have this portable development environment if we choose, and we can actually replicate it across different teams and different machines, which is quite handy. >>At the same time, we have tools such as Docker Compose, which has been a great tool to run things locally, and tools that automatically and dynamically sync any changes in your code to where it runs. So all of these kinds of tools are getting more mature. And again, this goes back to the point that we need to enhance the developer experience, and to what the right way to do so is. I think it really depends on the environment you have in production, because that's going to define some of the structure and the tools you're going to have internally, but at the same time it really depends on what you're trying to develop. Personally, I would like to see a bit more diversification in this area, because competing solutions are going to push us towards a new edge. So, definitely developer experience: if we're talking about development, that's what we need to enhance, and that's where I see the momentum building at the moment. >>Yeah, awesome. Jerome, I saw you shaking your head there, in agreement or maybe not, but what are your thoughts? >>I was just reacting to the "it depends." When I do training, that's probably the answer I give the most, each time somebody asks, oh, should we do this or that? And I was also looking at some of the questions in the chat, about, hey, should we have the editor in the container, or something like that. Folks can have pretty strong opinions one way or the other, but as always, it kind of depends what we do. It also depends on the team we're working with. You could have small teams with folks with lots of experience, and they all come with their own dev tools and editors and plugins, so you know you're going to get a "you'll pry Emacs out of my cold dead hands" or something like that.
And then maybe later you, we want to, you know, do that from the container in a way, and have your own Emacs, atom, sublime, vs code, et cetera, et cetera. Um, but I think it's great for containers here, as well as they reserve or particularly the opportunity. And I think like the, that, that's one thing where I see stuff like get blood being potentially super interesting. Um, it's hard for me to gauge because I confess I was never a huge ID kind of person had some time that gives me this weird feeling, like when I help someone to book some, some code and you know, that like with their super nice IDE and everything is set up, but they feel kind of lost. >>And then at some point I'm like, okay, let's, let's get VI and grep and let's navigate this code base. And that makes me feel a little bit, you know, as this kind of old code for movies where you have the old, like colorful guy who knows going food, but at the end ends up still being obsolete because, um, it's only a going for movies that whole good for masters and the winning right. In real life, we don't have conformance there's anymore mentioned. So, um, but part of me is like, yeah, I like having my old style of editor, but when, when the modern editorial modern ID comes with everything set up and configured, that's just awesome. That's I, um, it's one thing that I'm not very good at sitting up all these little things, but when somebody does it and I can use it, it's, it's just amazing. >>Yeah. Yeah. I agree. I'm I feel the same way too. Right. I like, I like the way I've I have my environment. I like the tools that I use. I like the way they're set up. And, but it's a big issue, right? If you're switching machines, like you said, if you're helping someone else out there, they're not there, your key bindings aren't there, you can't, you can't navigate their system. Right? Yeah. So I think, you know, talking about, uh, dev environments that, that Docker's coming out with, and we're, you know, there's a lot, there, there's a, it's super complex, all these things we're talking about. And I think we're taking the approach of let's do something, uh, well, first, right. And then we can add on to that. Right. Because I think, you know, setting up full, full developed environments is hard, right. Especially in the, the, um, cloud native world nowadays with microservices, do you run them on a repo? >>Do you not have a monitor repo? Maybe that would be interesting to talk about. I think, um, you know, I always start out with the mono repos, right. And you have all your services in there and maybe you're using one Docker file. And then, because that works fine. Cause everything is JavaScript and node. And then you throw a little Python in there and then you throw a little go and now you start breaking things out and then things get too complex there, you know, and you start pulling everything out into different, get repos and now, right. Not everything just fits into these little buckets. Right. So how do you guys think maybe moving forward, how do we attack that night? How do we attack these? Does separate programming languages and environments and kind of bring them all together. You know, we, we, I hesitate, we solve that with compose around about running, right about executing, uh, running your, your containers. But, uh, developing with containers is different than running containers. Right. It's a, it's a different way to think about it. So anyway, sorry, I'm rattling on a little bit, but yeah. 
Be interesting to look at a more complex, uh, setup right. Of, uh, of, you know, even just 10 microservices that are in different get repos and different languages. Right. Just some thoughts. And, um, I'm not sure we all have this flushed out yet, but I'd love to hear your, your, you guys' thoughts around that. >>Jacob, you, you, you, you look like you're getting ready to jump there. >>I didn't wanna interrupt, but, uh, I mean, I think for me the issue isn't even really like the language boundary or, or, um, you know, a sub repo boundary. I think it's really about, you know, the infrastructure, right? Because you have, you're moving to an era where you have these cloud services, which, you know, some of them like S3, you can, you can mock up locally, uh, or run something locally in a container. But at some point you're going to have like, you know, cloud specific hardware, right? Like you got TPS or something that maybe are forming some critical function in your, in your application. And you just can't really replicate that locally, but you still want to be able to develop against that in some capacity. So, you know, my, my feeling about where it's going to go is you'll end up having parts of your application running locally, but then you also have, uh, you know, containers or some other, uh, element that's sort of cohabitating with, uh, you know, either staging or, or testing or production services that you're, uh, that you're working with. >>So you can actually, um, you know, test against a really or realistic simulation or the actual, uh, surface that you're running against in production. Because I think it's just going to become untenable to keep emulating all of that stuff locally, or to have to like duplicate these, you know, and, you know, I guess you can argue about whether or not it's a good thing that, that everything's moving to these kind of more closed off cloud services, but, you know, the reality of situation is that's where it's going to go. And there's certain hardware that you're going to want in the cloud, especially if you're doing, you know, machine learning oriented stuff that there's just no way you're going to be able to run locally. Right. I mean, if you're, even if you're in a dev team where you have, um, maybe like a central machine where you've got like 10 or 20 GPU's in it, that's not something that you're going to be able to, to, to replicate locally. And so that's how I kind of see that, um, you know, containers easing that boundary between different application components is actually maybe more about co-location, um, or having different parts of your application run in different locations, on different hardware, you know, maybe someone on your laptop, maybe it's someone, you know, AWS or Azure or somewhere. Yeah. It'd be interesting >>To start seeing those boundaries blur right. Working local and working in the cloud. Um, and you might even, you might not even know where something is exactly is running right until you need to, you know, that's when you really care, but yeah. Uh, Johanas, what's your thoughts around that? I mean, I think we've, we've talked previously of, of, um, you know, hybrid kind of environments. Uh, but yeah. What, what's your thoughts around that? >>Um, so essentially, yeah, I think, I mean, we believe that the lines between cloud and local will also potentially blur, and it's actually not really about that distinction. 
It's just packaging your dev environment in a way and provisioning your dev environment in a way that you are what we call always ready to coat. So that literally, um, you, you have that for the, you described as, um, peace of mind that you can just start to be creative and start to be productive. And if that is a container potentially running locally and containers are at the moment. I think, you know, the vehicle that we use, um, two weeks ago, or one week ago actually stack blitz announced the web containers. So potentially some things, well, it's run in the browser at some point, but currently, you know, Docker, um, is the standard that enables you to do that. And what we think will happen is that these cloud-based or local, um, dev environments will be what we call a femoral. So it will be similar to CIS, um, that we are using right now. And it doesn't literally matter, um, where they are running at the end. It's just, um, to reduce friction as much as possible and decrease and yeah, yeah. Essentially, um, avoid or the hustle that is currently involved in setting up and also managing dev environments, um, going forward, which really slows down specifically larger teams. >>Yeah. Yeah. Um, I'm going to shift gears a little bit here. We have a question from the audience in chat, uh, and it's, I think it's a little bit two parts, but so far as I can see container first, uh, development, have the challenges of where to get safe images. Um, and I was going to answer it, but let me keep it, let me keep going, where to get safe images and instrumentation, um, and knowing where exactly the problem is happening, how do we provide instrument instrumentation to see exactly where a problem might be happening and why? So I think the gist of it is kind of, of everything is in a container and I'm sitting outside, you know, the general thought around containers is isolation, right. Um, so how do I get views into that? Um, whether debugging or, or, or just general problems going on. I think that's maybe a broader question around the, how you, you know, you have your local hosts and then you're running everything containers, and what's the interplay there. W what's your thoughts there? >>I tend to think that containers are underused interactively. I mean, I think in production, you have this mindset that there's sort of this isolated environment, but it's very, actually simple to drop into a shell inside of a container and use it like you would, you know, your terminal. Um, so if you want to install software that way, you know, through, through an image rather than through like Homebrew or something, uh, you can kind of treat containers in that way and you can get a very, um, you know, direct access to the, to the space in which those are running in. So I think, I think that's maybe the step one is just like getting rid of that mindset, that, that these are all, um, you know, these completely encapsulated environments that you can't interact with because it's actually quite easy to just Docker exec into a container and then use it interactively >>Yeah. A hundred percent. And maybe I'll pass, I'm going to pass this question. You drone, but maybe demystify containers a little bit when I talked about this on the last, uh, panel, um, because we have a question in the, in the chat around, what's the, you know, why, why containers now I have VMs, right? And I think there's a misunderstanding in the industry, uh, about what, what containers are, we think they're fair, packaged stuff. 
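A small sketch of the interactive use described above, using standard Docker commands with a made-up container name and image:

```bash
# Drop into a shell inside a container that is already running:
docker exec -it my-api bash

# Or start a throwaway, interactive dev shell with the current source mounted in,
# so the toolchain comes from the image rather than from the laptop:
docker run -it --rm -v "$PWD":/src -w /src node:16 bash
```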
And I think Jacob was hitting on that of what's underneath the hood. So maybe drown, sorry, for a long way to set up a question of what, what, what makes up a container, what is a container >>Is a container? Well, I, I think, um, the sharpest and most accurate and most articulate definition, I was from Alice gold first, and I will probably misquote her, but she said something like containers are a bunch of capsulated processes, maybe running on a cookie on welfare system. I'm not sure about the exact definition, but I'm going to try and, uh, reconstitute that like containers are just processes that run on a Unix machine. And we just happen to put a bunch of, um, red tape or whatever around them so that they are kind of contained. Um, but then the beauty of it is that we can contend them as much, or as little as we want. We can go kind of only in and put some actual VM or something like firecracker around that to give some pretty strong angulation, uh, all we can also kind of decontam theorize some aspects, you know, you can have a container that's actually using the, um, the, um, the network namespace of the host. >>So that gives it an entire, you know, wire speed access to the, to the network of the host. Um, and so to me, that's what really interesting, of course there is all the thing about, oh, containers are lightweight and I can pack more of them and they start fast and the images can be small, yada yada, yada. But to me, um, with my background in infrastructure and building resilient, things like that, but I find really exciting is the ability to, you know, put the slider wherever I need it. Um, the, the, the ability to have these very light containers, all very heavily, very secure, very anything, and even the ability to have containers in containers. Uh, even if that sounds a little bit, a little bit gimmicky at first, like, oh, you know, like you, you did the Mimi, like, oh, I heard you like container. >>So I put Docker when you're on Docker. So you can run container for you, run containers. Um, but that's actually extremely convenient because, um, as soon as you stop building, especially something infrastructure related. So you challenge is how do you test that? Like, when we were doing.cloud, we're like, okay, uh, how do we provision? Um, you know, we've been, if you're Amazon, how do you provision the staging for us installed? How do you provision the whole region, Jen, which is actually staging? It kind of makes things complicated. And the fact that we have that we can have containers within containers. Uh, that's actually pretty powerful. Um, we're also moving to things where we have secure containers in containers now. So that's super interesting, like stuff like a SIS box, for instance. Um, when I saw that, that was really excited because, uh, one of the horrible things I did back in the days as Docker was privileged containers, precisely because we wanted to have Docker in Docker. >>And that was kind of opening Pandora's box. That's the right, uh, with the four, because privileged containers can do literally anything. They can completely wreck up the machine. Um, and so, but at the same time, they give you the ability to run VPNs and run Docker in Docker and all these cool things. You can run VM in containers, and then you can list things. So, um, but so when I saw that you could actually have kind of secure containers within containers, like, okay, there is something really powerful and interesting there. 
And I think for folks, well, precisely when you want to do development in containers, especially when you move that to the cloud, that kind of stuff becomes really important and interesting, because it's one thing to have my little dev thing on my local machine; it's another thing when I want to move that to a Swarm or Kubernetes cluster, and then very quickly I hit the wall of: oh, I need to have containers in my containers. And then having a runtime like that gets really intense. >>Interesting, yeah. And jumping back a bit: like you said, Jerome, at the base of it, a container is just a process with some operating system constructs wrapped around it, cgroups, namespaces, those types of things. But I think it's very important for our discussion that developers really understand this is just a process, just like a normal process when I spin up my local bash in my terminal and interact with it. A lot of the things we talk about are more for production runtimes, for securing containers, for isolating them. I don't know, I'll throw the question out to the panel: is that really relevant to us locally? Do we want to pull out all of those restrictions? What are the benefits of containers for development? Maybe that's a softball question, but I'd still love to hear your thoughts. Maybe I'll kick it over to you, Katie, would you kick us off a little bit with that? >>I'll try. So I was actually thinking of the previous answers, because maybe I can do a transition here. An interesting piece of trivia about containers: cgroups and namespaces have been within the Linux kernel since 2008, I think, which is more than ten years ago, yet containers have only become popular in the last few years. So it's the technology, but it's also about the organizations adopting the technology. I think the reason it got more popular now is that it became the business differentiator: organizations started to think, how can I deliver value to my customers as quickly as possible? So I think there are two lanes of progress here: the technology, but at the same time the organization and the culture, and both are essential for how we develop our applications locally. >>Again, when it's a single application, if you have just one component, maybe it's easier for you to run it locally and have a very simple testing environment that is sufficient; is a container necessary? Probably not. However, I think it matters more when you're thinking of the bigger picture: when we have an architecture with myriads of microservices at its basis, when it's something where you have to expose an API, for example, or you have to consume an API. These are the kinds of situations where you might need to think about a lightweight, containers-only local setup, to make sure you have at least a similar environment or configuration, so you can test some of the expected behavior. But I think the real testing starts from the dev cluster, or the dev environment.
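A quick way to see the "it's just a process" point for yourself, assuming a Linux host and standard Docker commands; the container name is arbitrary:

```bash
# Start a container, then look at it from both sides.
docker run -d --name justaprocess nginx:alpine

docker top justaprocess     # the processes inside, as Docker reports them
ps -ef | grep nginx         # the same nginx processes, visible from the host

docker rm -f justaprocess   # clean up
```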
However, at the same time, again, it's, it's more about, um, kind of understanding why you continue to see this, the thing, like, I don't say that you definitely need containers at all times, but there are situations when you have like, again, multiple services and you need to replicate them. It's just the place to, to, to work with these kind of, um, setups. So, um, yeah, really depends on what you're trying to develop here. Nothing very specific, unfortunately, but get your product and your requirements are going to define what you're going to work with. >>Yeah, no, I think that's a great answer, right. I think one of the best answers in, in software engineering and engineering in general as well, it depends. Right. It's things are very specific when we start getting down to the details, but yeah, generally speaking, you know, um, I think containers are good for development, but yeah, it depends, right. It really depends. Is it helping you then? Great. If it's hindering you then, okay. Maybe think what's, what's the hindrance, right. And are containers the right solution. I agree. 110% and, >>And everything. I would like absurd this too as well. When we, again, we're talking about the development team and now we have this culture where we have the platform and infrastructure team, and then you have your engineering team separately, especially when the regulations are going to be segregated. So, um, it's quite important to understand that there might be a, uh, a level of up-skilling required. So pushing for someone to use containers, because this is the right way for you to develop your application might be not, uh, might not be the most efficient way to actually develop a product because you need to spend some time to make sure that the, the engineering team has the skills to do so. So I think it's, it's, again, going back to my answers here is like, truly be aware of how you're trying to develop how you actually collaborate and having that awareness of your platform can be quite helpful in developing your, uh, your publication, the more importantly, having less, um, maybe blockers pushing it to a production system. >>Yeah, yeah. A hundred percent. Yeah. The, uh, the cultural issue is, is, um, within the organization, right. Is a very interesting thing. And it, and I would submit that it's very hard from top down, right. Pushing down tools and processes down to the dev team, man, we'll just, we'll just rebel. It usually comes from the bottom up. Right. What's working for us, we're going to do right. And whether we do it in the shadows and don't let it know, or, or we've conformed, right. Yeah. A hundred percent. Um, interesting. I would like to think a little bit in the future, right? Like, let's say, I don't know, two, three years from now, if, if y'all could wave a and I'm from Texas. So I say y'all, uh, if you all could wave a magic wand, what, what, what would that bring about right. What, what would, what would be the best scenario? And, and we just don't have to say containers. Right. But, you know, what's the best development environment and I'm going to kick it over to you, Jacob. Cause I think you hinted at some of that with some hybrid type of stuff, but, uh, yeah. Implies, they need to keep you awake. You're, you're, you're, uh, almost on the other side of the world for me, but yeah, please. 
>>Um, I think it's interesting, because you have this technology that's been brought over from production, so it's not necessarily the right or the normal basis for development. So I think there's going to be some sort of realignment or renormalization in terms of what the basis and the abstractions that we're using on a daily basis are. Images and containers as they exist now are really designed for production use cases, and even the ergonomics of opening a shell inside a container, I think, are not as polished or as smooth as they could be, because they've come from production. So I think it's important not to have people look at the technology as it exists now and say, okay, this is slightly rough around the edges, or it wasn't designed for this use case, and conclude there's never any way they could use it for their development workflows. >>It's something Docker's exploring now with the dev containers. It's a new and experimental paradigm, and it may not be what the final picture looks like; as you were saying, there's going to be a baseline and you'll add features to it or iterate on it. But I think that's what's interesting about it, because there are not a lot of things as developers that you get to play with that are genuinely new technology. If you're talking about things you're building to ship, you want to use tried-and-true components that are going to be reliable. But containers are at that interesting point where this is an established technology that's also being used in a way now that's completely different from what it was designed for. As hackers, I think that's an interesting opportunity to play with it, and I think what's going to happen is you're just going to see those production-designed knobs sanded down or redesigned for development. That's where I see it going. >>Yeah. And I think that's what I was trying to hint at earlier: just because all these things are there, does it actually mean we need them locally? Do they make sense? I agree a hundred percent. Anybody else? Jerome, what are your thoughts around that? And then I'll probably just ask all of you; I'd love to hear each of your thoughts on the future. >>I had a thought that's maybe unrelated, but I was wondering if we would see something on the side of energy efficiency in some way. Maybe it's just because I've been thinking a lot about climate change recently, and trying to reduce energy use and things like that. Perhaps it's also because I recently got a new laptop, which on paper is super awesome, but in practice, as soon as you try to have two Slack tabs and a Zoom call, it's super fast, but for 30 seconds. After 30 seconds it blows its thermal budget and slows down to a crawl. And I started to think: hmm, before, we were thinking about, okay, I don't have that much CPU available, so you have to be mindful about that.
>>And now I wonder how we are going to get to something similar to that, but where you try to save CPU cycles not just because you don't have that many CPU cycles, but more because you know you can't go super fast for super long when you are on one of these small laptops or tablets or phones — you have this thermal budget to take into account. And I wonder if, and how, there is something containers can do here. I guess it could be really interesting if they could do the equivalent of Docker top and Docker stats, and I could see how much these containers are using. I can already do that with powertop on Linux, for instance, process by process, so I'm thinking I could see what's the power usage of some containers (a rough sketch of reading per-container stats follows just below). And I wonder if, down the line, this is going to be something useful, or is this just silly because we can just map CPU usage to watts and forget about it. >>Yeah. Yeah. That's a super, super interesting perspective, for sure. I'm going to shut up because I want to make sure I give Johannes and Katie time. What are your thoughts on the future around, let's just say, you know, container development in general, right? You want to start, Katie? Johannes wants more time, I see. >>I'll try not to be too expansive here, but one of the things that we've touched upon earlier in the panel was multicloud strategy. And I was reading one of the recent data reports about the adoption of Kubernetes, and what you're starting to see there is that more and more organizations are thinking about a multicloud strategy, which means that you need to develop an application, or an infrastructure or a component, which will allow you to run this application be it on a public cloud, be it locally in a data center, and so forth. And here, when it comes to these kinds of problems, we come across open standards: this is where we require something which will allow us to execute our application or to run our platform in different environments. So when you're thinking about the application, or development of the application, one of the things that came out in 2019 was the OAM. >>I think it was KubeVela, which is an open-application-model-based project, which allows you to describe the way you would like your service to be executed in different environments. It doesn't need to be, well, developed specifically for Kubernetes. However, the open application model is specified so that it tries to cover multiple platforms — you will be able to execute your application anywhere you want to. So I think that's actually quite important, because it completely abstracts what is happening underneath; it completely abstracts notions such as containers or processes. It's just: I want this application and I want it to have this kind of behavior — for example, to scale under these conditions, or to be exposed on these endpoints, and so forth. And everything I would like to mention here is that maybe this transcends, again, the logistics of the application development, but it definitely will impact the way we run our applications. >>So one of the biggest, well, one of the new trends that is kind of gaining momentum now has been around Wasm. And this is, again, something which is trying to rethink what we have done with containers. 
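Before the conversation moves on, here is a rough illustration of the per-container stats idea raised just above: a minimal Python sketch using the Docker SDK. It only reports CPU and memory the way `docker stats` does — mapping that to watts, as discussed, would still need something like powertop or RAPL counters — and it assumes a local Docker daemon; nothing here comes from the panelists themselves.

```python
# Minimal sketch: per-container CPU/memory usage via the Docker SDK.
# This approximates `docker stats`; it does not measure power draw.
import docker

client = docker.from_env()

for container in client.containers.list():
    s = container.stats(stream=False)  # one snapshot per container

    cpu_delta = (s["cpu_stats"]["cpu_usage"]["total_usage"]
                 - s["precpu_stats"]["cpu_usage"]["total_usage"])
    system_delta = (s["cpu_stats"].get("system_cpu_usage", 0)
                    - s["precpu_stats"].get("system_cpu_usage", 0))
    online_cpus = s["cpu_stats"].get("online_cpus", 1)

    cpu_percent = (cpu_delta / system_delta) * online_cpus * 100.0 if system_delta > 0 else 0.0
    mem_mib = s["memory_stats"].get("usage", 0) / (1024 * 1024)

    print(f"{container.name:30s} cpu={cpu_percent:5.1f}%  mem={mem_mib:8.1f} MiB")
```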
Again, it's kind of a cyclical movement that we have here. When we moved from VMs to containers, it was a smaller footprint, we wanted better execution, and this agnosticism of the platforms. We have the same thing happening here with Wasm, but again, it ushers in a new dynamic, where again we shrink the footprint, we have better isolation of all the services, we have better portability of our services, and so forth. So there is great potential out there. And again, why I'm saying this is: some of these technologies are going to define the way we're going to do our development of the application on our local environment. >>That's why it's important to maybe keep an eye there, and maybe see if some of those principles of some of those technologies we can bring internally as well. And just a final thought here: security has been mentioned as well. I think it's something which has been at the forefront, especially when it comes to containers, especially when it comes to enterprise organizations and those who are regulated, which I feel are very comfortable running their application within a VM, where you have the full isolation, where you have complete control of what's happening inside that compute. So, again, security has been at the forefront at the moment. I know it has been mentioned in the panel before; I'd like to mention that we have the security white paper which has been published, and we have the software supply chain white paper as well, which tries to figure out or define some of these good practices — again, which you can already apply from your development environment and then propagate them to production. So I'm just going to leave it with all of these. That's all. >>That's awesome. And yeah, Wasm is very, very interesting. I saw the other day — and I forget who it was, maybe y'all can remember — you know, running the Node engine inside of, you know, Wasm inside of a browser. Right. And at first glance I said, well, we already have a JavaScript execution engine, right? And it's kind of like Docker in Docker: you have the browser, then you have Wasm, and then you have Node, you know, a JavaScript runtime. And what I didn't understand was whether Wasm was, you know, actually executing JavaScript — and it's not — but yeah, it's super interesting, super powerful. I always felt that the browser was Java's "write once, run anywhere" kind of solution, right, that never came about — they were thinking of set-top TV boxes and stuff like that, which is interesting. >>I don't know if y'all know some of the history of Java, but yeah, Wasm — I'm not sure how to correctly pronounce it — is extremely interesting because of the isolation and that sandboxing, right, and running powerful languages that we're used to inside of a more isolated environment. Right. And it's almost — yeah, I think I've mentioned it before — like containers inside of containers, right. Um, yeah. So Johannes, hopefully I gave you enough time. I delayed as much as I can, my friend. You better — just kidding, I'm just kidding, please, please. 
>>It was, by the way, StackBlitz — they worked together with Google and others on developing the WebContainers, as it's called. It's quite interesting, the research they're doing there. Yeah. I mean, what we believe, and I also believe, is that probably somebody is going to do to dev environments what Docker did to servers — at least the good part — and we hope that somebody will be us. So what we mean by that is that we think today we are still somehow emotionally attached to our dev environments, right? We give them names, we massage them over time, which can also have its benefits, but they're still pets in some way. Right. And we believe that environments in the future will be treated similarly to servers today: as automated resources that you can just spin up and close down whenever you need them (a rough sketch of that spin-up-and-tear-down flow appears after the close of this panel). >>Right. And this trend, essentially, that you also see in serverless — if you look at what Netlify is doing a bit with preview environments, what Vercel is doing there — we believe will also arrive at dev environments. It probably won't be there tomorrow, so it will take some time, because there's also, you know, emotion involved in that transition. But we ultimately really believe that provisioning dev environments also in the cloud allows you to leverage the power of the cloud and to essentially build all the stuff that you need in order to work, in advance. Right? So that it's literally either a command or a button: either, I don't know, a command that spins up your local VS Code and SSHes into a container, or you do it in a browser — that will be the way that professional development teams will develop in the future. Probably — let's see, in our vision document we say it's 2023. Let's see if that holds true. >>Okay, let's just say we have a friendly bet. I don't know how that's going to be settled, but, um, yeah, I agree. You know, my thought around it is: it's hard, right? These are hard problems. And what problems do you tackle first, right? Do you tackle day one of development, right — I joined a team, hey, here's your machine, and you have Docker installed, and there you go, pull down your environment. Right. Is that necessarily just an image? You know, what exactly is that? Sure, containers are involved. Right. But, I mean, you've probably all gone through it: you join a team, a new project, even an open-source project — there's a huge hurdle just to get everything configured, to get everything installed, to get it up and running, you know, setting aside understanding the code base. >>'Cause that's a different issue. Right. But just getting everything running locally — and to your point earlier, Jacob, around recreating local production queues and environments and, you know, GPUs or anything like that — it's extremely hard. You can't do a lot of that locally. Right. So I think that's one of the things I'd love to see tackled, and I think that's where we're tackling it in dev environments with Docker. But then, now, how do you become productive? Right. And where do we go from there? And I would love to see this kind of hybrid — you guys have all been talking about it — where, yes, I have everything configured locally on my nice, you know, Apple notebook. Right. And then, you know, I go with the family and we go on vacation. 
I don't want to drag this 16 inch, you know, Mac laptop with me. >>And I want to take my nice iPad with the magic keyboard and all the bang stuff. Right. And I just want to fire up and I pick up where I left off. Right. And I keep coding and environment feels, you know, as much as it can that I'm still working at backup my desktop. I think those, those are very interesting to me. And I think reproducing, uh, the production running runtime environments as close as possible, uh, when I develop my, I think that's extremely powerful, extremely powerful. I think that's one of the hardest things, right. It's it's, uh, you know, we used to say, we, you debug in production. Right. We would launch, right. We would do, uh, as much performance testing as possible. But until you flip that switch on a big, on a big site, that's where you really understand what is going to break. >>Right. Well, awesome. I think we're just about at time. I really, really appreciate everybody joining me. Um, it's been a pleasure talking to all of you. We have to do this again. If I, uh, hopefully, you know, I I'm in here in America and we seem to be doing okay with COVID, but I know around the world, others are not. So my heart goes out to them, but I would love to be able to get out of here and come see all of you and meet you in person, maybe break some bread together. But, um, again, it was a pleasure talking to you all, and I really appreciate you taking the time. Have a good evening. Cool. >>Thanks for having us. Thanks for joining us. Yes.
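Picking up Johannes's earlier point about treating dev environments as automated resources rather than pets, here is a small hedged sketch using the Docker SDK: it spins up a throwaway container with a project checkout mounted, runs a command in it, and tears it down. The image name, repository path, and test command are placeholders of my own, not anything the panelists prescribed.

```python
# Sketch: an ephemeral dev environment — spin up, use, throw away.
import docker

client = docker.from_env()

dev = client.containers.run(
    image="python:3.11-slim",          # placeholder base image
    command="sleep infinity",          # keep the container alive while we use it
    volumes={"/home/me/my-project": {"bind": "/workspace", "mode": "rw"}},  # placeholder path
    working_dir="/workspace",
    detach=True,
    name="ephemeral-dev",
)

try:
    # Run whatever you would normally run in your local environment.
    exit_code, output = dev.exec_run("python -m pytest -q")
    print(output.decode())
finally:
    dev.remove(force=True)  # nothing to get attached to — cattle, not a pet
```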

Published Date : May 28 2021



Brian Bouchard, Alacrinet Consulting Services | IBM Think 2021


 

>> From around the globe, It's theCUBE. With digital coverage of IBM Think 2021, brought to you by IBM. >> Hi, welcome back to theCUBE's coverage of IBM Think 2021 virtual. I'm John Furrier host of the CUBE. We got a great guest here. Brian Bouchard is the co-founder president and CEO of Alacrinet. Brian great to see you remoting in all the way from Puerto Rico to Palo Alto. >> That's right. >> Great to see you. >> Thanks for First of all, thanks John, for having me. I really appreciate the opportunity. >> Yeah, great to see you. Thanks for coming on. First of all, before we get into what you guys do and and how this all ties in to Think. What do you guys do at Alacrinet? Why the name? A it's good you're at the top of the list and alphabetically, but tell us the, the, the the secret behind the name and what you guys do. >> So, first of all Alacrinet is based on the root word alacrity which means a prompt and willing, a prompt a joyous prompt to, excuse me, to achieve a common goal. So we ultimately are a network of individuals with the traits of alacrity. So Alacrinet. So that's our name. >> Great. So what's your relationship with IBM and how you guys have been able to leverage the partnership program in the marketplace? Take us through the relationship. >> So, well, first of all Alacrinet is a platinum IBM business partner and it was awarded recently the 2020 IBM North American partner of the year award. And we were selected amongst 1600 other business partners across North America. We've been actually a consulting, an IT consulting company for almost 20 years now. And we were founded in 2002 in Palo Alto and we have focused specifically on cyber security since 2013. And then as part, go ahead. >> What are some of the things that you guys are working on? Because obviously, you know, the business is hot right now. Everyone's kind of looking at COVID saying we're going to double down on the most critical projects and no time for leisurely activities when it comes to IT. And cloud scale projects, you know mission critical stuff's happening what are you guys working on? >> So we're, we're focused on cybersecurity, our security services really compliment IBM's suite of security solutions and cover the full spectrum from our research and penetration testing, which helps identify vulnerabilities before a breach occurs. And we also have managed security services which helps prevent, detect and remediate attacks in real time. And then finally, we also have a security staffing division and a software resell division, which kind of rounds out the full amount of offerings that we have to provide protection for our clients. >> What are some of the biggest challenges you guys have as a business, and how's IBM helping you address those? >> Well, as you know, John, we all know the importance of cybersecurity in today's world, right? So it's increasing in both demand and importance and it's not expected to wane anytime soon. Cyber attacks are on the rise and there's no there's no expected end in sight to this. And in fact, just this week on 60 minutes, Jay Powell, the chairman of the federal reserve board he noted that cyber attacks were the number one threat to the stability of the US economy. Also this week, a public school in Buffalo New York was hacked with ransomware and the school you know, this, the school district is just contemplating you know, paying the ransom to the hackers. 
So there's literally thousands of these attacks happening every day, whether it's in local school district or a state government, or an enterprise even if you don't hear about them, they're happening In adding to the complexity that the cyber attackers pose is the complexity of the actual cybersecurity tools themselves. There isn't a single solution provider or a single technology, that can ensure a company's security. Our customers need to work with many different companies and disconnected tools and processes to build an individual strategy that can adequately protect their organizations. >> You know, I love this conversation whenever I talk to practitioners on cybersecurity, you know that first of all, they're super smart, usually cyber punks and they also have some kinds of eclectic backgrounds, but more importantly is that there's different approaches in terms of what you hear. Do you, do you put more if you add more firefighters, so to speak to put out the fires and solve the problems? Or do you spend your time preventing the fires from happening in the first place? You know, and you know, the buildings are burning down don't make fire fire, don't make wood make fire resistance, you know, more of a priority. So there's less fires needing firefighters So it's that balance. You throw more firefighters at the problem or do you make the supply or the material the business fireproof, what's your take on that? >> Yeah, well, it kind of works both ways. I mean, we've seen customers want it. They really want choice. They want to, in some cases they want to be the firefighter. And in some cases they want the firefighter to come in and solve their problems. So, the common problem set that we're seeing with our that our customers encounter is that they struggle one, with too many disparate tools. And then they also have too much data being collected by all these disparate tools. And then they have a lack of talent in their environment to manage their environments. So what we've done at Alacrinet is we've taken our cybersecurity practice and we've really specifically tailored our offerings to address these core challenges. So first, to address the too many disparate tools problem, we've been recommending that our clients look at security platforms like the IBM Cloud Pak for security the IBM Cloud Pak for security is built on a security platform that allows interoperability across various security tools using open standards. So our customers have been responding extremely positively to this approach and look at it as a way to future-proof their investments and begin taking advantage of interoperability with, and, tools integration. >> How about where you see your business going with this because, you know, there's not a shortage of need or demand How are you guys flexing with the market? What's the strategy? Are you going to use technology enablement? You're going to more human driven. Brian, how do you see your business unfolding? >> Well, actually really good. We're doing very well. I mean, obviously we made the, the top the business partner for IBM in 2020. They have some significant growth and a lot of interest. I think we really attack the market in a, in a with a good strategy which was to help defragment the market if you will. There's a lot of point solutions and a lot of point vendors that various, you know, they they spent specialized in one piece of the whole problem. And what we've decided to do is find them the highest priority list, every CSO and CIO has a tick list. 
So that how that, you know, first thing we need we need a SIM, we need an EDR, we need a managed service. We need, what's the third solution that we're doing? So we, we need some new talent in-house. So we actually have added that as well. So we added a security staffing division to help that piece of it as well. So to give you an idea of the cybersecurity market size it was valued at 150 billion in 2019 and that is expected to grow to 300 billion by 2027. And Alacrinet is well-positioned to consolidate the many fragmented aspects of the security marketplace and offer our customers more integrated and easier to manage solutions. And we will continue to help our customers select the best suite of solutions to address all types of cybersecurity, cybersecurity threats. >> You know, it's it's such a really important point you're making because you know, the tools just have piled up in the tool shed. I call it like that. It's like, it's like you don't even know what's in there anymore. And then you've got to support them. Then the world's changed. You get cloud native, the service areas increasing and then the CSOs are also challenged. Do I, how many CLAWs do I build on? Do I optimize my development teams for AWS or Azure? I mean, now that's kind of a factor. So, you have all this tooling going on they're building their own stuff they're building their own core competency. And yet the CSO still needs to be like maintaining kind of like a relevance list. That's almost like a a stock market for the for the products. You're providing that it sounds like you're providing that kind of service as well, right? >> Yeah, well, we, we distill all of the products that are out there. There's thousands of cybersecurity products out there in the marketplace and we kind of do all that distillation for the customer. We find using, you know, using a combination of things. We use Forrester and Gartner and all the market analysts to shortlist our proposed solutions that we offer customers. But then we also use our experience. And so since 2013, we've been deploying these solutions across organizations and corporations across America and we've, we've gained a large body of experience and we can take that experience and knowledge to our customers and help them, you know, make make some good decisions. So they don't have to, you know, make them go through the pitfalls that many companies do when selecting these types of solutions. >> Well congratulations, you've got a great business and you know, that's just a basic search making things easier for the CSO, more so they can be safe and secure in their environment. It's funny, you know, cyber warfare, you know the private companies have to fight their own battles got to build their own armies. Certainly the government's not helping them. And then they're confused even with how to handle all this stuff. So they need, they need your service. I'm just curious as this continues to unfold and you start to see much more of a holistic view, what's the IBM angle in here? How, why are you such a big partner of theirs? Is it because their customers are working with you they're bringing you into business? Is it because you have an affinity towards some of their products? What's the connection with IBM? >> All of the above. (chuckles) So I think it probably started with our affinity to IBM QRadar product. And we have, we have a lot of expertise in that and that solution. So that's, that's where it started. 
And then I think IBM's leadership in this space has been remarkable, really. So like what's happening now with the IBM Cloud Pak for security you know, building up a security platform to allow all these point solutions to work together. That's the roadmap we want to put our customers on because we believe that's the that's the future for this, this, this marketplace. >> Yeah. And the vision of hybrid cloud having that underpinning be with Red Hat it's a Linux kernel, model of all things >> Yeah. Super NetEase. >> Locked in >> It's portable, multiple, you can run it on Azure. IBM Cloud, AWS. It's portable. I mean, yeah, all this openness, as you probably know cyber security is really a laggard in the security in the information technology space as far as adopting open standards. And IBM is I think leading that charge and you'll be able to have a force multiplier with the open standards in this space. >> Open innovation with open source is incredible. I mean, if you, if, if if open source can embrace a common platform and build that kind of control plane and openness to allow thriving companies to just build out then you have an entire hybrid distributed architecture. >> Yeah. Well, I think companies want to use the best in breed. So when we, when we show these solutions to customers they want the best in breed. They always say, I don't, when it comes to security they don't want second best. They want the best it's out there because they're securing their crown jewels. So that makes sense. So the problem with, you know having all these different disparate solutions that are all top in their category none of them talk to each other. So we need to address that problem because without that being solved, this is just going to be more it's going to compound the complexity of the problems we solve day to day. >> Awesome. Congratulations, Brian, great story. You know entrepreneur built a great business over the years. I think the product's amazing. I think that's exactly what the market needs and just shows you what the ecosystem is all about. This is the power of the ecosystem. You know, a thousand flowers are blooming. You got a great product. IBM is helping as well. Good partnership, network effects built in and and still a lot more to do. Congratulations. >> Absolutely. >> Okay. >> Thank you very much >> Brian Bouchard >> Made my impression. I appreciate that >> Thanks for coming on theCUBE Appreciate it. I'm John Furrier with IBM thinks 2021 virtual coverage. Thanks for watching. (outro music plays)

Published Date : May 12 2021



Matt Hicks, Red Hat | Red Hat Summit 2021 Virtual Experience


 

>>Hello and welcome back to theCUBE's coverage of Red Hat Summit 2021 virtual. I'm John Furrier, your host of theCUBE, and CUBE coverage here with Matt Hicks, executive vice president of products and technologies at Red Hat — a CUBE alum who's been on many times, knows the engineering side, now running all the products and technologies. Matt, great to see you. Thanks for coming on remote. I wish we were in real life, in person — IRL — but doing it remote again. Thanks for coming on. >>Hey, thanks for having me today. >>Hey, so what a year, you know. I was just talking to a friend, and in another interview with one of your Red Hat colleagues on your team — in 2019 I interviewed Arvind at IBM right before he bought Red Hat, and you could see the smile on his face, and he wasn't even CEO then. He is such a big fan of cloud native, and you guys have been the engine underneath the hood, if you will, of IBM's transformation. Huge push now, and with Covid, and now with the visibility of the post-Covid world, you're seeing cloud native at scale with modern applications just highly accelerated across the board in almost every industry, every vertical. This is a very key trend. You guys have always been at the center of it — we've been covering you for many years — interesting time, and now you guys have really got the formula at Red Hat. Take us through the key trends you see on this wave for enterprises and how Red Hat is taking that through. >>Yeah, no, absolutely. It has been a great ride, actually. I remember a couple years ago standing on stage with Arvind prior to the acquisition, so it's been a whirlwind one. But I think if we look at what really emerged in 2020, we've seen three trends that we hope are going to carry through in 2021, just in a better and better year for that. The first is open hybrid cloud is really how customers are looking to adapt to change. They have to use what they have, the assets they have today on premise; we're seeing a lot of public cloud adoption; and that blend of being hybrid is just a reality for how customers are having to deliver. Edge computing I think is another area — I would say the trend is really not going to be a fad. The capabilities of computing at the edge, whether that is automotive vehicles or radio access network capabilities for 5G, it's pretty astounding at this point. So I think we're going to see a lot of pushing edge computing, computing getting closer to users. But then also the choice aspect we're seeing with CIOs: we often talk about technology as choice, but I think the model of how they want to consume technology has been another really strong trend in 2020. We look at this really as being able to deliver cloud managed services in addition to technology that CIOs run themselves. But those will probably be the three that stand out to me, at least, in 2020. >>So Matt, take us through, in your mind and from Red Hat's perspective, the workloads that are going to be highlighted in this cloud native surge that's happening. We're seeing it everywhere. You mentioned edge — industrial edge to consumer edge to lightweight edge — massive new workloads. So take us through how you see kind of the existing workloads evolving and potentially new workloads emerging. >>Yeah. 
So I think um you know first when you talk about edge workloads a big umbrella but if you look at data driven workloads, especially in the machine learning artificial intelligence spectrum of that, that's really critical. And a reason that those workloads are important is five G. Aside for now when you're running something at the edge you have to also be able to make decisions pretty well at the edge. And that that is that's where your data is being generated and the ability to act on that closely. Whether that's executing machine learning models or being able to do more than that with A I. That's going to be a really really critical workload. Uh huh. Coupled to that, we will see I think five G. Change that because you're going to see more blending in terms of what can you draw back to uh closer to your data center to augment that. So five G will shift how that's built but data driven workloads are going to be huge then I think another area will see is how you propagate that data through environment. Some Kafka has been a really popular technology will actually be launching a service in relation to that. But being able to get that data at the edge and bring it back to locations where you might do more traditional processing, that's going to be another really key space. Um and then we'll still have to be honest, there is still a tremendous amount of work loads out there that just aren't going to get rebuilt. And So being able to figure out how can you make them a little more cloud native? You know, the things your companies have run on for the last 20 years, being able to step them closer to cloud native, I think it's going to be another critical focus because he can't just rewrite them all in one phase and you can't leave them there as well. So being able to bridge shadow B T to >>what's interesting if folks following red hat, No, no, you guys certainly at the tech chops you guys have great product engineering staff been doing this for a long time. I mean the common Lennox platform that even the new generation probably have to leave it load limits on the server anymore. You guys have been doing this hybrid environment in I T for I T Sloan for decades. Okay. In the open, so, you know, it's servers, virtualization, you know, private, public cloud infrastructures and it's been around, we've been covering it in depth as you know, but that's been, that's a history. But as you go from a common Lennox platform into things with kubernetes as new technologies and this new abstraction layers, new control plane concept comes to the table. This need for a fully open platform seems to be a hot trend this year. >>How do you >>describe that? Can you take a minute to explain what this is, this is all about this new abstraction, this new control plane or this open hybrid cloud as you're calling? What is this about? What does it mean? >>Yeah, no, I'll do a little journey that she talked about. Yeah. This has been our approach for almost a decade at this point. And it started, if you look at our approach with Lennox and this was before public clouds use migrants existed. We still with Lennox tried to span bare metal and virtualized environments and then eventually private and public cloud infrastructure as well. And our goal there was you want to be able to invest in something, um, and in our world that's something that's also open as in Lennox but be able to run it anywhere. That's expanded quite a bit. That was good for a class of applications that really got it started. 
That's expanded now to Kubernetes, for example. Kubernetes is taking that from single machines to cluster-wide deployments, and it's really giving you that secure, flexible, fast innovation backbone for cloud native computing. And the balance there is it's not just for cloud native — we've got to be able to run traditional and emerging workloads, and our goal is to let those things run wherever RHEL can. So because you're based on open technologies, you can run them wherever you have resources to run. And then I think the third part of this for us is, having that choice and ability to run anywhere but not being able to manage it can lead to chaos or sprawl, and so our investments in our management portfolio — and this is from Insights to Red Hat Advanced Cluster Management to our cluster security capabilities or Ansible — our focus has been securing, managing and monitoring those environments, so you can have a lot of them, you can run where you want, but you just sort of treat it as one thing. So that's our vision, and how we've executed up to this point has really been centered around that. I think going forward, where you'll see us really try to focus is, you know, first you heard Paul announce earlier that we're donating more than half a billion dollars to open hybrid cloud research, and part of the reason is that running services — cloud native services — is changing, and that research element of open source is incredibly powerful. We want to make sure that's continuing, but we're also going to evolve our portfolio to support this same drive. A couple of areas I would call out: we're launching Red Hat OpenShift Platform Plus, and I talked about that combination from RHEL to OpenShift to being able to manage it — we're really putting that in one package. So you have the advanced management: if you have huge suites of cloud native real estate there, you can manage that. And it also pushes security earlier into the application build workflows. This is tied to some of our technology bolstered by the StackRox acquisition that we did. Being able to bring that into one product offering I think is really key to address the security and management side. We've also expanded Red Hat Insights beyond RHEL to include OpenShift and Ansible, and this is really targeted at: how do we make this easier? How do we let customers lean on our expertise? Not just for Linux as a service, but expand that to all of the things you'll use in a hybrid cloud. And then of course we're going to keep pushing Linux innovation — you'll see this with the latest version of Red Hat Enterprise Linux — so we're going to push barriers, lower barriers to entry. But we're also going to be the innovation catalyst for new directions, including things like edge computing. So hopefully that sort of helps in terms of where we started when it was just Linux, and then all the other pieces we're bringing to the table and why, and some new areas where we're launching our investment going forward. >>Yeah, great, that's a great overview. Thanks for taking the time to do that. I think one of the areas that's jumping out at me is the advanced cluster management work you guys are doing — I saw that with the security piece — and also Red Hat Insights, I think, is another key one, and that gets to the edge. But on the insights, you mentioned at the top of this interview data workloads pretty much being, I mean, pretty much everything — much more of an emphasis on data. Data in general, but also, you know, observability is a hot area. 
You know, you guys run operating system so you know, in operating systems you need to have the data, understand what's being instrumented. You gotta know that you've got to have things instrument and now more than ever having the data is critical. So take us through your vision of insights and how that translates. Because he said mentions in answerable you're seeing a lot more innovations because Okay I got provisions everything that's great. Cloud and hybrid clouds. Good. Okay thumbs up everyone check the box and then all of a sudden day too As they call day two operations stuff starts to, you know, Get getting hairy, they start to break. Maybe some things are happening. So day two is essentially the ongoing operational stability of cloud native. You need insights, you need the data. If you don't have the data, you don't even know what's going on. You can't apply machine learning. It's kind of you if you don't get that flywheel going, you could be in trouble. Take me through your vision of data driven insights. >>Yeah. So I think it's it's two aspects. If you go to these traditional traditional sport models, we don't have a lot of insight until there's an issue and I'm always amazed by what our teams can understand fix, get customers through those and I think that's a lot of the success red hats had at the same note, we want to make that better where if you look at real as an example, if we fixed an issue for any customer on the planet of which we fix a lot in the support area, we can know whether you're going to hit that same issue or not in a lot of cases and so that linkage to be able to understand environments better. We can be very proactive of not just hey apply all the updates but without this one update, you risk a kernel panic, we know your environment, we see it, this is going to keep you out of that area. The second challenge with this is when things go do break or um are failing the ability to get that data. We want that to be the cleanest handshake possible. We don't want to. Those are always stressful times anyway for customers being able to get logs, get access so that our engineering knowledge, we can fix it. That's another key part. Uh when you extend this to environments like open shift things are changing faster than humans can respond in it. And so those traditional flows can really start to get strained or broken broken down with it. So when we have connected open shift clusters, our engineering teams can not only proactively monitor those because we know cooper net is really well. We understand operators really well. Uh we can get ahead of those issues and then use our support teams and capabilities to keep things from breaking. That's really our goals. Finding that balance where uh we're using our expertise in building the software to help customers stay stable instead of just being in a response mode when things break >>awesome. I think it's totally right on the money and data is critical in all this. I think the trust of having that partnership to know that this pattern recognition is gonna be applied from the environment and that's been hurting the cybersecurity market people. That's the biggest discussion I had with my friends and cyber is they don't share the data when they do, things are pretty obvious. Um, so that's good stuff there and then obviously notifications proactive before there's a cause or failure. Uh great stuff. This brings up a point that paul come here, said earlier, I want to get your reaction to this. He said every C. I. O. 
Is now a cloud operator. >>That's a pretty bold >>statement. I mean, that simply means that it's all cloud, all the time. You know? Again, we've been saying this on theCUBE for many years — cloud first, whatever people want to call it — >>what does that actually >>mean? Cloud operator — does that just mean everything's hybrid, everything's multicloud? Take me through and unpack what that actually means. >>Yeah. So I think for the CIO, a lot of times it was largely a technology choice; that was sort of the choice available to them. And especially if you look at what public clouds have introduced, it's not just a technology choice. You're not just picking Kafka anymore, for example — you really get to make the choice of: do I want to differentiate my business by running it myself, or is this just technology I want to consume, and I'm going to consume a cloud native service, and other challenges come with that — it's infrastructure not in your control. But when you think about a CIO, of the axes they're making decisions on, there are more capabilities now, and I think this is really crucial to let the CIO hone in on where they want to specialize: what do they want to consume, what do they really want to understand, differentiate and run? And to support this, actually, in this vein, we're going to be launching three new managed cloud services. Our focus is always going to be hybrid in these, but we understand the importance of having managed cloud services that Red Hat is running, not the customers, in this case. So one of those will be Red Hat OpenShift Streams for Apache Kafka. We've talked about that data connectivity and the importance of it, and really being able to connect apps across clouds, across data centers, using Kafka, without having to push developers to really specialize in running it, is critical — because that is your hybrid data. It's going to be generated on prem, it's going to be generated at the edge; you need to be able to get access to it. The next challenge for us is, once you have that data, what do you do with it? And we're launching a Red Hat OpenShift Data Science cloud service, and this is going to be optimized for understanding the data that's brought in by Streams. It doesn't matter whether it's an AI service or a business intelligence process, and in this case you're going to see us leverage our ecosystem quite a bit, because that last mile of AI workloads or models will often be completed with partners. But this is a really foundational service for us to get data in and then bring that into a workflow where you can understand it. And then the last one for us is Red Hat OpenShift API Management, and you can think of this as really the overseer of how apps are going to talk to services. These environments are complex, they're dynamic, and being able to provide that oversight — how should my apps be consuming all these APIs, how should they be talking, how do I want to control and understand that — is really critical. So we're launching these three, and it fits in that cloud operator use case: we want to give three options where you might want to use Kafka and 3scale technologies and Open Data Hub, which was the basis of OpenShift Data Science, but you might not want to specialize in running them — so we can run those for you, and give you as a CIO that choice of where you want to invest in running versus just using it. 
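As a hedged sketch of what "connecting an app to a managed Kafka service" can look like from the application side, here is a minimal producer using the kafka-python client. The bootstrap address, credentials, SASL mechanism, and topic name are placeholders of my own — the real connection details for a service like OpenShift Streams would come from its provisioning flow, not from this example.

```python
# Sketch: publish an event to a managed Kafka endpoint (connection details are placeholders).
import json
from kafka import KafkaProducer  # kafka-python

producer = KafkaProducer(
    bootstrap_servers=["my-kafka-bootstrap.example.com:443"],  # placeholder endpoint
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",                    # managed services may use PLAIN or OAUTHBEARER
    sasl_plain_username="service-account-id",
    sasl_plain_password="service-account-secret",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# An edge-generated reading being pushed back toward the data center.
producer.send("edge-events", {"site": "plant-3", "sensor": "temp", "value": 42.7})
producer.flush()
producer.close()
```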
>>All right, we're here with matt Hicks whose executive vice president prospect technology at red hat, matt, your leader at red hat now part of IBM and continues to operate um in the red hat spirit, uh innovating out in the open, people are wearing their red hat uh hoodies, which has been great to see. Um I ask every executive this question because I really want to get the industry perspective on this. Um you know, necessity is the mother of invention as the saying goes and, you know, this pandemic was a challenge for many In 2020. And then as we're in 2021, some say that even in the fall we're gonna start to see a light at the end of the tunnel and then maybe back to real life in 2022. This has opened up huge visibility for CSOS and leaders and business in the enterprise to say, Hey, what's working, what do we need? We didn't prepare for everyone to be working at home. These were great challenges in 2020. Um, and and these will fuel the next innovations and achievements going forward. Um again necessity is the mother of all invention. Some projects are gonna be renewed and double down on some probably won't be as hybrid clouds and as open source continues to power through this, there's lessons to be learned, share your view on what um leaders in in business can do coming out of the pandemic to have a growth strategy and what can we learn from this pandemic from innovation and and how open source can power through this adversity. >>Yeah. You know, I think For as many challenging events we had in 2020, I think for myself at least, it it also made me realize what companies including ourselves can accomplish if we're really focused on that if we don't constrain our thinking too much, we saw projects that were supposed to take customers 18 months that they were finishing in weeks on it because that was what was required to survive. So I think part of it is um, 2020 broke a lot of complacency for us. We have to innovate to be able to put ourselves in a growth position. I hope that carries into 2021 that drives that urgency. When we look at open source technologies. I think the flexibility that it provides has been something that a lot of companies have needed in this. And that's whether it could be they're having to contract or expand and really having that moment of did the architectural choices, technology choices, will they let me respond in the way I need? Uh, I'm biased. But first I think open models, open source development Is the best basis to build. That gives you that flexibility. Um, and honestly, I am an optimist, but I look at 2021, I'm like, I'm excited to see what customers build on sort of the next wave of open innovation. I think his life sort of gets back to normal and we keep that driving innovation and people are able to collaborate more. I hope we'll see a explosion of innovation that comes out and I hope customers see the benefit of doing that on a open hybrid cloud model. >>No better time now than before. All the things are really kind of teed up and lined up to provide that innovation. Uh, great to have you on the cube. Take a quick second to explain to the folks watching in the community What is red hat 2021 about this year? And red hat someone, I'll see. We're virtual and we're gonna be back in a real life soon for the next event. What's the big takeaway this year for the red hat community and the community at large for red hat in context of the market? >>You know, I think redhead, you'll keep seeing us push open source based innovation. 
There are some really exciting spaces, whether that is getting closer and closer towards edge, which opens up incredible opportunities, or providing that choice even down to the consumption model, like cloud managed services. And it's in that drive to let customers have the tools to build the next incredible innovations for them. That's what Summit 2021 is going to be about for us. >>Awesome. And congratulations to the entire team for the donation to the academic community, the open cloud initiative. These things are going to promote this next generation of SREs and large cloud-scale operators and developers. So congratulations on that — props. >>Thanks, John. >>Okay, Matt Hicks, executive vice president of products and technologies at Red Hat, here on theCUBE's coverage of Red Hat Summit 2021 virtual. I'm John Furrier. Thanks for watching.

Published Date : Apr 28 2021


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

Matt HicksPERSON

0.99+

2020DATE

0.99+

LennoxORGANIZATION

0.99+

John FerrierPERSON

0.99+

2022DATE

0.99+

2019DATE

0.99+

2021DATE

0.99+

matt HicksPERSON

0.99+

Patrick KafkaPERSON

0.99+

johnPERSON

0.99+

threeQUANTITY

0.99+

18 monthsQUANTITY

0.99+

ArvinPERSON

0.99+

todayDATE

0.99+

second challengeQUANTITY

0.99+

two aspectsQUANTITY

0.99+

more than half a billion dollarsQUANTITY

0.99+

third partQUANTITY

0.98+

CSOSORGANIZATION

0.98+

one productQUANTITY

0.98+

paulPERSON

0.98+

firstQUANTITY

0.98+

KafkaTITLE

0.98+

RedheadORGANIZATION

0.98+

three optionsQUANTITY

0.98+

one packageQUANTITY

0.98+

Red HatORGANIZATION

0.97+

this yearDATE

0.96+

kernelTITLE

0.96+

one phaseQUANTITY

0.96+

decadesQUANTITY

0.96+

pandemicEVENT

0.95+

red hatORGANIZATION

0.95+

red hatORGANIZATION

0.95+

mattPERSON

0.95+

ceosORGANIZATION

0.93+

Red Hat Summit 2021EVENT

0.93+

CeosORGANIZATION

0.93+

red hat summit 2021EVENT

0.93+

oneQUANTITY

0.92+

Cube CubeCOMMERCIAL_ITEM

0.91+

one thingQUANTITY

0.9+

singleQUANTITY

0.89+

day twoQUANTITY

0.87+

secondQUANTITY

0.85+

last 20 yearsDATE

0.84+

waveEVENT

0.84+

three new managed cloud servicesQUANTITY

0.83+

RehlORGANIZATION

0.81+

KafkaPERSON

0.79+

couple years agoDATE

0.78+

Executive vice presidentPERSON

0.78+

ScaleTITLE

0.77+

red hatTITLE

0.76+

five GTITLE

0.75+

A Brief History of Quasi Adaptive NIZKs


 

>>Hello, everyone, this is Arnab Roy from Fujitsu Labs of America. I'm going to talk about the motivation for zero knowledge, which goes back to the heart of identity, ownership, community and control. Much of cryptography exists today to support controlled communications among individuals in the online world. We also consider devices as extensions of individuals, and corporations as communities. You can see how these fit into this picture. What defines the boundary of an individual is the ability to hold a secret, which may be attached to the ownership of some asset. We want the ability to use the secret to prove ownership of this asset. However, giving up the secret itself essentially relinquishes ownership, since then anybody else can do the same. Zero knowledge gives us tools to prove ownership without revealing the secret. The notion of proving ownership of a digital object without revealing it sounds very paradoxical outside the world of cryptography, so it was a pleasant surprise when this notion was formalized and constructed by Goldwasser, Micali and Rackoff in the late eighties. We'll focus on the non-interactive version of zero knowledge, or NIZKs, in this talk, which was first developed by Blum, Feldman and Micali: where the general version can span multiple rounds of communication, a NIZK only allows a single message to be transmitted. Now, let's get into some technical details of NIZKs. The objective of a NIZK is to show that an object X, which you can think of as the public footprint of an asset, belongs to a language, without revealing its witness W, which you can think of as the secret. The system consists of three algorithms: key generation, prove and verify. The key generation process is executed by a trusted third party at the very outset, resulting in a common reference string, or CRS, which is made public. The prover produces a proof based on the CRS, X and W, and the verifier then checks the proof against X and accepts or rejects. A NIZK of course has to satisfy some properties. We need it to be complete, which basically says that when everyone follows the protocol correctly, the proof is accepted. We need it to be sound, which says that a false statement cannot be proven. Zero knowledge is a trickier property to formalize: how do we capture the intuition that the proof reveals no knowledge of the witness? One way to capture that is to imagine there are two worlds: the real world, where the proof is calculated using the witness, and a simulation world, where the proof is calculated without a witness. To make this possible, the simulator may have some extra information about the CRS, which is independent of the objects. The property then requires that it is not possible to efficiently distinguish these worlds. Now, it is especially challenging to construct NIZKs compared to encryption or signature schemes: in signature schemes the analog of the prover can use a secret, and in encryption the analog of the verifier can use a secret, but in NIZKs neither the prover nor the verifier can hold a secret. In this talk, I'm going to focus on linear subspace languages. This class is the basis of hardness assumptions like DDH and DLIN, and has proved extremely useful in crypto constructions. This is how we express DDH and DLIN as linear subspaces: we will use additive notation and express the discrete logs as linear group actions on group elements.
Using this syntax we can write down DDH and DLIN tuples very naturally as a witness vector times a constant matrix, so we can view the language as being parameterized by a constant language matrix. This is how these languages show up in many of our constructions. What does it mean to be hard? While a standard group allows additions and exponentiations, a bilinear group also allows one multiplication. In such groups we can state various linear subspace assumptions. DDH is the simplest one: it assumes that sampling from a one-dimensional subspace is indistinguishable from sampling the full space. The decisional linear assumption assumes the same for two- versus three-dimensional spaces. Generalizing this sequence of assumptions, the k-Lin assumption asks to distinguish between k-dimensional samples and samples from a (k+1)-dimensional space. Groth and Sahai came up with a breakthrough construction at Eurocrypt 2008; in particular, their NIZK for linear subspaces was the first efficient construction based on DDH and DLIN. Structurally, it consisted of two parts: a commitment to the witness, and a proof part showing how the committed witness actually corresponds to the object. The number of elements in the proof is linear in the number of witnesses and the number of elements in the object. The question remained how to build even shorter NIZKs. The CRS itself seemed to provide some scope: the Groth-Sahai CRS works for an entire class of languages, so maybe there is a way to increase proof efficiency at the cost of having a tailored CRS for each language. This is what motivates quasi-adaptive NIZKs, where we let the CRS depend on the language itself. In particular, we didn't require the discrete logs of the language constants to generate this CRS, but we did require the language constants to be generated from witness-sampleable distributions. This still turns out to be sufficient for many applications. The construction achieved perfect zero knowledge, which was universal in the sense that the simulator was independent of the language. However, soundness is computational. So here's how the construction differed from Groth-Sahai: at a very high level, the language constants are embedded into the CRS in such a way that the object functions as its own commitment, so we end up not needing any separate commitment. Our particular construction also needed fewer elements in the proof. On the flip side, the CRS blows up quadratically instead of being constant. Let's get into the detailed construction, which is presented on this slide. Let the language be parameterized by a generator matrix, with the witness changing over time. We sample matrices D and B with appropriate dimensions. Then we construct the public CRS in two parts. CRS_p is meant to be used by the prover, and it is constructed by multiplying the language matrix with D and B-inverse. CRS_v is the part that is meant to be used by the verifier, and it is constructed using D and B embedded in G2. Now let's say you're asked to compute a proof for a candidate X with witness W. The proof is computed simply as the product of the witness with CRS_p. The verification of the proof simply takes the pairing of the candidate and the proof with the CRS_v matrices and checks that it is equal to zero. If you look carefully, CRS_v essentially embeds in G2 the kernel of the matrix formed by the language matrix and CRS_p, so to speak. This is what is responsible for the correctness.
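To make the shape of this construction easier to follow, here is a schematic rendering in symbols. This is a simplified sketch reconstructed from the description above, not the exact scheme from the talk: the matrix names M and K, the dimension k, and the way the CRS is split are illustrative assumptions, and only the completeness check is shown; the kernel components of CRS_v that drive soundness are only noted in the comments.

% Schematic QA-NIZK for a linear subspace language (illustrative sketch only).
% The language: group encodings of M*w for a fixed parameter matrix M.
\[
  \mathcal{L}_{\mathbf{M}} = \bigl\{\, [\mathbf{M}\mathbf{w}]_1 \in \mathbb{G}_1^{\,n} : \mathbf{w} \in \mathbb{Z}_q^{\,t} \,\bigr\},
  \qquad \mathbf{M} \in \mathbb{Z}_q^{\,n \times t}.
\]
% CRS, split into a prover part and a verifier part, using a trapdoor matrix K.
\[
  \mathrm{CRS}_p = [\mathbf{M}^{\top}\mathbf{K}]_1 \in \mathbb{G}_1^{\,t \times k},
  \qquad
  \mathrm{CRS}_v = [\mathbf{K}]_2 \in \mathbb{G}_2^{\,n \times k}.
\]
% Proof: a row vector of k group elements, computed from the witness alone.
\[
  \pi = \mathbf{w}^{\top} \cdot \mathrm{CRS}_p = [\mathbf{w}^{\top}\mathbf{M}^{\top}\mathbf{K}]_1 .
\]
% Verification: pair the candidate against CRS_v and the proof against [1]_2.
\[
  e\bigl([\mathbf{x}^{\top}]_1, [\mathbf{K}]_2\bigr) \stackrel{?}{=} e\bigl(\pi, [1]_2\bigr).
\]
% Completeness: if x = M w, then x^T K = w^T M^T K, so both sides agree.
% Soundness needs more structure: CRS_v must also carry randomized components
% tied to the kernel of the language matrix, which is the part the talk alludes to.

Note that in this sketch a simulator holding the trapdoor K can produce the same proof directly from a candidate x as [x^T K]_1, without any witness, which matches the intuition behind the perfect zero-knowledge claim. For a single Diffie-Hellman style language like the talk's example (n = 2, t = 1, k = 1), the proof collapses to a single group element.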
The zero knowledge property is also straightforward, given the trapdoor matrices D and B. Now, while completeness and zero knowledge are relatively simple to prove, proving soundness is trickier. The central observation is that, given CRS_p, there is still enough entropy in D and B to randomize CRS_v; in particular, we can expand CRS_v to have an additional component with a random sample from the kernel of the language. This transformation is purely statistical. Next, we essentially embed a DDH or k-Lin challenge in the kernel part in this transformed setting, and show that an alleged proof on a bad candidate can be used to distinguish whether a subspace sample or a full-space sample was used as the challenge. The need to have the kernel of the language in this step is the technical reason why we need the language to come from a witness-sampleable distribution. Let's give a simple illustration of the system on a standard Diffie-Hellman language in G1, with the hardness assumption being DDH. The language is defined by G1 elements, say d, e and f, with tuples of the form d to the w and f to the w for a witness w. The CRS is generated as follows: sample D and B at random, compute CRS_p from the language elements raised to D and B-inverse, and CRS_v as G2 elements encoding D and B. The proof of a tuple is computed using w as CRS_p raised to the power w; note that this is just a single element in the group. The verification is done by pairing the tuple and the proof with the CRS_v elements and then checking an equality. The simulator can easily compute the proof using the trapdoors D and B without knowing the witness. Subsequently, we were able to reduce the proof size to a constant, independent of the number of witnesses and the object dimensions. Finally, at Crypto 2014 we optimized the proof to one group element under DDH. In both of these works, the CRS was reduced to linear size, and the number of pairings needed for verification was also reduced to linear. This is the Crypto 2014 construction in action: the construction skeleton remains more or less the same as the earlier one, but the core observation was that many of the CRS elements could be randomly combined while still maintaining soundness; these extra random elements are depicted in red on this slide. This random combination of the CRS elements resulted in a reduction of both the CRS size as well as the number of pairings required for verification. At Eurocrypt 2015, Kiltz and Wee came up with a beautiful interpretation of QA-NIZKs based on the concept of smooth projective hash functions. This slide is oversimplified but illustrative: the system has four interlocking puzzle pieces, the language matrix, the candidate object, a key, and a key hider, where the hidden version of the key is given publicly in the CRS. Now, when we have a good object, the pieces fit together nicely and the proof is verifiable. However, when we have a bad object, the pieces no longer fit and it becomes infeasible to come up with a convincing proof. Zero knowledge is demonstrated by giving the key to the simulator and observing that the key is independent of the language matrix. Through the years, we have extended and enhanced QA-NIZK systems, especially with our collaborators, including Masayuki Abe.
Based on your visits, we were able to construct very efficient, identity based encryption structure, resulting signatures >>public verifiable CCS, secure encryption, nine signatures, group signatures, authorities, key extremes and so on. >>It has also been gratifying to see the community make leaps and bounces ideas and also use queuing visits in practical limits. Before finishing off, I wanted to talk to you a little bit about >>some exciting activities going on Hyper ledger, which is relevant for photographers. Hyper >>Leisure is an open source community for enterprise. Great. It's hosted by the minute formation on enjoys participation from numerous industry groups. Uh, so difficult funded to efforts in Africa, we have versa, which is poised to be the crypto home for all. Blocking it and practice a platform for prospecting transactions are part of the legs on the slide here, >>we would love participation from entity inference. So >>that was a brief history of your analytics. Thanks for giving me the opportunity. And thanks for listening

Published Date : Sep 21 2020

SUMMARY :

an individual is the ability to hold a secret with maybe, it says, the public footprint, often asset, belonging clan and the language without The is it's none of the crew layer and the verifier can hold a secret. The scaling the resumption asks to distinguish between Is the construction in Europe 2008 construction based on idiots and gear. in the proof is linear in the number the discrete logs of the language constants to generate this, Yes, By the way. Sierra's V is the part that is meant to be used by the very fair, owned by the language metrics here and so to speak. The central observation is that, given CSP, there is still enough entropy. to distinguish whether a subspace sample was used for a full space The need That's the technical reason why we need the language to come from a witness. of the system on a standard Diffie Hellman, which g one with the hardness So the language is defined by G one elements small D, E and F, B from random on Compute Sierra speak as due to the day after the and the proof with the Sierras VMS and then checking in quality. similar can easily compute the proof using trapdoors demand without In both the works, the theorists was reduced to linear This is the crypto Ford in construction in action, the construction skeleton in this side. The goodness of the language metrics okay the hidden version of the key is given publicly in the Sears. giving the key to the simulator on observing that the key is independent enhanced not mind to be six system, especially with our collaborators, N. Based on your visits, we were able to construct very efficient, authorities, key extremes and so on. It has also been gratifying to see the community make leaps and bounces ideas and some exciting activities going on Hyper ledger, which is relevant for photographers. on the slide here, we would love participation from entity inference. Thanks for giving me the opportunity.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
BrucePERSON

0.99+

2015DATE

0.99+

AfricaLOCATION

0.99+

CIAORGANIZATION

0.99+

SiriTITLE

0.99+

EuropeLOCATION

0.99+

Masayuki Abby Koko Jr.PERSON

0.99+

each yearQUANTITY

0.99+

bothQUANTITY

0.99+

firstQUANTITY

0.99+

GiovanniPERSON

0.99+

2008DATE

0.99+

kernelTITLE

0.99+

late eightiesDATE

0.99+

six systemQUANTITY

0.99+

two partsQUANTITY

0.98+

Goldwasser MiccoliPERSON

0.98+

AmericaLOCATION

0.98+

one worldQUANTITY

0.98+

PeterPERSON

0.98+

JupiterLOCATION

0.98+

single elementQUANTITY

0.97+

Diffie HellmanPERSON

0.97+

One wayQUANTITY

0.96+

nine signaturesQUANTITY

0.96+

todayDATE

0.95+

sixQUANTITY

0.93+

SierraTITLE

0.93+

SierraPERSON

0.93+

Rosa RussoPERSON

0.92+

PeggyPERSON

0.92+

SierrasLOCATION

0.91+

oneQUANTITY

0.9+

SearsORGANIZATION

0.89+

zeroQUANTITY

0.88+

one nationalQUANTITY

0.87+

Cryptocurrency 14ORGANIZATION

0.86+

single messageQUANTITY

0.84+

one groupQUANTITY

0.82+

CooperPERSON

0.81+

U.LOCATION

0.78+

day twoQUANTITY

0.76+

Zero knowledgeQUANTITY

0.72+

DeitchPERSON

0.71+

FordORGANIZATION

0.68+

AnalyticsORGANIZATION

0.67+

IndiaLOCATION

0.65+

blowPERSON

0.64+

TillmanPERSON

0.59+

TaylorORGANIZATION

0.55+

one elementsQUANTITY

0.52+

Hyper ledgerTITLE

0.5+

SierraORGANIZATION

0.5+

SierrasTITLE

0.39+

SierrasCOMMERCIAL_ITEM

0.36+

Manosiz Bhattacharyya, Nutanix | Global .NEXT Digital Experience 2020


 

>>From around the globe, it's theCUBE, with coverage of the Global .NEXT Digital Experience, brought to you by Nutanix. I'm Stu Miniman, and this is theCUBE's coverage of the Nutanix .NEXT conference. This year it is the Global .NEXT Digital Experience, pulling together the events that they had dispersed across the globe and bringing them to you online, and I'm happy to welcome to the program a first-time guest but a long-time Nutanix engineering person, Manosiz Bhattacharyya. He's the senior vice president of engineering at Nutanix; Mano is what everyone calls him. Thanks so much for joining us. Thank you. All right. So, you know, we've been doing theCUBE for over 10 years now. I remember the early days of talking to Dheeraj and the team when we first brought him on theCUBE. It was about taking some of the things that the hyperscalers did and bringing that to the enterprise. Actually, one of the interesting components there, if you dial back a bit: flash was new to the enterprise, and we looked at one of the suppliers that was supplying to some of the very largest companies in the world, and also to some of the companies in the enterprise, like Fusion-io. It was a new flash package, and that was something that in the early days Nutanix used before it kind of went to more, I guess, commodity flash. And, you know, the lead developers, the engineers that I talked to, came from Facebook and Oracle and others, because understanding the database and that underlying substrate was what it took to be able to create the hyperconverged infrastructure that people know is there. So maybe we could start, just give the audience a little bit: you've been with Nutanix a long time, your background and what it is that you and your team work on inside the company. >>Yeah, thank you. So I come from distributed systems, for a long time. I worked at Oracle for seven years, building parts of the Exadata system, some of the convergence that databases have done with compute and storage. You could see the same hyperconvergence in other platforms where compute and storage were brought together. I think the Nutanix story was all about: can we get this hyperconvergence to work for all types of applications? That was the vision of the company: whatever platform these hyperscalers and these big database companies had built, can this be provided for everybody, for all types of applications? I think that was the main goal. And I think we're inching our way, slowly but surely; I think we will get there, and pretty much every application will run on Nutanix one day. >>Alright. Well, if you look at kind of the underlying code that enables your capability, one of the challenges always out there is, you know, I build a code base with the technology and the skill sets I have, but things change. I was talking about flash adoption before; a lot of changes have happened in the storage world, compute has gone through a lot of architectural changes, software and location with clouds and the like. So let's just talk about that code base. You talk about building distributed systems: how does Nutanix make sure that that underlying code doesn't, you know, that the window doesn't close on how long it's going to be able to take advantage of new features and functionality? >>Yeah, I think for Nutanix, from the beginning,
one thing that we have made sure of is that we could always deliver continuous innovation through the choices that we make. For example, we actually separated the concerns between storage and compute: we always had a controller VM running the storage, and we made sure we could run all of the storage in user space. And over time, what has happened is, every time we upgraded our software, people got faster performance, they got more security, they got more scalability. And that, I think, is the key sauce. It's all software. It's all software-defined infrastructure on commodity hardware, and the commodity hardware can be anywhere; I mean, you could pretty much build it on any brand. And now that we see the hyperscalers coming on with bare metal as a service, we see hyperconvergence as the platform, the infrastructure on which enterprises are willing to run their applications in the public cloud. I mean, look at Nutanix Clusters: it is getting a lot of traction, and even before launch we have seen a lot of customer excitement there. And that is what I think is the true nature of Nutanix: being a pure software play and treating every hardware platform uniformly. Whether this is available in the public cloud or it's available in your own data center, be it the storage or the hypervisor or the entire infrastructure software that we have, that doesn't change. So I think in some ways we're talking about a new HCI; I call it the hybrid cloud infrastructure. The hyperconverged infrastructure becomes the substrate for the new hybrid cloud infrastructure. >>Yeah, definitely. It was a misconception for a number of years: people looked at the Nutanix solution and they thought appliance, so what if I get a new generation of hardware, or if I need to choose a different hardware vendor? Nutanix is a software company, as you describe. You've got some news announced here at the .NEXT show when it comes to some of those underlying storage pieces; bring us through it. You know, we always go around to the events, and companies like Intel and NVIDIA are always standing up with the next generation. I teased it up a little bit when we talked about flash. What's happening with NVMe, storage class memories? What is it that's new for the Nutanix platform? >>Yeah, let me start a little bit on what we have done over the last year or so, before the important details of why we did it, and what are the advantages that customers might tap. So one thing that was happening, particularly for the last decade or so, is flash was moving on to faster and faster devices. I mean, 3D XPoint came in, storage class memory was coming in, so one thing that was very apparent was, you know, this is something that we need to get ready for now. At this point, what has happened is that the price point at which these high-end devices can be obtained has come to where mass consumption can happen. I mean, anybody can actually get a bunch of these Optane drives at a pretty good price point, put them in their servers, and expect the performance. I think the important thing is that we built some of the architectural pieces that enable us to leverage the performance that these devices give. And for that, let's start at the beginning. One of the things that we did was make sure that we have things like fine-grained metadata, so that you could get things like data locality.
So the data that the compute would need would stay in the server; that was a very important part, one of the key tenets of our platform. And now, as these devices come on, we want to actually access them without going over the network. You know, in the last year we released a construct called the Autonomous Extent Store, which is not only making data local but metadata as well, giving us the ability to have true hyperconvergence where we can actually get data and metadata from the same server. It benefits all of these newer classes of storage devices, because the faster the device, the closer you want it to be to the compute, because the cost of getting to the device adds up in the latency the application sees for its storage. At this .NEXT, what we're announcing is two technologies. One is Blockstore, which is our own user-space file system. It's a completely user-space file system; we're replacing the file system we used before for all our drives, which will then be NVMe and beyond. And we're also announcing SPDK support, which is basically a way of accessing these devices from user space. So now, with both of these combined, we can actually serve an IO from start to finish all in user space, without crossing the kernel and without doing a bunch of memory copies. And that gives us the performance that we need to really get the value out of these high-end devices, and that performance is what our high-end applications are looking for. And that is, I think, the true value that we can add for our customers. >>Yes. Oh man, if I understand that right, it's really that deconstruction, if you will, of how storage interacts with the application. It used to be the SCSI stack, when I used to think about the interface and how far I had to go. And you mentioned that performance and latency are so important here. So I was moving from what traditionally was disk, either external or internal, moving up to flash, moving up to things like NVMe; I really need to re-architect things internally, and therefore this is how you're solving it, creating higher IO. Maybe you could bring us inside: high-performance IO and low latency, SAP HANA was one of the early use cases that everyone talked about that we had to re-architect for. What does this mean for those solutions? Any other key applications that this is especially useful for? >>Yeah, I think all the high-end demanding applications: talk about SAP HANA, or the healthcare applications, look at Epic, Meditech, look at the high-end databases. Because we already run a bunch of databases, but the highest-end databases still are not running on HCI. I think this technology will enable the most demanding Oracle or SQL Server databases, and of course all the analytics applications; they will now be running on HCI. So the dream that we had, that every application, whatever it is, can run on the HCI platform, can become a reality. And that is what we're really looking forward to. So our customers don't have to go to three tiers for anything. If it is an application that you want to run, HCI is the best platform for your application, whatever the workload you want. >>Alright, so let me make sure I understand this, because while this is a software update, this is leveraging underlying new hardware components that are there.
I'm not taking a three-year-old server to do this. Can you help us understand, you know, what do they need to buy to be able to enable this type of solution? >>So I think the best thing is that we already came out with the all-NVMe platform, and everything beyond that is a software change. Everything that we announced is just available with an upgrade. So of course you need a base platform which actually has the high-end devices themselves, which we have had for a year or so. But the good thing about Nutanix is, once you upgrade, it's like a Tesla: once you get that software upgrade, you get that boosted performance. So you don't need to go and buy new hardware again. As long as you have the required devices, you get the performance just by upgrading to the new version of the software. I think that is one of the things that we have done forever. I mean, every time we have upgraded, you will see, over the years our performance has increased, and very seldom has a customer been required to change their internal hardware to get the performance. Now, another thing that we have is we support heterogeneous clusters. So on your existing cluster, let's say that you're running on flash and you want to go all-NVMe: you can add nodes which are all-NVMe and get the performance on those nodes, while the flash nodes can take the non-critical pieces, which don't require that level of performance but still give you the density, for VDI or maybe general server virtualization, while the new nodes take on the highest-end databases or the highest-end analytic applications. So you can take the same cluster and slowly expand it to take on this new class of applications. >>Yeah, that's such an important point. We had identified very early on, when you move to HCI, hopefully that should be the last time that you need to do a migration. Anybody that has dealt with storage, moving from one generation to the next, or even moving frames, knows it can be so challenging. Once you're in that pool, you can upgrade code, you can add new nodes, you can balance things out. So it's such an important point there. You stated earlier that the underlying AOS is now built very much for that hybrid cloud world. You talk about things like Clusters; you now have the announcement with AWS, now that they have their bare metal service. So do we feel we're getting a balancing out of what's available for customers, whether it's in their own data center, in a hosted environment, or the public cloud, to take capabilities like you were talking about with the new storage class? >>Yeah, I think most of these public clouds are already providing you hardware with NVMe built in, and I'm sure in the future they will have storage class memory built in too. So all the enterprise applications that were running on-prem with the latency guarantees, with the performance and throughput guarantees, can be available in the public cloud as well. And I think that is a very critical thing, because today, when you lift and shift, one of the biggest problems that customers face is that when you're in the cloud, you find that enterprise applications are not built for it, so they have to either re-architect them or rebuild them using new cloud-native constructs. And in this model, you can use the bare metal service and run the enterprise applications in exactly the same way as you would run them in your private data center.
And that is key, because now, with our data mobility framework, where we can actually take both storage and applications and move them across the public and the private cloud, we have the ability to actually control an application end to end. A customer can choose where they want to run it; they don't have to think, oh, I have to move to that, it has to be re-architected. You can choose the cloud and run it in the bare metal service exactly as you were running it in your private data center, utilizing things like Nutanix Clusters. >>Great. Well, Mano, the last question I have for you: we really dug down into some of the architectural underpinnings, some of the pieces inside the box. Bring it back up to a high level, if you would. From a customer standpoint, what are the key things they should understand that Nutanix is giving them with all of these new capabilities? You mentioned the Blockstore and SPDK. >>Yeah, I think for the customer the biggest advantage is that the platform that they chose for, say, general server virtualization can now be used for the most demanding workloads. They're free to use Nutanix for SAP HANA, for high-end Oracle databases, for big data and analytics; they can actually use it for all the healthcare apps that I mentioned, Epic and Meditech, and at the same time keep the investment in hardware that they already have. So I think the Tesla analogy that we always use is so apt with Nutanix: with the same hardware investment that they have made, with this new architecture they can actually start leveraging it and utilizing it for more and more demanding workloads. I think that is the key advantage. Without changing your appliances or your SAN or your servers, you get the benefit of running the most demanding applications. >>Well, congratulations to you and the team. Thanks so much for sharing all the updates here. Alright, and stay tuned for more coverage from the Nutanix Global .NEXT Digital Experience. I'm Stu Miniman, and as always, thank you for watching theCUBE.
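As an editorial aside to Mano's description of the Blockstore and SPDK work above, the sketch below is a rough conceptual model, in Python, of why moving IO submission and completion polling into user space takes kernel crossings out of the hot path. None of the names here are real Nutanix or SPDK APIs; the queue pair, the backing buffer, and the request format are purely illustrative assumptions.

# Illustrative model only: contrasts a syscall-per-IO path with a polled,
# user-space queue pair. Names are hypothetical, not Nutanix or SPDK APIs.
import os
from collections import deque

def kernel_path_read(path: str, offset: int, length: int) -> bytes:
    """Classic path: every read crosses into the kernel and copies data."""
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.pread(fd, length, offset)   # one syscall and one copy per IO
    finally:
        os.close(fd)

class UserSpaceQueuePair:
    """Toy stand-in for a device queue pair mapped into user space."""
    def __init__(self, backing: bytes):
        self._backing = backing
        self._submission = deque()
        self._completion = deque()

    def submit_read(self, offset: int, length: int, tag: int) -> None:
        # No syscall: the request is simply placed on a shared ring.
        self._submission.append((offset, length, tag))

    def poll(self) -> list:
        # The "device" consumes submissions and posts completions; the
        # application discovers them by polling, not by interrupts.
        while self._submission:
            off, ln, tag = self._submission.popleft()
            self._completion.append((tag, self._backing[off:off + ln]))
        done = list(self._completion)
        self._completion.clear()
        return done

# Usage sketch: batch several reads, then poll once for all completions.
qp = UserSpaceQueuePair(backing=bytes(range(256)) * 16)
for i in range(4):
    qp.submit_read(offset=i * 64, length=64, tag=i)
print([tag for tag, _ in qp.poll()])   # -> [0, 1, 2, 3]

The only point of the model is that the queue-pair path has no system calls in its hot loop; in a real implementation the rings are shared with the NVMe device and the polling loop is typically pinned to a dedicated core.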

Published Date : Sep 9 2020

SUMMARY :

It's the queue And bringing that to the enterprise was actually you know, one of the interesting components there dial I think the Nutanix story was all about Can we get this hyper convergence one of the challenges always out there is, you know, I build a code base with the technology and One thing that we have made sure is that you know, you know, companies like Intel and NVIDIA always standing up with next generation. At this point, what has happened is that the price point that you know, these high end devices So I was removing from, you know, what traditionally was disc either externally I. I think this technology will enable you know the most demanding oracle or Sequels. Can you help understand? I mean, every time we have upgraded, you will see. You talk about things like clusters that you have now have the announcement with AWS that were running on prim with the latency guarantees, you know, Bring it back up high level, if you would, from a customer standpoint, key things that they should be understanding They're free to use, you know, Well, congratulations to you and the team.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ChrisPERSON

0.99+

NVIDIAORGANIZATION

0.99+

NutanixORGANIZATION

0.99+

AWSORGANIZATION

0.99+

DheerajPERSON

0.99+

TeslaORGANIZATION

0.99+

three yearQUANTITY

0.99+

Nanosys BhattacharyaPERSON

0.99+

seven yearsQUANTITY

0.99+

FacebookORGANIZATION

0.99+

IntelORGANIZATION

0.99+

three yearQUANTITY

0.99+

bothQUANTITY

0.99+

OracleORGANIZATION

0.99+

Manosiz BhattacharyyaPERSON

0.99+

two technologiesQUANTITY

0.99+

OneQUANTITY

0.98+

last yearDATE

0.98+

oneQUANTITY

0.98+

todayDATE

0.98+

over 10 yearsQUANTITY

0.98+

First timeQUANTITY

0.98+

2020DATE

0.96+

firstQUANTITY

0.96+

one thingQUANTITY

0.96+

one generationQUANTITY

0.96+

a yearQUANTITY

0.95+

This yearDATE

0.93+

last decadeDATE

0.93+

CubeCOMMERCIAL_ITEM

0.9+

100QUANTITY

0.9+

CTITLE

0.9+

eightQUANTITY

0.89+

stew MinutemanPERSON

0.85+

One thingQUANTITY

0.85+

KOTHER

0.82+

GlobalEVENT

0.75+

yearDATE

0.7+

MonoPERSON

0.66+

dotEVENT

0.66+

LaytonORGANIZATION

0.65+

vmc NutanixORGANIZATION

0.63+

88OTHER

0.6+

lastDATE

0.6+

thingsQUANTITY

0.59+

NutanixTITLE

0.57+

smpTITLE

0.56+

suppliersQUANTITY

0.53+

ConstructORGANIZATION

0.52+

Autonomous ExtentTITLE

0.52+

.NEXT Digital ExperienceEVENT

0.51+

IotTITLE

0.51+

HanaORGANIZATION

0.48+

CubeORGANIZATION

0.47+

HanaTITLE

0.46+

kernelTITLE

0.44+

SPKTITLE

0.39+

Next Level Network Experience Closer V1


 

>> Narrator: From around the globe, It's the CUBE with digital coverage of next level network experience event. (upbeat music) Brought to you by Infoblox. >> Everyone welcome back to the CUBE's coverage and co-hosting of the Infoblox next level networking experience virtual event. With a pop up event, only a few hours, but four great segments. Officer Stu Miniman helped me kick it off this morning, and Stu, I want to bring you in, Stu Miniman who's the... He host for the CUBE, covering networking with me Stu we do all the cloud native shows. We can, we can smell what's relevant, and I want to get your take on this, because, Infoblox putting out some pretty good content with some great guests. But, next level networking, let's just unpack that, next level networking and next level networking experience. The word experience changes the context of that definition, because going the next level with networking is one thing, having an experience is another, just what's your take, you seen, we talk about this all the time, what's your take? >> Yeah, so John, one of the words that we've talked about so much is, how do we simplify this environment? Networking is known for its complexity. Too often, it's, stuck down in protocols and just the arcane arts that I don't want to think about. Networking at its best, is just going to work. And I don't want to think about it, so, if I'm adopting SaaS models, if I'm going cloud native, it should, tie into everything else we're doing. What I was hearing, the themes, John, and the interviews you discussed, they're talking about SaaS, they're talking about cloud native, things like visibility, moving real time, really changes so much of these environments, so, IP addresses used to be a lot more static. We know now, things just change constantly and that's one of the big challenges. How do I monitor that environment? How do I keep them secure? And that's where modern environments need to go to the next level to be able to keep up with all of those changes. >> The word experience means something to me in a sense, I think contemporary, right? I think something new, relevant and cool, and still we're old enough to remember the '80s and '90s, and I was coming out of college late '80s, and I remember I never had a punch, I never did any program with a punch card. I was kind of the young gun, coming into the workforce with a technical degree, and I remember looking at the mainframe guys going, "who are those old relics?" And they, those guys hung onto their job as long as they could, and the smart ones moved and said, "Hey, I'm going to jump on this mini computer bandwagon, Oh, there's inter networking and local area networking that the PC toys are attaching to, that's interesting." And so you had a migration of systems talent move to the new, the new way. Some didn't, and I look at that and I say, hmm, that's similar to what's going on in networking, if you're the old networking guy or gal, and you're hugging onto the router, or you're hugging onto that old way, you could be extinct, because there is a new experience coming. It's programmable, it's automation, it's different. It's not, the big, old way, similar to the mainframe. So, a lot of psychology in this networking industry right now is, and the young people come in. It's like, why we do it that way? This to me is about next level networking, experience. Your reaction to that. 
>> Yeah, well, John, it's been interesting here in 2020, you talk about the acceleration of things moving, people that were dipping their toe in cloud and have to move in a matter of weeks, if not, hours and days to get things up and running. So, leveraging software, open source is a big component of what a lot of companies are doing, and of course, cloud and that cloud experience means in the public cloud and edge environments, you talked a bit about IOT in some of these cases, the order of magnitude of networking challenges that are out there are such that I have to have automation, it needs to be simpler because I could not do things the manual old way. John, I lived through so many generations, you work with people in the networking, it's manually done. It was done via CLI, because I knew how to do it. Maybe I did some scripting, but in today's day and era, things change too fast and the amount of work that needs to be done is so much so that that's why automation needs to be front and center. And you see Infoblox, as some of their new solutions, especially leveraging SnapRoute take advantage of the modern way that people need to do things. >> Well, we actually did a deep dive on SnapRoute and it was super impressive, again, I thought it was way too early, but they were doing some stuff with Kubernetes thinking, just thinking like Linux kernel, low level thinking. And I think Stu, this is what I want to get your thoughts on, because in the industry we cover Cisco aggressively. We saw them by open DNS, manage services versus low level, we got automation, you got Amazon out there, I mean, hell I can just have a screen that goes in and manages my DNS in the cloud, I can start thinking differently about how I wire my services together, if I think about Amazon, for instance, or hybrid and multicloud, this a whole new level of thinking. And, these are going to be new solutions, and this is the theme that came up and it's come up across every single major vendor, whether we're talking the Google cause they have a pretty damn good network. You got Cisco, you've got, all these people out there, they got to reinvent themselves. And, new expectations require new solutions. This has been something that's clearly coming out of the COVID, that, you know what I like working from home, I'm more productive. We don't need the real estate costs, wait, why do we even need a VPN? Why we over-provisioned? What are we paying for? Let's just build and secure. So again, all these projects are going to come out of the woodwork, I think that they're going to create a new vendor, a new brand or new opportunity because, these new solutions need to come because of the demand has been highlighted by COVID and other cloud scale. What's your thoughts on that, because this may not be your grandfather's networking company that comes out of the woodwork, It might be a cloud app. >> Yeah, well John, first of all, I think you nailed it. You look at a company like Infoblox, founded back in the .com era, back in 1999 and dominant in their space. So, they're not here saying, oh, we're the tried and trusted company that you work with, and you shouldn't try that new Fangled, Kubernetes piece or anything like that. It's not ready for prime time. As you said, they're getting, they're looking to skate where, to where the pack is going, they're aggressively going after these environments to make sure that they maintain their leadership in this environment. 
And, you're absolutely right, for the longest time, generally in networking, you were talking about, it was Cisco and everybody else out there, but now the cloud is such a big piece of what's going on, we've seen chip acquisitions by the big Hyperscalers, we've seen how they build their environments, and in many ways there's been consolidation, but there's also been dis-aggregation. So, the fundamental layer, but like what Infoblox has with their DDI stack, is something that customers need, I need to make sure my identity and my IP is something that I can manage wherever I am in all of these environment. >> It's funny Stu, we joke about SD-WAN, and now that's the internet and you think about the internet, one constant in all of it is you got to move packets from point a to point B and store a packet in a storage device, and ultimately you need to have to resolve addresses. And DNS, as old as it is, is fundamentally the standard, and a lot of people take it for granted, so to me, DNS has survived. It's a low level building block, but as things evolve, new abstraction layers come up, and I think we'll see more. I mean, I think there'll be a new naming system on how to deal with different scale across multicloud. And I think, Amazon is talking about it. We hear Ava Trix talking about it, we hear, things going on within Google talking about it, so, I think you're going to start to see new levels of innovation because, that's where the packets are moving, that's what the bad guys are, and you can't cover your footprints if you're trying to get in there. So, huge change is coming will be on it, And the CUBE we'll be monitoring it, as always, we can see the waves coming, Stu, what do you see? What's your future ball, tell you, as we come out of COVID, networking world, cloud collision, multicloud, apps, microservices, all this massive wave, what's your take, What's going to happen? >> Well yeah John, we've talked so much, It's those builders out there, how do I make sure that I can build my application, allow my users to access things wherever they are. The shift we hear for post COVID, it goes from work from home to work from anywhere. So, we were not going to see everybody just go back to the pre COVID era, this will have a lasting impact, and especially from a networking standpoint, we were starting to look at how does 5G and IOT change the way we think of networking? This just accelerates what we Needed to look at. Some networking technologies, take a long time to go through their maturation and standards, but being able to manage my entire environment, be able to spin up my new applications, and as you said John, DNS, like identity is something that is a fundamental piece that I need to make sure is rock solid so that I can get my employees access to the information while still keep things secure. >> Well, when you click on a link, that's malware, that's DNS, so this is where the action is, and people got to preserve it. Stu, We're going to be covering it, we're going to be watching all the waves, and again, this the CUBE on top of the big wave of networking and as networking evolves, I just still, I just still think, it's one big IOT world now, and it's an internet of things. They're all connected, there's no perimeter, it's borderless. This is going to change the game. I think in the next 18 months, we're going to see really different connected experiences and whoever can deliver them, will be the winner. Of course, we'll be watching it, go to siliconangle.com. 
We have a special report on next gen networking, Rob hope from Paul Gillin are constantly reporting, Stu has been getting a ton of great interviews, and again, we're getting the stories out, during COVID-19, with our remote interviews. Thanks for watching the CUBE, for the special next level networking experience event by Infoblox. (upbeat music)

Published Date : Jul 23 2020

SUMMARY :

Brought to you by Infoblox. and co-hosting of the Infoblox and the interviews you discussed, and said, "Hey, I'm going to jump on and have to move in a matter of weeks, because in the industry we I need to make sure my identity and my IP and now that's the internet and standards, but being able to manage and people got to preserve it.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JohnPERSON

0.99+

AmazonORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

Paul GillinPERSON

0.99+

InfobloxORGANIZATION

0.99+

2020DATE

0.99+

GoogleORGANIZATION

0.99+

1999DATE

0.99+

Stu MinimanPERSON

0.99+

StuPERSON

0.99+

oneQUANTITY

0.99+

CUBEORGANIZATION

0.98+

siliconangle.comOTHER

0.98+

SnapRouteTITLE

0.97+

RobPERSON

0.97+

late '80sDATE

0.97+

todayDATE

0.96+

COVID-19OTHER

0.95+

Linux kernelTITLE

0.94+

bigEVENT

0.91+

Ava TrixPERSON

0.89+

next 18 monthsDATE

0.88+

COVIDEVENT

0.87+

this morningDATE

0.85+

one thingQUANTITY

0.85+

'80sDATE

0.75+

FangledORGANIZATION

0.72+

four great segmentsQUANTITY

0.72+

CLITITLE

0.71+

OfficerPERSON

0.7+

'90sDATE

0.68+

COVIDTITLE

0.58+

hoursQUANTITY

0.52+

KubernetesORGANIZATION

0.51+

singleQUANTITY

0.49+

waveEVENT

0.44+

CUBETITLE

0.44+

Wim Coekaerts, Oracle | CUBE Conversation, May 2020


 

>> From theCUBE studios in Palo Alto and Boston, connecting with thought-leaders all around the world, this is a Cube Conversation. >> Hi everybody, this is Dave Vellante. Welcome to this Cube Conversation. We're really excited to have Wim Coekaerts in, he is the senior vice-president of software development at Oracle. Wim, it's great to have you on, and, you know I often say I wish we were face-to-face but if we were you'd have to cut off my tie, cause developers and ties just don't go together. >> No, I know, and this is my normal outfit, so this is me wherever I go. Hi again, good to see you. >> Yeah, great to see you. So, of course, you know a lot of people are confused about Oracle, and open-source, they say "Oracle? Open-source? What is that all about?" But I think you're misunderstood. People don't, first of all, realize you as the leader of the software-development community inside of Oracle, I mean, you've been involved in Linux since the early 90s. But you guys have a lot of committers, you do a lot. I want to talk about that. What is up with Oracle, and open-source? >> Ah, well, it's a broad question. So, you know, a couple of things. One is, we have many different areas within the company that are dealing with open-source. So we have the cloud team doing a lot of stuff around cloud SDKs and support for different languages like Python and Go, and of course Java and so forth, so they do a lot around ensuring that the Oracle ecosystem is integrated in the open-source tools that customers use, or developers use, Terraform companies and so forth. And then you have the Java team, and so forth. Java is open-source and then the Graal project, GraalVM which is a polyglot compiler that can run Java, and Python, and Javascript and so forth together in one. VM do really cool optimizations, that's an open-source project, also on GitHub. There's of course MySQL, which is along with Java, they're probably the two most popular and widely used open-source projects out there. There's VirtualBox which is of course also a very popular project that's open-source. There's all the work we do around Linux. And I think one of the things is that, when you have so many different areas, doing things that are for that area, then as a developer or as a customer, you typically just deal with that group. And what you see is, oh you're talking to the Java developers, so you know what's going on around Java. The Java developers might not necessarily say, "Oh well we also do MySQL, and we do Linux and VirtualBox and so forth," and so you get a rather myopic, narrow view of the larger company. When you add all these things up, and there will be one big slide that says "This is Oracle, these are all these open source projects," and there's multiple ways. One is, we have projects that we've open-sourced and all the code came from us and we made it publicly available, we're the main contributor and we get contributions back. There are other projects where we contribute to third-party in terms of enhancing things, like I said with the Cloud Team, and then in general something like Linux where we're part of an external project and we participate in development of that project at large. And so there's these three different ways, when you count up all the developers that we have that deal with open-source on a daily basis. And in terms of contributions, in terms of bug fixes, testing, and so forth, it's thousands, literally, full-time paid developers. 
And of course, all the projects are all either on GitHub or similar sites that are very popular. So yeah, I think the misunderstood is probably a lack of knowledge of the breadth of what we do. And, you know, our primary goal is to provide services and products to customers, and so the open-source part is sort of embedded in a development methodology. But that's not something we sell or market separately, we just work with customers and products and services, and so in some cases it's not well-understood. >> Yeah. Well, we're talking of course, we're talking about the state of the penguin, I think it's important for people to understand, Oracle got into the Linux game in the 90s, maybe the latter part of the 90s and Oracle, of course, wants to make Linux-- wants to make Oracle, it's applications and database run better on Linux, but as you're pointing out, your Linux distro, full support, end-to-end, thousands of people in your open-source community, and the contributions that you make to Linux, many if not most, they go upstream, everybody can benefit from those, but of course you want an Oracle distro that is going to make Oracle stuff run better, that's always kind of been the Oracle way. >> Well, so, yes, two things though. One is, so everything we do is upstream. So we have no Linux patches that are not contributed upstream; There's no proprietary code in Oracle Linux at all, it's all completely open, publicly available: the source code, the change log, all the commits, it's fully open and public, which sometimes is not well-understood, but it's completely open. And, everything we do in terms of feature development or functionality or bug fixes goes upstream to the Linux kernel mail-list. It's actually, it's the only way to be able to manage a Linux distribution and be a Linux vendor is to live in that eco-system. Otherwise, the cost of maintaining your own fork, so to speak, is very high and it doesn't really solve the problem. Now, the functionality we work on obviously is focused on making Oracle products run better, making Oracle Cloud run better, and so forth. However, again, what's important to understand, though, is an Oracle database is a program running on an operating system. It does IO, it does networking, it deals with memory management, lots of processing. So, for the most part, the things that we work on to improve that helps everyone out, right? It helps every other database run better, or helps every other language run better. So none of these changes are specific to Oracle, they're just things that we found doing performance benchmarks and testing and so forth, where we say "Hey, if Linux did the following, it would make boot-up faster. Now boot-up has nothing to do with the database. But our customers run on 1-terabyte, 4-terabyte, 8-terabyte systems, and so booting up, and Linux starting up, and cleaning up memory takes a long time. So we want to reduce that from an availability point of view. So here, we're now talking about just enterprise for you. So there's this broad set of things we work on that definitely help us, but they're actually really completely generic and help everyone out. >> Yeah, that's great. So I wanted to kind of get that out of the way and help our audience understand that. So let's get into it a little bit; What are you seeing, what's going on in IT, pick your observation space and your vision of what you see happening out there. >> Well, you know, it's very interesting, it's sort of, there's two... 
there's sort of two worlds, right, there's the cloud world and the move to cloud, and there's the on-premises world, where people run their systems on their own. And, one of the things that we've learned is, when you talk about machine-learning, obviously, is something that's very popular these days, and automation. And so in order to rely on machine-learning well, and have algorithms that are very effective, you need lots of data. And so being a cloud vendor, and having Linux in our cloud on tens of thousands, or hundreds of thousands of servers, or more, allows us to have a view of how an operating system works across an incredibly large scale. So we get lots of data. And so for us to figure out which algorithms work well in terms of how can we do network optimizations, how can we discover anomalies on the storage site, and deal with it and so forth, we can do that at scale. And what's interesting is, how do we then bring that on-prem? Well, if we can get the data and the learning done, the training done, in our cloud directly, then when we provide that service also for people running Oracle Linux on premises then that will work. The alternative is to have point solutions where you provide something to a customer, and he needs to learn something from small amounts of data. That doesn't work so well. So I think having both worlds, on-prem and cloud directly, allows us to kind of benefit from that. And I think that's important, because lots of customers are interested in going to cloud. Many of the enterprises have not yet. You know, they're starting, but there's still a huge on-premises space that's important. And so by being able to get them familiar with how these things work at scale, autonomy is again important, right, Autonomous Database is incredibly popular and so forth, that allows us to then say, "Here, try these things out here, it's a service. We can show you the benefits right away," and then as that improves we bring that, to a certain extent, on-premises as well. And then they can have it in both places. And that, I think, is something, again, that's relatively unique but also very important, is that we want to provide services and products that act similarly on-premises as well as in cloud, because at some point when people move we want to make that transition seamless. And what you have today for the most part is one world that's on-prem, and then the cloud world is completely different. And that is a big barrier of moving, and so we want to reduce that, we can run the same operating system local as well as cloud, you can the same functionality, and then that helps transition people over much easier. >> Yeah, well Oracle actually was one of the -- I think Oracle was the first company to actually market same-same, you actually used that term. Others put forth that concept, but Oracle was the first to announce products like Cloud at Customer, that were same-same, now it took some time to actually get it perfected, and get it to market, but the point is, and we've written about this, is Oracle, because of the ascendancy of cloud, flipped and has a cloud-first mentality, and you just kind of referenced that, you just said, "And you can bring that to on-prem." So I wonder if you could talk about that cloud-first mentality, and the impact on hybrid. >> So yeah, I think the cloud-first part is of course in cloud we work on services moreso than products that we deliver. And there's a number of things that are happening. 
So one is that we obviously continue to provide products to customers: you can download Oracle Linux, you can download the database and what not, you can install it on your own, you can do the traditional way of working. Then in the cloud world, what typically happens is "Oh, I use a database service. I'm not installing anything, I push a button and I get an IP address and a SQL connect string to connect to the database." And we take care of everything underneath that database. Now, in order to do that, you need a whole infrastructure in place, you need logging agents, you need a back-end that captures all that stuff, you need monitoring tools, you need all the automation scripts for bringing the service up and monitoring it. And so, that takes a lot of time to do right, and we learn a lot by doing this. And so the cloud-first part of these services means that we get to experience this ourselves with direct access to everything. Now taking that service with all of the additional features like autonomy, and bringing that to an on-premises world, we have to make sure we can package that so that all these pieces around it go along with it. And that takes a little bit more time, so we can't do everything at the same time. And so what we've done with Autonomous Database is we created everything in Oracle Cloud, we have the whole system running really well, and then we've been able to sort of package that and shrink it into something that can be installed on-premises, but then connected into Oracle Cloud again. And so that way we can get all the telemetry and all the metrics, and that allows us to scale. Because part of providing a cloud service that runs on-prem in the customer environment is that we need to be able to remotely manage that, similar to how we manage what runs in our own cloud. Right, otherwise it doesn't scale. And so that takes a little bit of time, but we've done all that work, and now, with Autonomous Database on Cloud at Customer, that's really in place. >> Yeah, you really want to have that same cloud experience, whether on-prem, in the public cloud, hybrid, et cetera. So, I want to explore a little bit more who is using Oracle Linux, and what's the driver for using it. Can you describe maybe some of the types of customers and why they buy? >> Sure, so we started this fourteen years ago: October 25th, 2006. I remember that day very well: penguins on stage and a big launch for Oracle Linux at Moscone Center in San Francisco. So, look, the initial driver for Oracle Linux was to ensure that Oracle database customers or Oracle product customers had a good operating system experience, and the ability to be able to handle critical issues when that occurs, because typically a database runs the company's critical data: the most essential stuff that a company has is typically in a database, an Oracle database. And so when that thing has issues with the operating system, you don't want to have to talk to multiple vendors and have finger-pointing, and have to explain to an operating system vendor how the database works. In the Unix world, we had a good relationship with the OS vendors and the hardware vendors; they were the same. And they knew our products really well, and in the Linux world, that was very different. The OS vendor basically did not want to understand or learn anything about the products living on top. And so while to a certain extent that makes sense, it's an enterprise world where time is of the essence, and downtime needs to be absolutely limited. 
We can't have these arguments and such. And that was the driver, initially, for doing Oracle Linux: to ensure there was a Linux distribution really backed by us, that we could fix, that we could fully support. That was completely the original intent. And so the early customer base was database customers. Database and middleware. Mostly database. But that has then evolved quickly, and so what happened was, people would say "Look, I have a thousand servers, a hundred run Oracle, so we'll run Oracle Linux on those hundred, and we'll run something else on those other nine-hundred." Now after a year or so, they realize that our support is really good; we fix all these issues, and so then they're like "Why are we having two Linux distributions? This thing works really well, it runs any application, it's fully compatible, so we'll do a thousand with Oracle Linux." And so in the early days, the first few years, Oracle Database was definitely the core driver, and then it sort of expanded to the rest of the estate. And over the years, we've added lots of features and functionality, like Ksplice, and so forth. We have an attractive pricing model for running on servers, and so now lots of our customers have a very small Oracle percentage running and many other things running. So it's really become an all-or-nothing play in the Linux space, and we're well-known now, so it's actually very good. >> You just mentioned Ksplice. We've been talking about cloud, and on-prem, and hybrid. Let's talk about security, because security really is a differentiator, particularly if you're going to start to put stuff in the cloud. Talk about Ksplice specifically, but generally security and your policy there. >> So, "Security first" is sort of what you hear us say and do, in everything we do. Security obviously matters for the database, and on the Linux side security matters just as much. Ksplice as a technology is there to do critical bug-fixing and make sure that we can apply security vulnerability fixes without affecting the customer, and not have downtime. And if you look at most of the cases, or many of the cases, where you have security vulnerabilities and exploits, it tends to be because systems were not patched. Why were they not patched? Well, it's not that our customers don't understand that it's important, but there's a whole train of events that needs to happen. You get notified that there's a security issue in your operating system or application. Then, well, an application typically means it's a multi-layered setup. So if you have to bring your database server down, then you first have to coordinate with the application users to bring the app server down, because that talks to the database. So to patch one system, you basically have to bring down the whole application stack. You have to negotiate with the DBAs, you have to negotiate with the app admins, you have to negotiate with the user. It takes weeks to do that and find time. Well, during that time, you're vulnerable. So the only way, really, to address security in a scalable way and reduce that window of time is to do it without affecting the customer. And so Ksplice is something that, it's a company we acquired in 2009, and have since evolved in terms of capabilities, and so it allows us to patch the Linux kernel without downtime. We lock the kernel for 8 microseconds. It's literally no downtime. You don't have to bring down applications, the user doesn't see it, there's no hang, there's no delay. 
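To make that concrete from an administrator's point of view, here is a minimal sketch of how one might compare a host's booted kernel with its Ksplice-patched effective kernel. It assumes an Oracle Linux host with the Ksplice Uptrack client installed; uname, uptrack-uname, uptrack-show, and uptrack-upgrade are the relevant commands, but exact flags and output formats vary by version, so treat this as an illustration rather than a supported tool.

```python
#!/usr/bin/env python3
"""Sketch: compare a host's booted kernel with its Ksplice-patched effective kernel.

Assumes an Oracle Linux host with the Ksplice Uptrack client installed
(uptrack-uname, uptrack-show, uptrack-upgrade). Flags and output formats
may differ by version; this is an illustration, not a supported tool.
"""
import subprocess


def run(cmd):
    """Run a command and return its stdout, stripped."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()


def main():
    booted = run(["uname", "-r"])             # kernel version the machine booted with
    effective = run(["uptrack-uname", "-r"])  # version after in-memory Ksplice updates

    print(f"booted kernel:    {booted}")
    print(f"effective kernel: {effective}")
    if booted != effective:
        print("Live patches are active; the host is newer than its last reboot suggests.")

    # List the Ksplice updates currently applied in memory.
    print(run(["uptrack-show"]))

    # Applying any outstanding updates would look like this (left commented out here):
    # run(["uptrack-upgrade", "-y"])


if __name__ == "__main__":
    main()
```

On a host where Ksplice has been applying fixes, the two version strings diverge even though the machine has not rebooted, which is exactly the patched-for-years-without-a-reboot situation he describes next.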
And so by doing that, you can run a Linux operating system, Oracle Linux, and you can be fully patched on a system that hasn't rebooted for 3 years. You don't even know it. And so by doing that type of stuff, it makes customers more secure, and it avoids them-- it saves them a lot of money in terms of dealing with project management and so forth, but it really keeps them secure. And so we do that for the Linux kernel, we do that for some of the libraries on top that are critical, like OpenSSL and glibc, and, you know, one example-- I can give you two examples. So one example is, Heartbleed was this bug in OpenSSL a number of years ago. And so everyone had to patch their SSL servers. And that meant, basically, systems around the world had to reboot. Like a whole IT reboot across the world. With Ksplice today, if Heartbleed were to happen tomorrow, we would be able to patch this online for all the Oracle Linux customers without any downtime. No reboots, no restarting of applications, everything keeps running. The amount of money saved would be massive, and also, of course, the headache. Another example, and this was in Oracle Cloud, was some of these CPU bugs that happened a few years ago that were rather damaging on the cloud side, where you could potentially see memory of other machines running, which in the cloud is incredibly critical. We were able to basically patch our entire cloud in four hours. And the customer didn't know, right, a hundred and twenty million patches, or something, that we applied within four hours, all online, without any downtime. And so that technology has been really helpful, both for us to run our cloud, and the exact same patches and same fixes go to customers on-premises as well. But this comes back to the whole idea that what we do in cloud we also do for customers. And I think that's a unique thing that we have at Oracle which is quite fascinating. The operating system we run for our customers, the operating system that's the host for the VMs, is the exact same binary and source code that we make available; just to be clear, the exact same binaries are the ones that you run as a customer on-premises. So if you run Oracle Linux with KVM and you run VMs, you're actually running the exact same stuff as we run underneath our customers' stuff. Nobody else does that, everyone else has a black box. So I think that helps a little bit with transparency as well. >> Yeah, and that homogeneity just creates an environment, you're talking about that sort of security mindset, it's critical, you're not just bolting it on, it's part of the culture. But you started your career, and then of course you were a Linux person when you came to Oracle, but then I think you spent some time in database, back in the day when there were serious database wars going on, before Oracle became the king of database. So now you've got, obviously, this great portfolio, and a lot of really sharp software developers; what should we expect going forward, from Oracle? What should we look for? >> You know, I was welcoming some interns to the company for their summer internship yesterday, and one of the things I mentioned to them was that-- so cloud obviously gives us a lot of opportunities, but there's a number of things. One is, we have such a breadth of applications and software and hardware together. 
We have the servers, we have the storage, we have the operating systems, we have the database layer and so forth, and we have the cloud side, and one of the great opportunities, and I think we've shown a lot of this happening with the ability to create something like Autonomous Database, is to combine all these things. Right, we have such a broad portfolio of really cool technology that by itself is okay, but if you combine the things it really becomes awesome. You cannot create Autonomous Database without having Autonomous Linux. You cannot create those two and make them really safe without also controlling the firmware on the hardware and so forth. So by being able to combine all these layers, and by having a really great relationship across the teams within the company, that opens up a lot of opportunities to do stuff really quickly. And having the scale for that. I think that has been, for the last few years, a really great thing, but I can see that being one of the advantages that we have going forward. We have Oracle Fusion Applications, which is incredibly popular, and has great growth, and then we have that running on Oracle Cloud, that talks to Oracle Autonomous Database, so we bring all these pieces together. And no other SaaS vendor can do that, because they don't have these other pieces. They have one area, we have all of them. And so that's the exciting part for me, it's not so much about making my own world better, and having Linux be better, and Ksplice and so forth, which is important, but about that becoming part of the bigger picture. And that's the exciting part. >> Well, Oracle's always invested in R&D, we've made that point many, many times. Whether it's database, you know Fusion was a painful but worthy effort, the whole public cloud piece, obviously many acquisitions, but also the investments that you've made in open-source. Wim, you're a great spokesperson, and a great representative of the open-source community generally, and of Oracle specifically, so thanks very much for coming on theCUBE and sharing with us the state of the penguin, and best of luck. >> You're welcome. Thank you, thanks for having me. >> Alright, and thank you for watching, everybody. This is Dave Vellante for theCUBE. We'll see you next time. (cheerful music).
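One thread from the conversation worth making concrete is the fleet-scale anomaly detection Coekaerts describes: train where the data is plentiful (the cloud fleet), then apply the resulting model to individual on-premises hosts. The sketch below is a deliberately simplified, hypothetical illustration of that split; the metric names, the values, and the Isolation Forest model are stand-ins for the kind of pipeline described, not Oracle's actual implementation.

```python
"""Sketch: train an anomaly detector on fleet-wide telemetry, score one host locally.

Hypothetical illustration of the train-centrally / score-locally pattern from the
interview. Metric names, values, and the model choice are made up for the example.
"""
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for fleet-wide telemetry: one row per host sample,
# columns = [iowait %, packet retransmit rate, dirty pages (MB)].
fleet_samples = rng.normal(loc=[5.0, 0.1, 200.0],
                           scale=[2.0, 0.05, 50.0],
                           size=(100_000, 3))

# "Cloud side": train on the large corpus, where the data actually lives.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(fleet_samples)

# "On-prem side": a single host scores its own recent samples with the shipped model.
host_recent = np.array([
    [4.8, 0.12, 190.0],    # close to the fleet norm
    [38.0, 2.50, 950.0],   # heavy iowait and retransmits; likely flagged
])
for row, label in zip(host_recent, model.predict(host_recent)):  # +1 normal, -1 anomaly
    print(row, "anomaly" if label == -1 else "normal")
```

The point of the split is the one made in the interview: the model is only useful because it was trained against a very large number of systems, something an individual on-premises installation could not do from its own data alone.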

Published Date : May 26 2020



Wim Coakerts, Oracle | CUBE Conversation, May 2020


 

>> Announcer: From theCUBE Studios in Palo Alto in Boston, connecting with thought leaders all around the world. This is a Cube Conversation. >> Hi everybody, this is Dave Vellante and welcome to this Cube Conversation. Really excited to have Wim Coekaerts and he is the senior vice president of software development at Oracle. Wim, it's great to have you on. And you know what I often say I wish we were face to face but if we were you'd have to cut off my tie 'cause developers and ties just don't go together. >> No, I know, and this is my normal outfit so this is me, wherever I go. Hi again, good to see you. >> Yeah, great to see you. So of course, you know a lot of people are confused about Oracle and open source. They say, Oracle, open source? What is that all about? But I think you misunderstood. People don't first of all realize you as the leader of the software development community inside of Oracle, I mean, you've been involved in Linux since the early '90s but you guys have a lot of committers. You do a lot, I want to talk about that. What is up with Oracle and open source? >> Well, it's a broad question. So you know, a couple of things. One is we have many different areas within the company that are dealing with open source, right? So we have the cloud team doing a lot of stuff around the cloud SDKs and support for different languages like Python and go and of course Java and so forth. So they do a lot around ensuring that the Oracle ecosystem is integrated in the open source tools that customers use, or developers use Terraform, so on and so forth. And then you have the Java team, and so of course Java is open source. And then, the Graal project, GraalVM, which is a polyglot compiler that run Java and Python and JavaScript and so forth together in one VM, do really cool optimizations, that's an open source project. Also on GitHub, there's of course MySQL which is along with Java, they're probably the two most popular and widely used open source projects out there. There's VirtualBox which is of course also a very popular project that's open sources is all the work we do around Linux. And I think one of the things is that when you have so many different areas doing things that are for that area, then as a developer or as a customer, you typically just deal with that group and what you see is, oh, you're talking to the Java developers so you know what's going on around Java. The Java developers might not necessarily say, oh, and we also do MySQL and we do Linux and VirtualBox and so forth. And so you get sort of a rather myopic narrow view of the larger company. When you add all these things up and there would be one big slide that says, "This is Oracle, these are all these open source projects there". And there's multiple ways, right? One is we have projects that we've opened sourced and all the code came from us and we made it publicly available. We are the main distributor and we get contributions back. There are other projects where we contribute to third party in terms of enhancing things, like a separate the cloud team. And then in general, something like Linux where, you know, we're part of an external project and we participate in the development of that project at large. And so there's these three different ways when you count up all the developers that we have that deal with open source on a daily basis and in terms of contributions, in terms of both Pyxis testing and so forth, it's thousands, literally, full time developers. 
And of course all the projects is on GitHub or similar sites that are very popular. So yeah, I think the misunderstood is probably a lack of knowledge of the breadth of what we do. And our primary goal is to provide services and products to customers. And so the open source part is sort of embedded in the development methodology, but that's not something we sell or market separately. We just work with customers and products and services. And so in some cases it's not well understood. >> Yeah, well, we're talking, of course we're talking about the state of the Penguin. I think it's part of what people understand. I mean, Oracle got into the Linux game, in the '90s, maybe the latter part of the '90s and Oracle of course wants to make Linux, wants to make Oracle its applications and database run better on Linux. But as you're pointing out you're Linux distro, full support, end-to-end, thousands of people in your open source community and the contributions that you make to Linux, many if not most, they go upstream, everybody can benefit from those. But of course you want an Oracle distro that is going to make Oracle stuff run better. That's always kind of been the Oracle way. >> Well, so yes, two things. The one is that, so everything we do is upstream. So we have no Linux patches that are not contributed upstream. There's no proprietary code in Oracle Linux at all. It's all completely open, publicly available. The source code, the change log, all the commits, everything. It's fully open and public, which sometimes is not well understood, but it's completely open. And everything we do in terms of feature development or functionality or bug fixes goes upstream to the Linux kernel mailers. It's actually, it's the only way to be able to manage a Linux distribution and be a Linux vendor is to live in that ecosystem. Otherwise, the cost of maintaining your own forks so to speak is very high and it doesn't really solve problems. Now the functionality we worked on obviously is focused on making Oracle products run better, making Oracle cloud run better and so forth. However, again, what's important to understand though is an Oracle database is a program running on an operating system that does IO, it does networking, it does memory, it deals with memory management, lots of processes. So for the most part, the things we work on to improve that, helps everyone out, right? It helps every other database run better or it helps every other language run better. So none of these changes are specific to Oracle. They're just things that we found doing performance benchmarks and testing and so forth. But we say, "Hey, if Linux did the following, it would make boot up fast." Now boot up has nothing to do with the database. But if our customers run on one terabyte, four terabyte, eight terabyte systems, and so booting up and Linux starting up and cleaning up memory takes a long time. So we want to reduce that from an availability point of view. So here we're now talking about just enterprise, right? And so there's this broad set of things we work on that definitely help us, but they're actually really completely generic and help everyone customer. >> Yeah, that's great, good. So I wanted to kind of get that out of the way and help our audience understand it. So let's get into it a little bit. What are you seeing, what's going on in IT? Pick your observation space and your vision of what you see happening out there? >> Well it's very interesting. There's sort of two worlds, right? 
There's the cloud world and move to cloud and there's the on-premise world where people run their systems on their own. And one of the things that we've learned is, when you talk about machine learning obviously is something that's very popular these days and automation. And so in order to rely on machine learning well and have algorithms that are very effective, you need lots of data. And so being a cloud vendor and having Linux in our cloud on tens of thousands or hundreds of thousands of servers or more allows us to have a view of how an operating system works across incredibly large scale. So we get lots of data and so for us to figure out which algorithms work well in terms of, how can we do network customizations, how can we discover anomalies on the storage side and deal with it and so forth, we can do that at scale. And what's interesting is how do we then bring that to on-prem? Well, if we can get the data and the learning done the training done in our cloud directly, then when we provide that service also to people running Oracle Linux on-premises, then that will work. The alternative is to have point solutions where you provide something to a customer and he needs to learn something from small amounts of data. That doesn't work so well. So I think having both worlds on-prem and cloud directly allows us to kind of benefit from that. And I think that's important because lots of customers are interested in going to cloud. Many of the enterprises have not yet, you know, they're starting, but there's still a huge on-premises space that's important. And so by being able to get them familiar with how these things work at scale, autonomy is again important, right? Autonomous database is incredibly popular and so forth. That allows us to then say, "Here, try these things out here. "It's a service, we can show you the benefits right away". And then as that improves, we bring that on to a certain extent on-premise as well and then they can have it in both places. And that I think is something, again, that's relatively unique but also very important is that we want to create an... we want to provide services and products that act similarly on-premises as well as the cloud. Because at some point when people move, we want to make that transition seamless. And what you have today for the most part is one world that's on-prem and then the cloud world is completely different and that is a big barrier of moving. And so we want to reduce that. You can run the same operating system local as well as cloud, you can get the same banality and then that helps transition people over much easier. >> Yeah, well, Oracle actually was one of the... I think but Oracle was the first company to actually market same-same, you actually use that term. Others put forth that concept, but Oracle was the first to announce products like cloud to customer that was same-same now it took some time to actually get it perfective and get it to market. But the point is, and we've written about this is that Oracle, because of the ascendancy of cloud flipped and has a cloud first mentality and you just kind of referenced that you just said, "And you can bring that to on-prem". So I wonder if you could talk about that cloud first mentality and the impact on hype? >> So yeah, I think the clouds first part is of course in cloud we work on services more so than products that we deliver and there's a number of things that are happening. 
So one is we obviously continue to provide products across you can download Oracle Linux, you can download the database in web blog, you can install it on your own, right? You can do to the traditional way of working. Then in a cloud world, what typically happens is, oh, I use a database service. I'm not installing anything. I push a button and I get an IP address and a SQL, and a connect string and connect to a database. And we take care of everything underneath the database. Now, in order to do that, you need to hold infrastructure in place, right? You need lugging agents, you need a backend that captures all that stuff, you need monitoring tools, you need all the automation scripts for bringing this service up and monitor it. And so that takes a lot of time to do, right? And we learned a lot by doing this. And so the cloud first part of the services means that we get to experience this ourselves with direct access to everything. Now taking that service with all of the additional features like autonomy and bringing that to an on-premises world, we have to make sure we can package that so that all these pieces around it go along with it. And that takes a little bit more time, so we can't do everything at the same time. And so what we've done with autonomous database is we created everything in Oracle cloud, you have the whole system running really well. And then we've been able to sort of package that and shrink it into something that can be installed on-premises but then connected into Oracle cloud again. And so that way we can get all the telemetry, all the metrics, and that allows us to scale because part of providing a cloud service that runs on-prem in the customer environment is that we need to be able to remotely manage that, similar to how we manage something that runs in their own cloud, right? Otherwise it doesn't scale. And so that takes a little bit of time, but we've done all that work and now we've got our customer database that that's really in place. >> Yeah, you really want to have that same cloud experience, whether it's on-prem, in the public cloud, hybrid, et cetera. So I want to explore a little bit more. Who is using Oracle Linux and what's the driver for using it? Can you describe maybe some of the types of customers and why they buy? >> Sure, so we started 14 years ago, right? 2006, October 25th, 2006 (giggles). I remember that day very well. Penguin's on stage and a big launch for Linux in San Francisco Moscone Center. So look, the initial driver for Oracle Linux was to ensure that Oracle database customers or Oracle product customers had a good operating system experience, right? And the ability to be able to handle critical issues when that occurs because typically a database runs the company's critical data. The most essential stuff that a company has is typically in a database, in Oracle database. And so when that thing has issues with the operating system, you don't want just to talk to multiple vendors and have finger pointing and having to explain to an operating system vendor how the database works. In the Unix world, we had a glitch relationship with the OS vendors and the hardware vendors. They were the same. And they knew our products really well, and in the Linux world that was very different. The OS vendor basically did not want to understand or learn anything about products living on top. And so, while, to a certain extent, that makes sense. It's an enterprise world where time is of the essence and downtime needs to be limited absolutely. 
We can't have these arguments and such. And so that was the driver initially for doing Oracle. So it was to ensure there was a Linux distribution really backed by us that we could fix and we could fully support, right? That was completely the original intent. And so the early customer base was database customers. Database and middleware, mostly database. So but that has then evolved quickly, and so, (clears throat) sorry. What happened was, people would say, "Look, have a thousand servers, a hundred run Oracle, "so we'll run Oracle Linux on those hundred "and we run, something else on those other 900." Now after a year or so, they realized that our support was really good. We fixed all these issues and so then they're like, "Why are we having two Linux distributions? "This thing works really well. "It's runs any application, it's fully compatible. "So we'll just go a thousand with Oracle Linux". And so the early days, the first few years was definitely Oracle database as the core driver and then it sort of expanded to the rest of the estate. And over the years (clears throat), we've added lots of features and functionality, like Ksplice and so forth. We have an attractive pricing model for running on servers. And so now lots of our customers have a very small Oracle percentage running and many other things running. So it's really become a all or nothing play in the Linux space and we're well known now, so it's been actually very good. >> You just mentioned Ksplice. I mean, we've been talking about cloud and on-prem and hybrid and let's talk about security because security really is a differentiator but particularly if you're going to start to put stuff into the cloud. Talk about Ksplice specifically, but generally security and your policy there. >> So security first is sort of what you hear us say and do in everything we do, right? The database obviously security on the Linux side, security matters, Ksplice as the technology is there to do critical bug fixing and make sure that we can apply security vulnerability fixes without affecting the customer and not have downtime, right? And if you look at, most of the cases or many of the cases where you have security vulnerabilities and exploits, it tends to be because systems were not patched. Why were they not patched? Well, not that a customer doesn't understand that it's important, but it's a whole train of events that needed to happen. You have to get notified that there's a security issue in your operating system or application. Then, well, an application typically means it's a multi-tiered set up. So if you have to bring your database server down, then you first have to coordinate with the application users to bring the app server down because that talks to the database. So to patch one system, you basically have to bring down all application stacks. You have to negotiate with the DBAs, you have to negotiate with the app admins, you have to negotiate with the user. It takes weeks to do that and find time. Well, during that time you're vulnerable. So the only way really to address security in a scalable way and reducing that window of time is to do it without effecting the customer, right? And so Ksplice is something that... It's a company we acquired in 2009 and have since evolved in terms of capabilities. And so it allows us to patch the Linux kernel without downtime, right? We lock the kernel for a microsecond, so it's literally no downtime. You don't have to bring down applications. The user doesn't see it. 
There's no hang, there's no delay. And so by doing that, you can run the Linux operating system, Oracle Linux, and you can be fully patched on a system that hasn't rebooted for three years and you don't even know it. And so by doing that type of stuff, it makes customers more secure and it avoids them... It saves them a lot of money in terms of dealing with project management and so forth, but it really keeps them secure. And so we do that for the Linux kernel. We do that for some of the libraries on up that are critical, like OpenSSL and glibc. I can give you two examples. So one example is Heartbleed, which was this bug in OpenSSL a number of years ago, and everyone had to patch their SSL servers. And that meant, basically, systems around the world had to reboot, like a whole wave of reboots across the world. With Ksplice today, if Heartbleed were to happen tomorrow, we would be able to patch this online for all the Oracle Linux customers without any downtime. No reboots, no restarting of applications, everything keeps running. The amount of money saved would be massive, right? And also, of course, the headache. Another example is, (clears throat) and this was in Oracle Cloud, when some of these CPU bugs happened a few years ago that were rather damaging on the cloud side, right? Where you could potentially see memory of other machines running; in the cloud that's incredibly critical. We were basically able to patch our entire cloud in four hours and the customer didn't know, right? 120 million patches or something that we applied within four hours, all online, without any downtime. And so that technology has been really helpful for us to run our cloud, but the exact same patches and same fixes go to customers on-premises as well. But this comes back to the whole idea that what we do in cloud, we also do for customers, and I think that's a unique thing that we have at Oracle, which is quite fascinating, right? The operating system we run for our customers, the operating system that's the host for the VM, is the exact same binary and source code that we make available, just to be clear. The exact same binaries are the ones that you run as a customer on premises. So if you run Oracle Linux with KVM and you run VMs, you're actually running the same stuff as we do for our... that we run underneath our customers' stuff. Nobody else does that. Everyone else has a black box. So I think that helps a little bit with transparency as well. >> Yeah, and that homogeneity just creates an environment where, as you're saying, the security mindset is critical. You're not just bolting it on, it's part of the culture. Look, you know, you started your career, and of course you were a Linux person when you came to Oracle, but I think you also spent some time on the database back in the day, when there were some serious database wars going on, before Oracle became the king of database. So now you've got obviously this great portfolio and a lot of really sharp software developers. What should we expect going forward from Oracle? What should we look for? >> I was welcoming some interns to the company, (clears throat) for their summer internship yesterday. And one of the things that I, (clears throat) I'm sorry. One of the things I mentioned to them was that, one of the... So cloud obviously gives us a lot of opportunities, but there's a number of things. One is we have such a breadth of applications and software and hardware together, right?
We have the servers, we have the storage, we have the operating systems, we have the database layer and so forth, and we have the cloud side. And one of the great opportunities, and I think we've shown a lot of this happening with the ability to create something like autonomous database, is to combine all these things, right? We have such a broad portfolio of really cool technology that by itself is okay, but if you combine the things, it really becomes awesome, right? You cannot create autonomous database without having autonomous Linux, right? You cannot create those two and make them really safe without also controlling the firmware on the hardware and so forth. So by being able to combine all these layers, and by having a really great relationship across the teams within the company, that opens up a lot of opportunities to do stuff really quickly and to have the scale for that. I think that has been a really great thing for the last few years, but I can see that being one of the advantages that we have going forward, right? We have Oracle Fusion Applications, which is incredibly popular and has great growth. And then we have that running on Oracle Cloud, talking to our autonomous database. So we bring all these pieces together, and no other SaaS vendor can do that because they don't have these other pieces. They have one area, we have all of them. And so that's the exciting part for me, basically... It's not so much about making my own world better and having Linux be better and Ksplice and so forth, which is important, but that becoming part of the bigger picture. And that's the exciting part. >> Well, Oracle has always invested in R&D. We've made that point many, many times, whether it's database; Fusion was a painful but worthy (giggles) effort. The whole public cloud piece, obviously many acquisitions, but the investments that you've made in open source as well. Wim, you're a great spokesperson and a great representative of the open source community generally, and of Oracle specifically. So thanks very much for coming on theCUBE and sharing with us the state of the Penguin. The best of luck. >> You're welcome. Thank you, thanks for having me. >> All right, and thank you for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time. (soft music)

Published Date : May 19 2020


Paul Cormier, Red Hat | Red Hat Summit 2020


 

>> From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >> Hi, I'm Stu Miniman and this is theCUBE's coverage of Red Hat Summit 2020. Of course this year the event is virtual. We're bringing all the people on theCUBE from where they are, and really happy to bring back to the program one of our CUBE alumni, Paul Cormier, who is the president and CEO of Red Hat. Of course you did the keynote, and you and I spoke ahead of the show. Paul, great to see you, and thanks so much for joining us. >> My pleasure, always great to see you Stu. My pleasure. >> All right, so Paul, a lot has changed since the last time we got together for Summit. One thing's stayed the same though. The big theme I heard in your keynote, you talked about open hybrid cloud, of course. We've been talking about cloud for years; when you ran the product team, making Red Hat go everywhere is something that we've watched, you know, that move. Is anything different when you're talking to customers, when you're talking to your product teams, when you think about the times we're in? Why is open hybrid cloud not a buzzword but hugely important in the times we're facing? >> Because the big premise of open hybrid cloud is that cloud has become part of people's infrastructure. I've seen very few, if any, true enterprise customers that are moving everything, every app, to one cloud. And so I think what people really realized, once they started implementing clouds as part of their infrastructure, was that you're always going to have applications that are running bare metal. Some are in virtual machines, maybe on top of VMware; it might have been a private cloud. And not many people are saying, you know what, the public clouds are all so different from each other, I might want to run one application for whatever reason in one and a different one in another. I think they started to realize the actual operational cost of that, the security cost of that, and even more the development cost of that from the application perspective. Now you have five silos up there, and that's so costly. So our whole premise since the beginning of open hybrid cloud has been to give you that level playing field, to have those things all the same no matter where the application runs, whether that's bare metal, virtual machine, private cloud or multiple public clouds. And so in the long run, as customers start to really go to cloud-first application development, they can still manage that under one platform in a common way, manage, develop and secure it, but at the same time they can manage, develop and secure their legacy applications that are also on Linux in the same way. So I think in the long run it really brings it together and saves money and efficiency in those areas.
There's probably two most misunderstood things with open. So the first thing is that open source is a development model, first of all. I always say it's a verb, not a noun; I even say that internally and externally. We're not an open source company, we're an enterprise software company with an open source development model. So if you think about that, that's what's really important. Why is the open source development model so important? It's important because everyone has the same opportunity in terms of the features within the code, everyone has the same opportunity to contribute. The best technology wins. That's how it works in the upstream community. It's not a technology driven by one company that may have a one-company agenda. It's really a development process that allows the best technology to win, and I think that's one of the main things, and one of the main reasons why you see all the innovation, frankly, in the last five years around infrastructure and development and the associated pieces and tools around that being in and around Linux. Because Linux was available, it was powerful, it was open. When people wanted to develop Kubernetes, for example, they had to make changes to the Linux kernel in order to do that, and it worked because they could. And so those are the things that make it really important as a development model, and I think those are the things that get confused a lot. The other thing that gets confused is a lot of people think that, "Hey, if I have this great technology and I just open-source it, it'll all just work, everyone will come." Now that's not the case. The projects that really succeed from an open-source perspective are the ones solving problems that are common and horizontal across a big group of people, so they're trying to solve similar problems. And that's one of the things that we found: as you go further up the stack, typically the less community is involved. It's the horizontal layers where you need it; whether you're in banking or retail or telco or whatever, they're all the same. Those are the pieces where open source really fits well. >> Alright, so the second piece, you talk about hybrid. I think back to the early days, Paul, when cloud was first defined and we talked about public and private cloud, we had discussions of hybrid cloud and multi-cloud, and the concern that I have is it was very much an infrastructure discussion, and it was pieces. And the vision that we always have is, for customers to actually get value, the total solution needs to be more valuable than the sum of its parts. So it's really about hybrid applications, about where my data lives. So do you agree with some of those things I'm saying? How does Red Hat look at it? From your team I do get lots of the application and app dev discussion, which I always find even more meaningful than arguing over ontologies of how you build your cloud. >> Everything you said is all about the application. If you look at just where we started with Linux alone, what did Linux bring to the enterprise when we first started? Really, you and I talked about this earlier; that was the thing that really opened things up.
The enterprises started buying Linux, right? They started buying Linux for $29.95 at the bookstores, but when I first came on board we talked to some of the banking customers, and they said, well, we love this technology, but every time you guys change a release my applications break, or when I get new hardware it doesn't work, et cetera. So it's all about the application. Linux has gotten better at that all the time, from the beginning. What hybrid really means here is that I can run that seamlessly across wherever that footprint is going to live, and so I think that's also one of the things that gets confused a bit. When the cloud first started, the cloud vendors were telling people that every application was going to move to one cloud tomorrow, right? We knew that was not practical. That's the other thing about open-source developers, we look at it from a practical perspective. Look back at 2007; I just looked at this to prepare for the note I just put out to the company. Back in 2007 at the Summit I talked about any application, anywhere, anytime. That's really the essence of what hybrid is here. So what we found is that it's impractical for every application to move to one cloud. And so cloud is powerful, but it's become part of people's development and operations and security environment. So now, as we stitch that in, we make that common for those three things, for the operations, security and application development worlds, and that's where the power is. So I see the day where application developers and application users won't know or care what platform the back-end data is coming from for whatever applications they're writing. They shouldn't care; that should just happen seamlessly under the covers. But having said that, that complicates things, and that's why management needs to be retooled with it as well. Sorry, but I could talk about that for three days, right? >> Yeah, so as an industry we kind of argue about these things and everybody feels that they understand the way the future should look. So Paul, for a number of years it was, "we're going to build this stack and let's have the exact same stack here and there." There were some of the big iron companies that did that a few years ago; now you see some of your public cloud partners saying, "we can give you that same experience, that same hardware, all the way down to the chip level, things are going to be the same." When I look at software companies, there's two that come to mind that live across dispersed environments. One is very much from a virtualization standpoint; they designed themselves to live on any hardware out there. Red Hat has a slightly different way of looking at things, so what's your take on kind of the stack, and why does hybrid, in that hybrid cloud model that you're building, probably look and sound and feel different from, I think, almost anybody else out there? >> Well, the cloud guys, they all have similar technologies underneath. I mean most of it, not all of it, is based on Linux, but they're all different. I mean, remember the UNIX days? I'm old enough to remember the UNIX days.
That was the goal back then, but like each hardware vendor did, each cloud vendor is now taking that Linux, or the associated pieces with it, and they have to make their changes to adapt to their environment, and some of those changes don't allow for applications to be portable outside that environment. That's exactly like the OEM world of the past, and I know some people hate it when I make this comparison, but I really look at the cloud guys as a mainframe. And certainly the mainframe has brought, and still does bring, a ton of value to a certain customer base. And so if you're going to keep your application in that one place, a mainframe, or call it a mainframe mentality, will always stitch it together better, but that's not the reality of what customers are trying to do out there. So I really think you have to look at it that way. It's not that much different in concept, anyway, to the OEM days from when they started running Linux. And the thing that Red Hat's done that some of the others haven't, take VMware for example: VMware has no pieces that touch the application. I mean, they have some now, they had Photon, they had some of the other pieces that sort of tried to touch the application, but at the end of the day we always concentrated on Linux, and especially from a Red Hat perspective, on keeping the environment the same, both from an application perspective and from a hardware perspective. Certainly when an application runs in the cloud we don't have to worry about the hardware anymore, but we still have to worry about the application, and businesses are all about the application, and so we always took that tack from both sides. I think that's one of VMware's weaknesses, frankly: applications don't run on hypervisors, they run on operating systems, and when I say operating systems I include containers, because a container is a Linux operating system. >> Yeah, Paul, a lot of good points you brought up there, and it's interesting, the mainframe analogy. In the early days of cloud there were some that would throw stones and say, you're rebuilding the mainframe and you're going to be locked in. So I'd love to get your thoughts on what's happening in application development, the rise of, as you talked about, containers and Kubernetes, serverless is out there, and there's that concern of "we want to enable the application developers but we don't want to get locked into some platform." Talk about Red Hat's role, how your products are helping the shift, helping customers make sure that they can take advantage of some of these new ways of building, maintaining and changing without being stuck on any specific platform or technology. >> Well, first of all, I believe, and I'm sure I will be corrected on this, but we really are the only company that I can think of at this moment that is a hundred percent open source. Everything we do, when our products go out, is open source based and goes back upstream to the community for everyone to take advantage of, so that's the first thing. I mean, the second thing is, one of the big fallacies is, open source has become so popular that people are confusing upstream projects with downstream products. And so for us, I'll use us as an example, I'll use Linux and I'll use Kubernetes as an example. The Linux kernel: we all build from the Linux kernel, us, SUSE, Ubuntu, we all build from the Linux kernel, but at the end of the day we all make choices when we bring that upstream work down to become a product.
In our case, for RHEL, we go from Fedora to CentOS to RHEL. We all make choices: which file systems we're going to package, what development environment we're going to package, what packages we're going to package. And so when we get down to what gets deployed in the enterprise, those choices are what make the difference of why RHEL is slightly different from SUSE Linux, which is slightly different from Canonical's Ubuntu, but they all come from the same heritage. The same is the case with Kubernetes. There's this sort of fallacy that Kubernetes is Kubernetes; the last time I checked there were 127 different Kubernetes vendors out there. They're all just going to magically work together? Yes, they all come from the same place, but we have to touch the user space, we have to touch the kernel, and so how you line that up in the life cycle of what the customers get is going to be different. We might be able to take different pieces from those 127 and make it work at one point, but the first time any of us makes a change, if it's not coordinated with the other side, it's probably going to break. And our life cycles go out 10-plus years, and so engineering that all together is something that makes it all work together as you upgrade, whether it be hardware or your applications. And so some people confuse that with not being 100 percent open. When we find a bug in RHEL, RHEL that's been out there for five years maybe, we give that fix back to the upstream community. That's open, it's out there. And so I think that's the part where, because open source has become so accepted now and so much part of the mainstream, we very much confuse projects with products, and that's one of the biggest confusion points out there. >> Yeah, really good points there, Paul. So when I think about some of the things we've heard over the years, in the original days it was, "Oh well, public cloud, Paul? I'm not going to need RHEL anymore, they've got Linux." Then Kubernetes has come along and Red Hat's had a really strong position, but you look at it and you say, "Okay, well if I'm most customers, if I'm doing Amazon, if I'm doing Google, if I'm doing Microsoft, I'm probably going to end up using some of their native services that they've got built in." Talk about how the role of Red Hat continues to change as you live in this multi-cloud environment, and I think it's kind of that intersection that you were talking about, open and compatibility, as opposed to, you're not saying that Red Hat's going to conquer the world and take down all the other options. >> Well, cloud providers bring a ton of value. I mean, the users have to be smart on how and when they use that value. If you truly are going to have a hundred percent of your applications in one public cloud, then you probably will get the best solution from that one public cloud. Serverless is a great example. If you're in Amazon and you spin up, via their services, serverless, that container that gets spun up is never going to run outside that cloud. If that's okay with you, that's okay with you. (Voice scrambles) The way we've gone about this, as I said, is to give you that seamless environment all the way across.
If you want to run just containers (voice scrambles) on one particular cloud vendor, and you want to be under their Kubernetes and it's never going to run in any other place, that's okay too. But if you're going to have an environment with applications that are on multiple cloud vendors' infrastructure, or even on your own, you're now going to have to spin up these different silos of that technology, even though the technology has the same heritage. So that's a huge operational and development cost as you grow bigger, in order to do that. And so our strategy is very simple: it's to give the developers, operations and security people that common environment to work across, and over time (voice scrambles) they shouldn't care where the services are coming from. It should just all work, and that's why you've seen things like automation being so important now. I mean, automation is our biggest growing business with Ansible right now, and part of the reason is, as people spread out to a container-based environment, applications may now spread across those different footprints. Maybe you want to have your front end... (voice scrambles) We have one of the RHEL customers in Europe that has the customer-facing front end of their ticketing system up in the public cloud, and they've got the back-end financial transaction database pieces that clear credit cards behind their firewall. That's really one application spread across containers. Do you want to have to manage the front end of that with one Kubernetes and the back end of that with a different Kubernetes? Probably not, and so that's really what we bring to the table as we've really grown with this new technology. >> Alright, so the final question I have for you, Paul. I'm actually going to get away a little bit from your background on the product piece to talk a little bit about just Red Hat going forward. So, we know for many years Red Hat has been much more than the Linux piece; you talk about automation, I've got some great interviews this week talking about some of the latest in application development, lots of open source projects, so many open source projects (laughing) nobody can keep them all straight. So as customers look at strategic partnerships, what is the role of Red Hat? And with now being under IBM, Jim Whitehurst steps over to become president there, Arvind of course had a long relationship and was the architect behind the Red Hat acquisition, what's the same and what's different as we think about Red Hat 2020 under your leadership? >> I think it's a lot of the same. I mean, I think the difference becomes, in the world we're in right now, sort of how we can help our customers come back into re-entry, right? And how that's going to be different from the past. (voice scrambles) We're working through that with many of our customers, and we think we can be a big help here, because we run their business today; they run their business on our platforms, and that's not going to go away for them. In fact, if anything, that's going to get even more critical for them, because they've got to get more automation to get just more efficiency out of it. So in terms of what we do as a company, that's not going to change at all. I mean, we've been on this path that we're on for a long time. I stand up in front of our sales kickoffs every year, in person and virtual as well, and I say, "Let me talk to you about the strategy." Guess what?
It hasn't changed much from last year, and that's a good thing, because these technology rollouts are multi-year rollouts, so we're going to continue on that. I mean, the other thing too is, our customers are moving many more of their workloads to the Linux environment, and so I think we can help them expand that as well. And I think from an IBM perspective, (voice scrambles) one of the big premises here from our perspective is to help us scale, because they're in the process of helping their customers move to these next-generation architectures and at the same time be able to support the current architectures, and that's what we do well. And so they can help us get to places that we wouldn't have had the time and the resources, maybe, to get to on our own, so we can expand that footprint even more quickly with IBM. So that's the focus right now: it's to really help our customers move to the next phase of this in terms of re-entry. >> Yeah, as I've heard you and many other Red Hatters say, Red Hat is still Red Hat, and definitely it's something that we can see loud and clear at Red Hat Summit 2020. Thank you so much, Paul. >> Thank you, Stu, nice to see you again. >> All right, lots of coverage from Red Hat Summit 2020. Be sure to check out theCUBE.net for the whole back catalog that we have of Paul, their customers and their partners, and thank you for watching theCUBE. [Music]

Published Date : Apr 28 2020


Sizzle Reel | KubeCon+CloudNativeCon EU 2019


 

>> Right, so with Kubernetes the history is we started off with only file systems; block is something very new within the past couple of releases that I actually personally worked on. The next piece that we're doing at Red Hat is leading the charge to create CRDs for object storage, so it's defining those APIs so customers can dynamically provision and manage their object storage. In addition, we recently acquired a company called NooBaa that does exactly that: they're able to have that data mobility through object buckets across many clouds, doing the sharding and replication. And that's super important because it opens things up for our customers to have image streams, photos, things like that that they typically use within an enterprise, and quickly move the data and copy it as they need to. >> So we noticed that more and more people want to run their workloads outside of the one centralized data cluster. The big term for the last year was the hybrid cloud, but it's not just hybrid cloud; people coming from the IoT user space also want to containerize their workloads and put the processing closer and closer to the devices that are actually producing and consuming the data, and to the users. There are a lot of use cases which should be tackled in that way, and as you said previously, Kubernetes has won developers' hearts and minds, the APIs are stable, everybody is using them, it will be supported for decades, so it's natural to try to bring all these tools and all these platforms that are already available to developers to tackle these new challenges. That's why last year we re-formed the Kubernetes at the edge working group, trying to start with the simple questions, because when people come to you and say edge, everybody thinks something different: for somebody it's an IoT gateway, for somebody it's a full-blown Kubernetes cluster, for some it's telco providers. So that's what we're trying to figure out, and we're trying to form a community, because as we saw previously in the IoT user space, complex problems like these are never solved by a single company. You need open source, you need open standards, you need the community around it, so that people can pick and choose and build a solution to fit their needs. >> Yeah, so I care a lot about diversity in tech, and women in tech more specifically. I feel like this community has a lot of very visible women, so when I actually looked at the number of contributors by men and women, I was really shocked to find out it was 3 percent. It's kind of disappointing. It's 3 percent of all the contributors to all the projects in the CNCF; if you look at the 36 projects and you look at the number of people who've made issues, commits, comments, pull requests, it's 3 percent women. And I think the CNCF has put a lot of effort into, for example, the diversity scholarships, bringing more than 300 people from underrepresented groups to KubeCon, including 56 here in Barcelona. And it has a personal meaning to me because I really got my start through that diversity scholarship to KubeCon Berlin two years ago, and when I first came to KubeCon Berlin I knew nobody, but just that little first step can go a long way toward getting people to feel like they're part of the community and that they have something valuable to give back. And then once you're in, you're hooked on it, and yeah, then
there's a lot of fun. >> I think the ecosystem may finally be ready for it, and I feel like it's easy for us to look at examples of the past, you know, people kind of shake their heads at OpenStack as a cautionary tale of sprawl and whatnot, but this is a thriving, which means growing, which means changing, which means a very busy ecosystem. But like you're pointing out, if your enterprises are going to adopt some of this technology, gee, they look at it, and everyone here was eating cupcakes or whatever for the Kubernetes 5th birthday. To an enterprise, just because this got launched in 2014, okay, June 2014, that sounds kind of new. We're still running that mainframe that is still producing business value, and actually that's fine. I mean, I think this maybe is one of the great things about a company like Microsoft: we are our customers. We also respect the fact that if something works, you don't just YOLO a new thing out into production to replace it. For what reason? What is the business value of replacing it? And I think that's why this kind of UNIX philosophy of the very modular pieces of this ecosystem matters, and we were talking about some of them a little earlier, but there's also, you know, Draft, Brigade, et cetera, like Porter, the CNAB spec implementation stuff, and Cloud Native Application Bundles, which is a whole mouthful. One of the things I like, and I've had a long history in open source too, is if there are things that aren't perfect or things that are maturing, a lot of times we're talking about them in public, because there is a roadmap and people are working on it and we can all go to the repositories and see where people are complaining. So at a show like this I feel like we do have some level of transparency and we can actually have realism here. >> I don't think we hear that as much anymore, because there is no more barrier to getting the technology. It's no longer, I get this technology from vendor A and I wish somebody else would support the standard; it's like, I can get it if I want it. I think the conversations we typically have aren't about features anymore. They're simply, my business is driven by software, that's the way I interact with my customer, that's the way I collect data from my customers, whatever that is. I need to do that faster and I need to teach my people to do that stuff, so the technology becomes secondary. I have this saying, it frustrates people sometimes, but I'm like, there is not a CEO, a CIO, a CTO that you would talk to that wakes up and says, I have a Kubernetes problem. They all go, I have this business problem, I have that problem, and it happens to be software. Kubernetes is a detail. >> Sure, I think NSM is just a first step. So Network Service Mesh is basically doing a couple of things. One is it is simplifying networking so that the consumption paradigm is similar to what you see at the developer L7 layer. So if you think of Istio and how Istio is changing the game in terms of how you consume layer seven services, think of bringing that down to the layer two and layer three layers as well. So the way a developer would discover services at the L7 layer is the same way we would want developers to discover networking endpoints or networking services or security capabilities, that's number one. So the language in which you consume needs to be simplified, whereby it becomes simple for a developer to consume. The second thing that I touched upon is we don't want developers to think about switches, routers, subnets, BGP, VXLANs, VLANs. >> For me,
I want to get a little bit more into the idea of multi-cloud. I've been making a bit of a stink for the past year with a talk called The Myths of Multi-Cloud, where it's not something I generally advise as a best practice, and I'm holding to that fairly well. But what I want to do is have conversations with people who are pursuing multi-cloud strategies and figure out, first, are they in fact pursuing the same thing, are we defining our terms and talking on the same page, and secondly, I want to get a little more context and insight into why they're doing that and what that looks like for them. Is it that they want to be able to run different workloads in different places? Great, that's fair. The same workload running everywhere, the lowest common denominator? Well, let's scratch beneath the surface a bit and find out why that is. >> Bob Wise and his team spent a ton of time working on the community, and the whole team does, right? We're one of the biggest contributors to etcd, we're hosting birds of a feather, and we've contributed back to a fair amount of community projects, and I think a lot of them are in fact around how to just make Kubernetes work better on AWS. That might be something that we built because of EKS, or it might be something like the cluster autoscaler, which ultimately people would like to work better with auto-scaling groups. I think we've had the community involvement, but I think it's about having a quiet community involvement, right? It's about chopping wood and carrying water, and being present and committing and showing up, and having experts and answering questions, and being present in things like SIGs, more than it is necessarily about having the biggest booth. >> So Joe, tremendous progress in five years. Look forward for us a little bit: what does Kubernetes in 2024 look like? >> Well, you know, a lot of folks like to say that in five years Kubernetes is going to disappear, and sometimes they come at this from the sort of snarky angle, but other times, I think it's going to disappear in the sense that it's going to be so boring, so solid, so assumed that people don't talk about it anymore. I mean, we're here at something where the CNCF is part of the Linux Foundation, which is great, but how often do people really focus on the Linux kernel these days? It is so boring, so solid. There's new stuff going on, but clearly all the exciting stuff, all the action, all the innovation is happening at higher layers, and I think we're going to see something similar happen with Kubernetes over time. What's exciting is being here. If you rewound five years and told me I'd be in Barcelona with 7,500 of my best friends, I would think you were crazy or from Mars. This is amazing, and I thank everybody who's here who's made this thing possible. We have a ton of work to do, and if you feel like you can't figure out what you need to work on, come talk to me and we'll figure it out. >> And for me, I just want to give a big thank you to all the maintainers, folks like Tim, but also some other folks whose names you may not know, but they're the ones slogging it out in the GitHub PR queue, trying to just make the projects work and function day to day, and were it not for their ongoing efforts we wouldn't have any of this. [Music]

Published Date : Feb 24 2020


Brian Gracely, Red Hat | KubeCon + CloudNativeCon EU 2019


 

>> Live, from Barcelona, Spain, it's theCUBE, covering KubeCon and CloudNativeCon Europe, 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Welcome back. This is theCUBE at KubeCon CloudNativeCon 2019 here in Barcelona, Spain. I'm Stu Miniman, my co-host is Corey Quinn and welcoming back to the program, friend of the program, Brian Gracely who is the Director of Product Strategy at Red Hat. Brian, great to see you again. >> I've been, I feel like I've been in the desert. It's three years, I'm finally back, it's good to be back on theCUBE. >> Yeah well, I feel like we've been traveling parallel paths a lot. TheCUBE goes to a lot of events. We do a lot of interviews but I think when you go to shows, you actually have more back-to-back meetings than we even do, so we feel you in the jet lag and a little bit of exhaustion. Thanks for making time. >> Yeah, it's great. I had dinner with you two weeks ago, I did a podcast with Corey a week ago, and now, due to the magic of the internet, we're all here together in one place. It's good. >> Absolutely. Well Brian, as we know at a show like this we all want to hold hands and sing Kubernetes Kumbaya. It's wonderful to see that all of the old fights of the past have all been solved by software in the cloud. >> They're all good, it's all good. Yeah, somebody said it's a cult. I think I heard Owen Rodgers said it's now officially a cult. Corey, you called it the Greek word for spending lots of money. >> Uh yeah, it was named after the Kubernetes, the Greek god of spending money on cloud services. >> So, Brian, you talk to a lot of customers here. As they look at this space, how do they look at it? There's still times that I hear them, "I'm using this technology and I'm using this technology, "and gosh darn it vendor, "you better get together and make this work." So, open-source, we'd love to say is the panacea, but maybe not yet. >> I don't think we hear that as much anymore because there is no more barrier to getting the technology. It's no longer I get this technology from vendor A and I wish somebody else would support the standard. It's like, I can get it if I want it. I think the conversations we typically have aren't about features anymore, they're simply, my business is driven by software, that's the way I interact with my customer, that's the way I collect data from my customers, whatever that is. I need to do that faster and I need to teach my people to do that stuff. So the technology becomes secondary. I have this saying and it frustrates people sometimes, but I'm like, there's not a CEO, a CIO, a CTO that you would talk to that wakes up and says, "I have a Kubernetes problem." They all go, "I have a, I have this business problem, "I have that problem, it happens to be software." Kubernetes is a detail. >> Yeah Brian, those are the same people 10 years ago had a convergent problem, I never ran across them. >> If you screw up a Kubernetes roll-out, then you have a Kubernetes problem. But it's entertaining though. I mean, you are the Director of Product Strategy, which is usually a very hard job with the notable exception of one very large cloud company, where that role is filled by a post-it note that says simply, yes. So as you talk to the community and you look at what's going on, how are you having these conversations inform what you're building in terms of Openshift? >> Yeah, I mean, strategy you can be one of two things. 
You can either be really good at listening, or you can have a great crystal ball. I think Red Hat has essentially said, we're not going to be in the crystal ball business. Our business model is there's a lot of options, we will go get actively involved with them, we will go scratch our knees and get scars and stuff. Our biggest thing is, I have to spend a lot of time talking to customers going, what do you want to do? Usually there's some menu that you can offer them right now and it's really a matter of, do you want it sort of half-baked? Are you willing to sort of go through the learning process? Do you need something that's a little more finalized? We can help you do that. And our big thing is, we want to put as many of those things kind of together in one stew, so that you're not having-- Not you Stu, but other stews, thinking about like, I don't want to really think about them, I just want it to be monitored, I want the network to just work, I want scalability built in. So for us it's not so much a matter of making big, strategic bets, it's a matter of going, are we listening enough and piecing things together so they go, yeah, it's pretty close and it's the right level of baked for what I want to do right now. >> Yeah, so Brian, an interesting thing there. There's still quite a bit of complexity in this ecosystem. Red Hat does a good job of giving adult supervision to the environment, but, you know, when I used to think when row came out, it was like, okay, great. Back in the day, I get a CD and I know I can run this. Today here, if I talk to every Kubernetes customer that I run across and say okay, tell me your stack and tell me what service measure you're using, tell me which one of these projects you're doing and how you put them together. There's a lot of variation, so how do you manage that, the scale and growth with the individual configurations that everybody still can do, even if they're starting to do public clouds and all those other things? >> So, it's always interesting to me. I watch the different Keynotes and people will talk about all the things in their stack and why they had problems and this, that, and the other, and I kind of look at it and I'm like, we've solved that problem for you. Our thing is always, and I don't mean that sort of boastfully, but like, we put things together in what we think are pretty good defaults. It's the one probably big difference between Openshift and a lot of these other ones that are here is that we've put all those things together as sort of what we think are pretty good defaults. We allow some flexibility. So, you don't like the monitoring, you don't like Prometheus plugin splunk, that's fine. But we don't make you stand on your head. So for us, a lot of these problems that, our customers don't go, well, we can't figure out the stack, we can't do these things, they're kind of built in. And then their problem becomes okay, can I highly automate that? Did I try and make too many choices where you let me plug things in? And for us, what we've done, is I think if we went back a few years, people could say you guys are too modular, you're too plugable. We had to do that to kind of adapt to the market. Now we've sort of learned over time, you want to be immutable, you want to give them a little less choice. You want to really, no, if you're going to deploy an AWS, you got to know AWS really well. 
And that's, you know, not to make this a commercial, but that's basically what OpenShift 4 became: much more opinions about what we think are best practices, based on about a thousand customers having done this. So we don't run into as many of the pick-your-stack things, we run into that next-level thing. Are we automating it enough? Do we scale it? How do we do statefulness? Stuff like that. >> Yeah, I'm curious, in the Keynote this morning they called Kubernetes a platform of platforms. Did that messaging resonate with you and your customers? >> Yeah, I think so, I mean, Kubernetes by itself doesn't really do anything, you need all this other stuff. So when I hear people say we deployed Kubernetes, I'm like, no you don't. You know, it's the engine of what you do, but you do a bunch of other stuff. So yeah, we like to think of it as, we're platform builders, you should be a platform consumer, just like you're a consumer of Salesforce. They're a platform, you consume that. >> Yeah, one of the points made in the Keynote was how one provider, I believe it was IBM, please yell at me if I got that one wrong, talks about using Kubernetes to deploy Kubernetes. Which on the one hand, is super cool and a testament to the flexibility of how this is really working. On the other, it's-- and thus the serpent devours itself, and it becomes a very strange question of, okay, then we're starting to see some weird things. Where do we start, where do we look? Indeed.com for a better job. And it's one of those problems that at some point you just can't wrap your head around complexities inside of complexities, but we've been dealing with that for 40 years. >> Yeah, Kubernetes managing Kubernetes is kind of one of those weird phrases, like serverless, where you're like, what does that mean? I don't think you mean what you want it to mean. The simplest way we explain that stuff, so... A couple of years ago there was a guy named Brandon Philips who had started a company called CoreOS. He stood up at Kube-- >> I believe you'll find it's pronounced CoreOS, but please, continue. >> CoreOS, exactly. Um, he stood up in the Seattle one when there were a thousand people at this event, or 700, and he said, "I've created this pattern, or we think there's a pattern that's going to be useful." The simplest way to think of it is, there's stuff that you just want to run, and I want essentially something monitoring it and keeping it in a loop, if you will. Kubernetes just has that built in. I mean, it's kind of built into the concept, because originally Google said, "I can't manage it all myself." So that thing that he originally came up with, or codified, became what's now called operators. Operators are that thing now that's like, okay, I have a stateful application. It needs to do certain things all the time, that's the best practice. Why don't we just build that around it? And so I think you heard in a lot of the Keynotes, if you're going to run storage, run it as an operator. If you're going to run a database, run it as an operator. It sounds like inception, Kubernetes running-- It's really just a health loop that's going on all the time, with a little bit of smarts that says, hey, if you fail, fail this way. I always use the example that if I go to Amazon and get RDS, I don't get a DBA; there's no guy that shows up and says, "Hey, I'm your DBA." You just get some software that runs it for you. That's all this stuff is, it just never existed in Kubernetes before.
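That "health loop with a little bit of smarts" is worth pinning down, because it is the whole operator pattern in miniature: keep observing the actual state, compare it with the declared desired state, and take a corrective step whenever they drift. The sketch below is a deliberately generic Go version of that loop, with no Kubernetes client libraries involved; the `DesiredState`, `observe`, and `converge` names are illustrative stand-ins rather than any real operator SDK API.

```go
// A bare-bones reconcile loop: the essence of the operator pattern.
// Generic sketch only; a real operator hangs this loop off watch events on a
// custom resource and talks to the API server instead of using a fixed ticker.
package main

import (
	"fmt"
	"time"
)

// DesiredState is what the user declared they want (e.g. via a custom resource).
type DesiredState struct {
	Replicas int
}

// observe reports how many replicas are actually running right now.
// Faked here; a real operator would query the cluster.
func observe() int {
	return 2
}

// converge takes one corrective step toward the desired state.
func converge(current int, want DesiredState) {
	switch {
	case current < want.Replicas:
		fmt.Printf("scaling up: %d -> %d\n", current, want.Replicas)
	case current > want.Replicas:
		fmt.Printf("scaling down: %d -> %d\n", current, want.Replicas)
	default:
		fmt.Println("in sync, nothing to do")
	}
}

func main() {
	want := DesiredState{Replicas: 3}
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()

	// The health loop: observe, compare, act (bounded here so the demo terminates).
	for i := 0; i < 3; i++ {
		<-ticker.C
		converge(observe(), want)
	}
}
```

A production operator built with something like controller-runtime wraps essentially this same loop around a custom resource definition, with the API server holding the desired state and the "if you fail, fail this way" smarts living inside the converge step.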
Kubernetes has now matured enough to where they go, oh, I can play in that world, I can make that part of what I do. So it's less scary, it sounds sort of weird, inception-y. It's really just kind of what you've already gotten out of the public cloud now brought to wherever you want it. >> Well, one of the concerns that I'm starting to see as well is there's a level of hype around this. We've had a lot of conversations around Kubernetes today and yesterday, to the point where you can almost call this Kubernetes and friends instead of CloudNativeCon. And everyone has described it slightly differently. You see people describing it as systemd, as a kernel, sometimes as the way and the light, and someone on stage yesterday said that we all are familiar with the value that Kubernetes has brought to our jobs and our lives, is I think was the follow-up to that, which is a little strange. And I got to thinking about that. I don't deny that it has brought value, but what's interesting to me about this is I don't think I've heard two people define its value in the same terminology at all, and we've had kind of a lot of these conversations. >> So obviously not a cult because they would all be on message if it was a cult. >> Yeah, yeah yeah yeah. >> It's a cult with very crappy brand control, maybe. We don't know. >> I always just explain it that like, you know, if I went back 10 years or something, people... Any enterprise said hey, I would love to run like Google or like Amazon. Apparently for every one admin, I can manage a thousand servers and in their own data centers it's like well, I have one guy and he manages five, so I have cloud envy. >> We tried to add a sixth and he was crushed to death. Turns out those racks have size and weight limits. >> That's right, that's right. And so, people, they wanted this thing, they would've paid an arm and a leg for it. You move forward five years from that and it's like oh, Google just gave you their software, it's now available for free. Now what are you going to do with it? I gave you a bunch of power. So yeah, depending on how much you want to drink the Kool-Aid you're like, this is awesome, but at the end of the day you're just like, I just want the stuff that is available to, that's freely, publicly available, but for whatever reason, I can't be all in on one cloud, or I can't be all in on a public cloud, which, you believe in that there's tons of economic value about it, there's just some companies that can't do that. >> And I fully accept that. My argument has always been that it is, I think it's a poor best practice. When you have a constraint that forces you to be in multiple cloud providers, yes, do it! That makes absolute perfect sense. >> Right, if it makes sense, do it. And that's kind of what we've always said look, we're agnostic to that. If you want to run it, if you want to run it in a disconnected mode on a cruise ship, great, if it makes sense for you. If you need to run, you know, like... The other thing that we see-- >> That cruise ship becomes a container ship. >> Becomes a container ship. I had an interesting conversation with the bank last night. I had dinner with the bank. We were talking, they said, look, I run some stuff locally where I'm at, 'cause I have to, and then, we put a ton of stuff in AWS. He told me this story about a batch processing job that cost him like $4 or $5 million today. He does a variant of it in Lambda, and it cost him like $50 a month. 
So we had this conversation and it's going like, I love AWS, I want to be all in at AWS. And he said, here's my problem. I wake up every morning worried that I'm going to open the newspaper and Amazon, not AWS, Amazon is going to have moved closer into the banking industry than they are today. And so I have to have this kind of backup plan if you will. Backup's the wrong word, but sort of contingency plan of if they stop being my technology partner and they start becoming my competitor, which, there's arguments-- >> And for most of us I'd say that's not a matter of if, but when. >> Right, right. And some people live with it great. Like, Netflix lives with it, right? Others struggle. That guy's not doing multi-cloud in the future, he's just going, I would like to have the technology that allows me if that comes along. I'm not doing it to do it, I'd like the bag built in. >> So Brian, just want to shift a little bit off of kind of the multi-cloud discussion. The thing that interests me a lot, especially I've talked to a number of the Openshift customers, it is historically, infrastructure was the thing that slowed me down. We understand, oh, I want to modernize that. No, no wait. The back-end thing or you know, provisioning, these kind of things take forever. The lever of this platform has been, I can move faster, I can really modernize my environment, and, whether that's in my data center or in one public cloud and a couple of others, it is that you know, great lever to help me be able to do that. Is that the right way to think about this? You've talked to a lot of customers. Is that a commonality between them? >> I think we see, I hate to give you a vendor answer, but we tend to see different entry points. So for the infrastructure people, I mean the infrastructure people realize in some cases they're slow, and a lot of cases the ones that are still slow, it's 'cause of some compliance thing. I can give you a VM in an hour, but I got to go through a process. They're the ones that are saying, look, my developers are putting stuff in containers or we're downloading, I just need to be able to support that. The developers obviously are the ones who are saying, look, business need, business problem, have budget to do something. That's usually the more important lever. Just faster infrastructure doesn't do a whole lot. But we find more and more where those two people have to be in the room. They're not making choices independently. But the ones that are successful, the ones that you hear case studies about, none of them are like, we're great at building containers. They're great at building software. Development drives it, infrastructure still tends to have a lot of the budget so they play a role in it, but they're not dictating where it goes or what it does. >> Yeah, any patterns you're seeing or things that customers can do to kind of move further along that spectrum? >> I think, I mean there's a couple of things, and whether you fit in this or not, number one, nobody has a container problem. Start with a business problem. That's always good for technology in general, but this isn't a refresh thing, this is some business problem. That business problem typically should be, I have to build software faster. We always say... I've seen enough of these go well and I've seen enough go poorly. There's, these events are great. They're great in the sense of people see that there's progress, there's innovation. 
They're also terrible because if you walk into this new, you feel like, man, everybody understands this, it must be pretty simple. And what'll happen is they start working on it and they realize, I don't know what I'm doing. Even if they're using Openshift and we made it easy, they don't know what they're doing. And then they go, I'm embarrassed to ask for help. Which is crazy because if you get into open source the community's all there to help. So it's always like, business problem, ask for help early and often, even if it embarrasses you. Don't go after low-hanging fruit, especially if you're trying to get further investment. Spinning up a bunch of web clusters or hello worlds doesn't, nobody cares anymore. Go after something big. It basically forces your organization to be all in. And then the other thing, and this is the thing that's never intuitive to IT teams, is you, at the point where you actually made something work, you have to look more like my organization than yours, which is basically you have to look like a software marketing company, because internally, you're trying to convince developers to come use your platform or to build faster or whatever, you actually have to have internal evangelist and for a lot of them, they're like, dude, marketing, eh, I don't want anything to do with that. But it's like, that's the way you're going to get people to come to your new way of doing things. >> Great points, Brian. I remember 15 years ago, it was the first time I was like wait, the CIO has a marketing person under him to help with some of those transformations? Some of the software roles to do. >> Yeah, it's the reason they all want to come and speak at Keynotes and they get at the end and they go, we're hiring. It's like, I got to make what I'm doing sound cool and attract 8,000 people to it. >> Well absolutely it's cool here. We really appreciate Brian, you sharing all the updates here. >> Great to see you guys again. It's good to be back. >> Definitely don't be a stranger. So for Corey Quinn, I'm Stu Miniman. Getting towards the end. Two days live, wall-to-wall coverage here at KubeCon, CloudNativeCon 2019. Thanks for watching theCUBE. (rhythmic music)

Published Date : May 22 2019


Joe Beda, VMware | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE. Covering KubeCon + CloudNativeCon Europe 2019. Brought to you by Red Hat, the CloudNative Computing Foundation and ecosystem partners. >> In mid-2014, announced to the world, coming out of Google led by Joe Beda, sitting to my right, Brendan Burns and Craig McLuckie, all CUBE alumni. Kubernetes, which is the Greek for governor, helmsman or captain and here we are, five years later at the show. I'm Stu Miniman and this is theCUBE's coverage of KubeCon + CloudNativeCon here in Barcelona, Spain. Joe you've got your title today is that you're a principal engineer at VMware of course, by way of acquisition through Heptio, but you are one of the people who helped start this journey that we are all on Kubernetes, thanks so much for joining us. >> Yeah, thank you so much for having me. >> Alright, so, the cake and the candles and the singing we'll hold for the parties later. We have Phippy and the gang have been watching our whole thing, for people who don't know there's a whole cartoon, books and stuffed animals and everything like that. Joe, when you started this merchandising, that was what you were starting, no. In all seriousness though, bring us back a little bit give us a little bit of historical context as to we've had you on the program a few times but yeah, here we are five years later was this what you were expecting? >> I mean when I remember Craig and Bren and I sitting around and we're like hey, we should do this as an open source project. This is before we got approvals and got the whole thing started. And I think there was, like an idea in the back of our head, of like, this could be a big deal. You dream big a lot of times and you know that there's a reality and that it's not always going to end up being this. And so, I don't think anybody involved with Kubernetes in the early days really thought it was going to turn into what it has turned into. >> Yeah, so when we look at open source projects, I remember back a few years back, it was like to succeed you must have a benevolent dictator that will make sure the community does this or wait we don't want too much vendor we're just going to let the user community take over and there's all these extremes out there, but these are complicated pieces. The keynote this morning the discussion was Kubernetes is a platform of platforms it's like I've got all of these APIs and by itself, Kubernetes doesn't do a lot. It is, what it enables and what things put together, so walk us through a little bit of that the mission, how it changed a bit and a little bit of the community and we'll go from there. >> Yeah, I think so early on one of the goals with Kubernetes from Google's point of view was to essentially take a lot of the ideas that had been incubated over about a decade, with respect to Borg and other things and so, a lot of the early folks who got involved in the project and worked on those systems and really bring that to the outside world as a way to actually start bridging the gap between what Googlers did and what the rest of the world did. We had a really good idea of what we were looking to get out of this system and that was widely shared based on experience across a bunch of relatively senior engineers. We brought in some of the Red Hat folks early on Clayton Coleman and some of the other folks who are still super involved in the project. I think there was enough of an understanding that we looked and said okay we got a lot of work to do let's just get this done. 
So, we didn't really need sort of the benevolent dictator because there was a shared understanding and we had senior engineers that were willing to make trade-offs to be able to go and move forward. So that I think was a key bit of the success early on. >> Alright, so you talked, it was pulling in some other vendor community there. Talk a little bit about how that ecosystem grew and when was user feedback part of that discussion? >> Yeah, I mean, when you say we pulled in the vendor we pulled in people who worked for vendors but we never really viewed it as, there was really from the beginning this idea of well what's good for the project? What's going to actually create sustainability and for the project, sort of project over vendor is really something that we wanted to establish. And that even came down to the name, right? Like, when we named the project, we could have called it Google XYZ or some sort of XYZ but we didn't want to do that because we wanted to establish it as an independent thing with a life of its own. And so, yeah, so we wanted to bring in those external ideas and I think early on, we did have some early users, we did listen to them but it really resonated with folks who could actually see where we were going. I think it took time for the rest of the world to really catch on with what the vision was. >> OK, when we look at today, there's a lot at the show that is on top of or next to or with Kubernetes it's not all about that piece. How do you balance what goes in it versus what goes with it? One of my favorite lines last year overall, was from you, saying Kubernetes is not a magic layer, it is not the be-all and end-all, it is set with very specific guidelines. How do you avoid scope creep? As engineers it's always like, I don't know, we know how to do that piece of it better. >> So when we started out the project we didn't actually have a governance model. It was just a bunch of engineers that sort of worked well together. Over time and as the project grew, we knew that we needed to actually get some sort of structure in place. And so a bunch of us who had been there from the start got together, formed a steering committee, held elections. There's a SIG Architecture that we formed and these are the places where we can actually say what is Kubernetes, what is Kubernetes not, how do we actually maintain sort of good taste with how we actually approach this stuff and that's one of the ways that we try to contain scope creep. But also, I think everybody realizes that a thriving ecosystem whether officially part of the CNCF or adjacent to it, is good for everybody. Trying to hold on too tight is not going to be good for the project. >> So, Joe, tremendous progress in five years. Look forward for us a little bit. What does Kubernetes 2024 look like for us? >> Well a lot of folks like to say that in five years, Kubernetes is going to disappear. And sometimes they come at this from this sort of snarky angle. (chuckles) But other times, I think it's going to disappear in terms of like it's going to be so boring, so solid, so assumed that people don't talk about it anymore. I mean, we're here, at something that the CNCF is part of the Linux Foundation, which is great. But how often do people really focus on the Linux kernel these days? It is so boring, so solid, there's new stuff going on, but clearly, all the exciting stuff all the action, all the innovation is happening at higher layers. I think we're going to see something similar happen with Kubernetes over time. 
>> Yeah, that being said the reach of Kubernetes is further than ever. I was talking to this special interest group looking at edge computing and IoT people making the micro-sized version of this stuff. When the team first got together, I mean, you must look at it and say there were many fathers, many parents of this solution, but, could you imagine the kind of the family and ecosystem that would have grown out of it? >> I think we knew that it could go there I mean, Google had some experience with this, I mean, when Google bought YouTube, they had a problem where they had to essentially build out something that looked a little bit like a CDN. And so there were some examples of sort of like, how does technology, like Borg, adapt to an edge type of situation. So, there was some experience to borrow we definitely knew that we wanted this thing to scale up and down. But I think that's a hallmark of these successful technologies is that they can be used in ways and in places that you really never thought about when you got started. So that's definitely true. >> Alright, Joe, want to give you the final word the contributors, the users, the ecosystem community, what do you say with five years of Kubernetes now in the books? >> I just want to send a huge thank you to everybody who made it happen. This is, it was started by Google it was started by a few of us early on. But, we really want to make it so that everybody feels like it's theirs. A lot of times Brendan Burns and me and Kelsey wrote a book together and I'll do signing and a lot of times I'll sign that and I'll say thank you for being a part of Kubernetes. Because I really feel like every user everybody who bets on it, everybody who shares their knowledge, they're really a big part of it. And so thank you to everybody who's a big part of Kubernetes. >> All right, well, Joe, thank you as always for sharing your knowledge with our community >> Thank you so much. >> We've been happy to be a small part in helping to spread the knowledge and everything going on here, so congratulations to the community on five years of Kubernetes and we'll be back with more coverage here from KubeCon + CloudNativeCon 2019. I'm Stu Miniman and thanks for watching theCUBE. (upbeat music)

Published Date : May 22 2019


Dejan Bosanac & Josh Berkus, Red Hat | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE. Covering KubeCon, CloudNativeCon, Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back to theCUBE here in Barcelona, Spain. This is KubeCon, CloudNativeCon 2019. I'm Stu Miniman, my co-host for two days of wall-to-wall coverage is Corey Quinn. Joining us on the program we have two gentlemen from Red Hat. To my right is Josh Berkus who's the Kubernetes community manager and sitting to his right is Dejan Bosanac who's a senior software engineer and as I said, both with Red Hat. Gentlemen, thanks so much for joining us. >> Well thank you. >> Thank you. >> All right. So Josh, a community manager in the Kubernetes space, so what brings you here to KubeCon and maybe explain to us and give the clarification on the shirt so that we can be educated to properly call this city and its residents by how they should be. >> Oh, so many things, so. I mean obviously, I'm here because the community is here, right? A very large community. We had a contributor summit on Monday. They had a couple hundred people, three hundred people at it. The important thing, when we talk about community in Kubernetes there's the general ecosystem community and then there's the contributor community. >> Right. >> And the latter is more what I'm concerned with. Because even the contributor community by itself is quite large. As for the t-shirt, speaking of community, so we like to actually do special t-shirts for the contributor summits. I designed this one. Despite my current career, my academic background is actually in art. This is obviously a Moreau pastiche, but one of the things I actually learned by doing this was I did a different version first. It said Barca on it, and then one of the folks from here is like, "Well that's the football team." That when they abbreviate the city, it's actually Barna. >> It was news to me. I am today years old when I found that out. >> Yes. >> So thank you very much for that. >> Yes, that was an additional four hours of drawing for me. >> All right. Go ahead Corey. >> So a while back, I had a tweet that went out that I knew was going to be taken in two different ways and you were one of the first people to come back on that in the second way. Everyone first thought I was being a snarky jerk. >> Yeah. Which, let's be honest, fair. >> Yeah. >> But what I said was that in five years no one is going to care about Kubernetes. >> Right. >> And your response was yeah, that's a victory condition. If you don't have to think or care about this, >> Yeah. >> that means it won >> Right. >> in a similar way that a lot of things have slipped >> Yeah. >> beneath the level of awareness. And I'm curious as to what both of you think about the idea of Kubernetes not, I'm not saying it loses in the marketplace, I don't think that that is likely at all, but at what point do people not have to think about it any more and what does that future look like? >> Yeah, I mean one of our colleagues noticed yesterday that this conference particularly is not about Kubernetes any more. So, you hear more about all the ecosystem. A lot of projects around it. So it certainly grew up above the Kubernetes. And so you see all the talks about service meshes and things we try to do for the edge computing and things like that. So it's not just the Kubernetes any more. It's a whole ecosystem of the products and projects around it. I think, it's a big success. >> Yeah. 
And I mean I'll say, talking sort of a longer view is, I can remember compiling my own Linux kernels. I can remember doing it on a weekly basis. Because you honestly had to, right? If you wanted certain devices to work you had to actually compile your own kernel. Now on my various servers and stuff that I do for testing and demos and development, I can't even tell you what kernel version I'm running. Because I don't care, right? And for core Kubernetes, like I said, if we get to that point of not needing to care about it of only needing to care about it when we're developing something, then that looks like victory to me. >> Josh, is there anything in the core contributor team that they have milestones and say "Hey, by the time we get to 2.0 or 3.0, you know Kubernetes is invisible?" >> Yeah, well it's spoken of more in terms of GA and API stability >> Yeah. >> because really, if you're going to back off and you're going to say, "What is Kubernetes?" Well, Kubernetes is, what the definition of Kubernetes is, is a bag of APIs. A very large bag of APIs, we do a lot of APIs but a bag of APIs and the less those APIs change in the future the closer we're getting to maturity and stability, right? Because we want people building new stuff around the APIs, not modifying the APIs themselves. >> Yeah well, to that end, last night, here at Barcelona time, a blog post came out from AWS where they set out a formalized deprecation strategy for their EKS product to keep up with the releases of Kubernetes. Now, AWS generally does not turn things off ever, which means that 500 years from now, two trunkless legs of stone in a desert will be balanced by an ELB classic. And we're never going to be rid of anything they've ever built, but if nothing else, you've impacted them to formalize a deprecation strategy that follows upstream, which is awesome. It's great to start seeing a world where you don't have to support older versions of things as your user base and your community informs you. It's nice to see providers breaking from their model to respond to what the community has done. And I can't imagine, for you, that's anything other than an unqualified success. >> All right, so, Dejan. >> Yeah? >> When we talk about dispersion of technology, you know, there are few issues that get people as excited these days as edge computing. So, tell us a little bit about what you're doing and the community's doing in the IoT and edge space. >> Yeah. So, we noticed that more and more people want to try their workloads outside of the centralized, non-centralized data clusters, so the big term for the last year was the hybrid cloud, but it's not just hybrid cloud. People coming also from the IoT user space want to, you know, containerize their workloads, want to put the processing closer and closer to the devices that are actually producing and consuming those data and the users. And there's a lot of use cases which should be tackled in that way. And as you all said previously, like Kubernetes won developers' hearts and minds so APIs are stable, everybody's using them, it will be supported for decades so it's natural to try to bring all these tools and all these platforms that are already available to developers, try to tackle these new challenges. So that's why last year we formed the Kubernetes IoT Edge working group, trying to, you know, start with simple questions because when people come to you and say edge, everybody thinks something different. 
For somebody it's an IoT gateway, for somebody it's a full blown, you know, Kubernetes cluster at some telco provider. So that's what they're trying to figure out, all these things, and try to form a community because as we saw in previous years in the IoT user space is that complex problems like this are never basically solved by a single company. You need open source, you need open standard, you need community around it so that people can pick and choose and build a solution to fit their needs. >> Yes, so as you said, right, there is that spectrum of offerings everything from that telco down to, you know, is this going to be something sitting on a tower somewhere or, you know, the vast proliferation of IoT which, you know, we spent lots of time. So are you looking at all of these or are you pointing "Okay, we already have a telco working group over here, and, you know, we're going to work on the IoT thing." You know, where are we? What are the answers and starting point for people today? >> Yes, so we have a single working group for now and we try to bring in the people that are interested in this topic in general. So it's, one of the guys said like "Edge is everything that's not running in the central cloud," right, so, we have a couple of interesting things happening at the moment, so the Futurewei guys have a KubeEdge project and they're presenting at this conference. We have a couple of sessions on that. That's basically trying to tackle this device edge kind of space, how to, you know, put Kubernetes workloads on the constrained device and over a constrained network kind of problem. And we have people coming from Rancher, which provide their own, again, resource-constrained Kubernetes deployments, and we see a lot of developments here, but it's still, I think, early days and that's why we have like a working group which is something that we can build our community and work over the time to shape things and find the appropriate reference, architectural blueprints for people that can follow in the future. >> Yeah, I think that there's been an awful lot of focus here on this show on Kubernetes, but it is KubeCon plus CloudNativeCon. I'm curious as far as what you're seeing with these conversations, something you alluded to as well is that there's now a bunch of other services that are factored in. I mean, it feels almost like this show has become, just from conversations, Kubernetes and friends; but, the level of attention that's being paid to those friends is dramatically increasing. And I'm curious as to how you're seeing this evolve in the community particularly but also with customers and what you're seeing as this entire ecosystem continues to evolve. >> Yeah. Well, I mean part of it out of necessity, right, as Kubernetes moved from dev and experimental into production, you don't run Kubernetes by itself, right? And some of the things with Kubernetes is you can run with existing tooling, like cloud providers, that sort of thing. But other things you discover that you want new tools. For example, one of the areas that we saw, expansion to start with, was the area of monitoring and telemetry because it turns out that monitoring telemetry that you build for a hundred servers does not work with twenty thousand pods. It's just a volume problem there. 
And so then we had new projects like Heapster and Prometheus and the new products from other companies like Sysdig and that sort of thing, just looking at that space, right, in order to have that part of the tool because you can't be in production without monitoring and telemetry. One of my personal areas that I'm involved is storage, right, and so we've had the Rook project here go, in pretty much a year and a half actually, from being open sourced to being now a serious alternative solution if you don't want to be dependent on cloud provider storage. >> Please tell me you're giving that an award called Rookie of the Year. [laughs] I do not apologize for that one. One thing that does resonate with me though is the idea that you've taken, strategically, that instead of building all of this functionality into Kubernetes and turning it into, "You'll do it this way or you're going to be off in the wilderness somewhere," it's decoupled. I love that pattern. Was that always the design from day one or was this a contentious decision, historically? >> No, it wasn't. Kubernetes started out as kind of a monolith, right, because it was like the open source version of Borg, right, and, which was built as a monolith within Google 'cause there weren't options. They had to work with Google's stuff, right, if you're looking at Borg, right, and so they're not worried about supporting all this other stuff, but from day one of Kubernetes being a project, it was a multi-company project, right, and if you look at, you know, OpenShift and OpenShift's users and OpenShift's stack, it's different from what Google uses for GKE. And, honestly, the easiest way to support sort of multiple stack layers is to decouple everything, right? And not how we started out, right? Cloud providers, like one of our problems cloud providers in-tree, storage in-tree, networking. Networking was the only thing that was separate from day one. You know but all this stuff was in-tree, and it didn't take very long for that to get unmaintainable, right? >> Well, I mean I think one of the, I've been following you and running into you in the conference circuit for years, and one of the talks I gave for a year and a half was Heresy in the Church of Docker where we don't know what your problem is but Docker, Docker, Docker, Docker, Docker, and I gave a list of twelve or thirteen different reasons and things that were not being handled by Docker. And now, I've sunset that talk largely because 1) no one talks about Docker and it feels a bit like punching down, but more importantly, Kubernetes has largely solved almost all of those. There are still a few exceptions here and there 'cause it turns out "Sorry, nothing is perfect and we've not yet found containerization utopia. Surprise!" But it's really come a very long way in a very short period of time. >> Yeah, what a lot of it is is decoupling 'cause the thing is that you can take it two ways, right, one is that potentially as an ecosystem Kubernetes solves almost anything. Some things like IoT are, you know, a lot more alpha state than others. And then if you actually look at just core Kubernetes, it's like what you would get off the kubernetes/kubernetes repo if you compiled it yourself, Kubernetes solves almost nothing. Like by itself, you can't do much with it other than test your patches. >> Right, in isolation, the big problem it solves is "Room is limited to 'I want a buzzword on my resume.'" >> Yes. >> There needs to be more to it than that. 
>> So, and I think that's true in general 'cause like, you know, if you look at "why did Linux become the default server OS, right?" It became the default server OS because it was adaptable, right, because you would compile in your own stuff because we define POSIX and kernel module APIs to make it easy for people to build their own stuff without needing to commit to the Linux kernel. >> Alright, so I'd like to get both your thoughts just on the storage piece there because, you know, 1) you know, storage is a complex, highly fragmented ecosystem out there. Red Hat has many options out there, and, boy, when I saw the keynote this morning, I thought he did a really good job of laying out the options but, boy, there's, you know, it's a complex multi fragmented stack with a lot of different options out there, and edge computing, the storage industry as a whole without even Kubernetes is trying to figure out how that works, so Dejan, maybe we start with you, and yeah. >> So yeah. I don't have any particular answers for you for today in that area, but what I want to emphasize, what Josh said earlier, is that these APIs and this modelization that is done in Kubernetes, it's one of the big important things for the edge as well, because people coming there and saying "We should do this. Should we invent things or should we just try to reuse what's basically a very good, very well designed system?" So that's a starting point, like why do we want to start using Kubernetes for the edge computing? But for the storage questions, I would hand over to Josh. >> So, your problem with storage is not anything to do with Kubernetes in particular, but the fact that, like you said, the storage sort of stack ecosystem is a mess. It's more vendor. Everything is vendor specific. Things don't work even semantically the same, let alone like the same by API. And so, all we can do in the world of Kubernetes is make enabling storage for Kubernetes not any harder than it would have been to do it in some other system. >> Right, and look, the storage industry'd say, "No no. It's not a mess. It's just that there's a proliferation of applications out there. There is not one solution to fit them all and that's why we have block, we have file, we have object, we have all these various ways of doing things, so you're saying storage is hard, but storage with Kubernetes is no harder today. We're getting to that point. >> I would say it's a little harder today. And we're working on making it not any harder. >> All right, excellent. Well, Josh and Dejan, thank you so much for the updates. >> Thank you guys. Always appreciative of the community contributions. Look forward to hearing more about the, of course, the contributors always and as the Edge and IoT groups mature. Look forward to hearing updates in the future. Thank you. >> Cool. >> Thank you guys. >> Alright, for Corey Quinn, I'm Stu Miniman back with lots more coverage here from KubeCon CloudNativeCon 2019 in Barcelona, Spain. Thanks for watching theCUBE.

Published Date : May 22 2019


Jason Bloomberg, Intellyx | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE! Covering KubeCon and CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Welcome back. This is theCUBE's live coverage of KubeCon, CloudNativeCon 2019 here in Barcelona, Spain. 7,700 here in attendance, hearing about all the Cloud Native technologies. I'm Stu Miniman; my cohost for the two days of coverage is Corey Quinn. And to help us break down what's happening in this ecosystem, we've brought in Jason Bloomberg, who's the president at Intellyx. Jason, thanks so much for joining us. >> It's great to be here. >> All right. There's probably some things in the keynote I want to talk about, but I also want to get your general impression of the show and beyond the show, just the ecosystem here. Bryan Liles came out this morning. He did not sing or rap for us this morning like he did yesterday. He did remind us that the dinners in Barcelona meant that people were a little late coming in here because, even once you've got through all of your rounds of tapas and everything like that, getting that final check might take a little while. They did eventually filter in, though. Always a fun city here in Barcelona. I found some interesting pieces. Always love some customer studies. Conde Nast talking about what they've done with their digital imprint. CERN, who we're going to have on this program. As a science lover, you want to geek out as to how they're finding the Higgs boson and how things like Kubernetes are helping them there. And digging into things like storage, which I worked at a storage company for 10 years. So, understanding that storage is hard. Well, yeah. When containers came out, I was like, "Oh, god, we just fixed it for virtualization, and it took us a decade. How are we going to do it this time?" And they actually quoted a crowd chat that we had in our community. Tim Hockin, of course one of the first Kubernetes guys, was in on that. And we're going to have Tim on this afternoon, too. So, just to set a little context there. Jason, what are your impressions of the show? Anything that has changed in your mind from when you came in here to today? Let's get into it from there. >> Well, this is my second KubeCon. The first one I went to was in Seattle in December. What's interesting from a big picture is really how quickly and broadly Kubernetes has been adopted in the enterprise. It's still, in the broader scheme of things, relatively new, but it's really taking its place as the only container orchestrator anybody cares about. It sort of squashed the 20-or-so alternative container orchestrators that had a brief day in the sun. And furthermore, large enterprises are rapidly adopting it. It's remarkable how many of them have adopted it and how broadly, how large the deployment. The Conde Nast example was one. But there are quite a number. So we turned the corner, even though it's relatively immature technology. That's the interesting story as well, that there's still pieces missing. It's sort of like flying an airplane while you're still assembling it, which makes it that much more exciting.
>> Right, because it's ERP. I was talking to one of the booths here, and they were doing an informal poll of, "How many of you are going to have Kubernetes "in production in the next six months?" Not testing it, but in production in the next six months, and it was more than half of the people were going to be ramping it up in that kind of environment. Anything architecturally? What's intriguing you? What's the area that you're digging down to? We know that we are not fully mature, and even though we're in production and huge growth, there's still plenty of work to do. >> An interesting thing about the audience here is it's primarily infrastructure engineers. And the show is aimed at the infrastructure engineers, so it's technical. It's focused on people who code for a living at the infrastructure level, not at the application level. So you have that overall context, and what you end up having, then, is a lot of discussions about the various components. "Here's how we do storage." "Here's how we do this, here's how we do that." And it's all these pieces that people now have to assemble, as opposed to thinking of it overall, from the broader context, which is where I like writing about, in terms of the bigger picture. So the bigger picture is really that Cloud Native, broadly speaking, is a new architectural paradigm. It's more than just an architectural trend. It's set of trends that really change the way we think about architecture. >> One interesting piece about Kubernetes, as well. One of the things we're seeing as we see Kubernetes start to expand out is, unlike serverless, it doesn't necessarily require the same level of, oh, just take everything you've done and spend 18 months rewriting it from scratch, and then it works in this new paradigm in a better way. It's much less of a painful conversion process. We saw in the keynote today that they took WebLogic, of all things, and dropped that into Kubernetes. If you can do it with something as challenging, in some respects, and as monolithic as WebLogic, then almost any other stack you're going to see winds up making some sense. >> Right, you mentioned serverless in contrast with Kubernetes, but actually, serverless is part of this Cloud Native paradigm as well. So it's broader than Kubernetes, although Kubernetes has established itself as the container orchestration platform of choice. But it's really an overall story about how we can leverage the best practices we've learned from cloud computing across the entire enterprise IT landscape, both in the cloud and on premises. And Kubernetes is driving this in large part, but it's bigger picture than the technology itself. That's what's so interesting, because it's so transformative, but people here are thinking about trees, not the forest. >> It's an interesting thing you say there, and I'm curious if you can help our community, Because they look at this, and they're like, "Kubernetes, Kubernetes, Kubernetes." Well, a bunch of the things sit on Kubernetes. As they've tried to say, it's a platform of platforms. It's not the piece. Many of the things can be with Kubernetes but don't have to be. So, the whole observability piece. We heard the merging of the OpenCensus, OpenTracing with OpenTelemetry. You don't have to have Kubernetes for that to be a piece of it. It can be serverless underneath it. It can be all these other pieces. Cloud Native architecture sits on top of it. So when you say Cloud Native architecture, what defines that? What are the pieces? How do I have to do it? 
Is it just, I have to have meditated properly and had a certain sense of being? What do we have to do to be Cloud Native? >> Well, an interesting way of looking at it is: What we have subtracted from the equation, so what is intentionally missing. Cloud Native is stateless, it is codeless, and it is trustless. Now, not to say that we don't have ways of dealing with state, and of course there's still plenty of code, and we still need trust. But those are architectural principals that really percolate through everything we do. So containers are inherently stateless; they're ephemeral. Kubernetes deals with ephemeral resources that come and go as needed. This is key part of how we achieve the scale we're looking for. So now we have to deal with state in a stateless environment, and we need to do that in a codeless way. By codeless, I mean declarative. Instead of saying, how are we going to do something? Let's write code for that, we're going to say, how are we going to do that? Let's write a configuration file, a YAML file, or some other declarative representation of what we want to do. And Kubernetes is driven this way. It's driven by configuration, which means that you don't need to fork it. You don't need to go in and monkey with the insides to do something with it. It's essentially configurable and extensible, as opposed to customizable. This is a new way of thinking about how to leverage open-source infrastructure software. In the past, it was open-source. Let's go in an monkey with the code, because that's one of the benefits of open-source. Nobody wants to do that now, because it's declaratively-driven, and it's configurable. >> Okay, I hear what you're saying, and I like what you're saying. But one of the things that people say here is everyone's a little bit different, and it is not one solution. There's lots of different paths, and that's what's causing a little bit of confusion as to which service mesh, or do I have a couple of pieces that overlap. And every deployment that I see of this is slightly different, so how do I have my cake and eat it, too? >> Well, you mentioned that Kubernetes is a platform of platforms, and there's little discussion of what we're actually doing with the Kubernetes here at the show. Occasionally, there's some talk about AI, and there's some talk about a few other things, but it's really up to the users of Kubernetes, who are now the development teams in the enterprises, to figure out what they want to do with it and, as such, figure out what capabilities they require. Depending upon what applications you're running and the business use cases, you may need certain things more than others. Because AI is very different from websites, it's very different from other things you might be running. So that's part of the benefit of a platform of platforms, is it's inherently configurable. You can pick and choose the capabilities you want without having to go into Kubernetes and fork it. We don't want 12 different Kubernetes that are incompatible with each other, but we're perfectly okay with different flavors that are all based on the same, fundamental, identical code base. >> We take a look at this entire conference, and it really comes across as, yes, it's KubeCon and CloudNativeCon. We look at the, I think, 36 projects that are now being managed by this. But if we look at the conversations of what's happening here, it's very clear that the focus of this show is Kubernetes and friends, where it tends to be taking the limelight of a lot of this. 
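To make the declarative, "codeless" idea concrete, here is a minimal Go sketch under stated assumptions: the WorkloadSpec type and the JSON document are hypothetical stand-ins for a real Kubernetes manifest, not an actual API type. The point is that the artifact declares what is wanted, and the platform, not the author, works out the steps to converge on it.

// declarative_sketch.go
//
// A small sketch of declarative configuration. WorkloadSpec and the JSON
// document are hypothetical examples; real objects would go through
// kubectl apply or a client library rather than this toy diff.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// WorkloadSpec declares WHAT we want; nothing in it says HOW to get there.
type WorkloadSpec struct {
	Image    string `json:"image"`
	Replicas int    `json:"replicas"`
}

func main() {
	// The declarative artifact: a config document, not imperative code.
	doc := []byte(`{"image": "registry.example.com/shop:1.4.2", "replicas": 5}`)

	var desired WorkloadSpec
	if err := json.Unmarshal(doc, &desired); err != nil {
		log.Fatal(err)
	}

	// The platform owns the HOW: it compares declared and observed state
	// and plans the actions itself.
	observed := WorkloadSpec{Image: "registry.example.com/shop:1.4.1", Replicas: 5}
	if observed.Image != desired.Image {
		fmt.Println("plan: roll out new image", desired.Image)
	}
	if observed.Replicas != desired.Replicas {
		fmt.Println("plan: adjust replica count to", desired.Replicas)
	}
}

Changing the declared image or replica count in the document is the whole interaction; nothing in the spec says how to roll pods, which is what keeps the platform configurable and extensible rather than forked.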
One of the challenges you start seeing as soon as you start moving up the stack, out through the rest of the stack, rather, and seeing what all of these Cloud Native technologies are is, increasingly, they're starting to be defined by what they aren't. I mean, you have the old saw of, serverless runs on servers, and other incredibly unhelpful sentiments. And we talk about what things aren't more so than we do what they are. And what about capabilities story? I don't have an answer for this. I think it's one of those areas where language is hard, and defining what these things are is incredibly difficult. But I see what you're saying. We absolutely are seeing a transformative moment. And one of the strangest things about it, to me at least, is the enthusiasm with which we're seeing large enterprises, that you don't generally think of as being particularly agile or fast-moving, are demonstrating otherwise. They're diving into this in fascinating ways. It's really been enlightening to have conversations for the last couple of days with companies that are embracing this new paradigm. >> Right. Well, in our perspective at Intellyx, we're focusing on digital transformation in the enterprise, which really means putting the customer first and having a customer-driven transformation of IT, as well as the organization itself. And it's hard to think in those terms, in customer-facing terms, when you're only talking about IT infrastructure. Be that as it may, it's still all customer-driven. And this is sometimes the missing piece, is how do we connect what we're doing on the infrastructure side with what customers require from these companies that are implementing it? Often, that missing piece centers on the workload. Because, from the infrastructure perspective, we have a notion of a workload, and we want workload portability. And portability is one of the key benefits of Kubernetes. It gives us a lot of flexibility in terms of scalability and deployment options, as well as resilience and other benefits. But the workload also represents the applications we're putting in front of our end users, whether they're employees or end customers. So that's they key piece that is like the keystone that ties the digital story, that is the customer-facing, technology-driven, technology-empowered story, with the IT infrastructure stories. How do we support the flexibility, scalability, resilience of the workloads that the business needs to meet its business goals? >> Yeah, I'm really glad you brought up that digital transformation piece, because I have two questions, and I want to make sure I'm allowing you to cover both of them. One is, the outcome we from people as well: "I need to be faster, and I need to be agile." But at the same point, which pieces should I, as an enterprise, really need to manage? Many of these pieces, shouldn't I just be able to consume it as a managed service? Because I don't need to worry about all of those pieces. The Google presentation this morning about storage was: You have two options. Path one is: we'll take care of all of that for you. Path two is: here's the level of turtles that you're going to go all the way down, and we all know how complicated storage is, and it's got to work. If I lose my state, if I lose my pieces there, I'm probably out of business or at least in really big trouble. The second piece on that, you talked about the application. And digital transformation. 
Speed's great and everything, but we've said at Wikibon that the thing that will differentiate the traditional companies and the digitally transformed is data will drive your business. You will have data, it will add value of business, and I don't feel that story has come out yet. Do you see that as the end result from this? And apologies for having two big, complex questions here for you. >> Well, data are core to the digital transformation story, and it's also an essential part of the Kubernetes story. Although, from the infrastructure perspective, we're really thinking more about compute than about data. But of course, everything boils down to the data. That is definitely always a key part of the story. And you're talking about the different options. You could run it yourself or run it as a managed service. This is a key part of the story as well, is that it's not about making a single choice. It's about having options, and this is part of the modern cloud storage. It's not just about, "Okay, we'll put everything in one public cloud." It's about having multiple public clouds, private clouds, on-premises virtualization, as well as legacy environments. This is what you call hybrid IT. Having an abstracted collection of environments that supports workload portability in order to meet the business needs for the infrastructure. And that workload portability, in the context of multiple clouds, that is becoming increasingly dependent on Kubernetes as an essential element of the infrastructure. So Kubernetes is not the be-all and end-all, but it's become an essentially necessary part of the infrastructure, to make this whole vision of hybrid IT and digital transformation work. >> For now. I mean, I maintain that, five years from now, no one is going to care about Kubernetes. And there's two ways that goes. Either it dries up, blows away, and something else replaces it, which I don't find likely, or, more likely, it slips beneath the surface of awareness for most people. >> I would agree, yeah. >> The same way that we're not sitting here, having an in-depth conversation about which distribution of Linux, or what Linux kernel or virtual memory manager we're working with. That stuff has all slipped under the surface, to the point where there are people who care tremendously about this, but you don't need to employ them at every company. And most companies don't even have to think about it. I think Kubernetes is heading that direction. >> Yeah, it looks like it. Obviously, things continue to evolve. Yeah, Linux is a good example. TCP/IP as well. I remember the network protocol wars of the early 90s, before the web came along, and it was, "Are we going to use Banyan VINES, "are we going to use NetWare?" Remember NetWare? "Or are we going to use TCP/IP or Token Ring?" Yeah! >> Thank you. >> We could use GDP, but I don't get it. >> Come on, KOBOL's coming back, we're going to bring back Token Ring, too. >> KOBOL never went away. Token Ring, though, it's long gone. >> I am disappointed in Corey, here, for not asking the question about portability. The concern we have, as you say: okay, I put Kubernetes in here because I want portability. Do I end up with least-common-denominator cloud? I'm making a decision that I'm not going to go deep on some of the pieces, because nice as the IPI lets things through, but we understand if I need to work across multiple environments, I'm usually making a trade-off there. What do you hear from customers? Are they aware that they're doing this? 
Is this a challenge for people, not getting the full benefit out of whichever primary cloud or whichever clouds they are using? >> Well, portability is not just one thing. It's actually a set of capabilities, depending upon what you are trying to accomplish. So for instance, you may want to simply support backing up your workload, so you want to be able to move it from here to there, to back it up. Or you may want to leverage different public clouds, because different public clouds have different strengths. There may be some portability there. Or you may be doing cloud migration, where you're trying to move from on-premises to cloud, so it's kind of a one-time portability. So there could be a number of reasons why portability is important, and that could impact what it means to you to move something from here to there. And why, how often you're going to do it, how important it is, whether it's a one-to-many kind of thing, or a one-to-one kind of thing. It really depends on what you're trying to accomplish. >> Jason, last thing real quick. What research do you see coming out of this? What follow-up? What should people be looking for from Intellyx in this space in the near future? >> Well, we continue to focus on hybrid IT, which includes Kubernetes, as well as some of the interesting trends. One of the interesting stories is how Kubernetes is increasingly being deployed on the edge. And there's a very interesting story there with edge computing, because the telcos are, in large part, driving that, because of their 5G roll-outs. So we have this interesting confluence of disruptive trends. We have 5G, we have edge computing, we have Kubernetes, and it's also a key use case for OpenStack, as well. So all of these interesting trends are converging to meet a new class of challenges. And AI is part of that story as well, because we want to run AI at the edge, as well. That's the sort of thing we do at Intellyx: try to take multiple disruptive trends and show the big picture overall. And for my articles for SiliconANGLE, that's what I'm doing as well, so stay tuned for those. >> All right. Jason Bloomberg, thank you for helping us break down what we're doing in this environment. And as you said, actually, some people said OpenStack is dead. Look, it's alive and well in the telco space and actually merging into a lot of these environments. Nothing ever dies in IT, and theCUBE always keeps rolling throughout all the shows. For Corey Quinn, I'm Stu Miniman. We have a fully packed day of interviews here, so be sure to stay with us. And thank you for watching theCUBE. (upbeat techno music)
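A concrete aside on that portability point: because every conformant Kubernetes cluster exposes the same API, the same client code can be pointed at clusters running on different providers. The sketch below is a minimal example, not a definitive implementation; it uses the official Kubernetes Python client and assumes a local kubeconfig with two hypothetical contexts named "on-prem" and "public-cloud", and the choice of listing Deployments is purely illustrative.

# Sketch: the same client code works against any conformant Kubernetes cluster,
# which is the portability property discussed above. Context names are assumptions.
from kubernetes import client, config

CONTEXTS = ["on-prem", "public-cloud"]  # hypothetical kubeconfig contexts

def list_deployments(context: str) -> None:
    # Build an API client from the named kubeconfig context (one cluster).
    api_client = config.new_client_from_config(context=context)
    apps = client.AppsV1Api(api_client)
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"[{context}] {len(deployments.items)} deployments")
    for d in deployments.items:
        print(f"  {d.metadata.namespace}/{d.metadata.name}")

if __name__ == "__main__":
    for ctx in CONTEXTS:
        list_deployments(ctx)

The least-common-denominator trade-off raised above enters exactly where a workload depends on provider-specific services that sit outside this common API.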

Published Date : May 22 2019

theCUBE Insights | Red Hat Summit 2019


 

>> Announcer: Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2019. Brought to you by Red Hat. >> Welcome back here on theCUBE, joined by Stu Miniman, I'm John Walls, as we wrap up our coverage here of the Red Hat Summit here in 2019. We've been here in Boston all week, three days, Stu, of really fascinating programming on one hand, the keynotes showing quite a diverse ecosystem that Red Hat has certainly built, and we've seen that array of guests reflected as well here, on theCUBE. And you leave with a pretty distinct impression about the vast reach, you might say, of Red Hat, and how they diversified their offerings and their services. >> Yeah, so, John, as we've talked about, this is the sixth year we've had theCUBE here. It's my fifth year doing it and I'll be honest, I've worked with Red Hat for 19 years, but the first year I came, it was like, all right, you know, I know lots of Linux people, I've worked with Linux people, but, you know, I'm not in there in the terminal and doing all this stuff, so it took me a little while to get used to. Today, I know not only a lot more people in Red Hat and the ecosystem, but where the ecosystem is matured and where the portfolio is grown. There's been some acquisitions on the Red Hat side. There's a certain pending acquisition that is kind of a big deal that we talked about this week. But Red Hat's position in this IT marketplace, especially in the hybrid and multi-cloud world, has been fun to watch and really enjoyed digging in it with you this week and, John Walls, I'll turn the camera to you because- >> I don't like this. (laughing) >> It was your first time on the program. Yeah, you know- >> I like asking you the questions. >> But we have to do this, you know, three days of Walls to Miniman coverage. So let's get the Walls perspective. >> John: All right. >> On your take. You've been to many shows. >> John: Yeah, no, I think that what's interesting about what I've seen here at Red Hat is this willingness to adapt to the marketplace, at least that's the impression I got, is that there are a lot of command and control models about this is the way it's going to be, and this is what we're going to give you, and you're gonna have to take it and like it. And Red Hat's just on the other end of that spectrum, right? It's very much a company that's built on an open source philosophy. And it's been more of what has the marketplace wanted? What have you needed? And now how can we work with you to build it and make it functional? And now we're gonna just offer it to a lot of people, and we're gonna make a lot of money doing that. And so, I think to me, that's at least what I got talking to Jim Whitehurst, you know about his philosophy and where he's taken this company, and has made it obviously a very attractive entity, IBM certainly thinks so to the tune of 34 billion. But you see that. >> Yeah, it's, you know, some companies say, oh well, you know, it's the leadership from the top. Well, Jim's philosophy though, it is The Open Organization. Highly recommend the book, it was a great read. We've talked to him about the program, but very much it's 12, 13 thousand people at the company. They're very much opinionated, they go in there, they have discussions. It's not like, well okay, one person pass this down. It's we're gonna debate and argue and fight. Doesn't mean we come to a full consensus, but open source at the core is what they do, and therefore, the community drives a lot of it. 
They contribute it all back upstream, but, you know, we know what Red Hat's doing. It's fascinating to talk to Jim about. On the days where I'm thinking glass half empty, it's, you know, wow, we're not yet quite a four billion dollar company, and look what an impact they've had. They did a study with IDC that said ten trillion dollars of the economy is touched through RHEL. But on the glass half full days, they're having a huge impact outside. He said the 34 billion dollars that IBM's paying is actually a bargain- >> It's a great deal! (laughing) >> for where they're going. But big announcements. RHEL 8, which had been almost five years in the works there. Some good advancements there. But the highlight for me this week really was OpenShift. We've been watching OpenShift since the early days, really pre-Kubernetes. It had a good vision and gained adoption in the marketplace, and was the open source choice for what we called PaaS back then. But, when Kubernetes came around, it really helped solidify where OpenShift was going. It is the delivery mechanism for containerization and container cluster management, and Red Hat has a leadership position in that space. I think for almost every customer that we talked to this week, John, OpenShift was the underpinning. >> John: Absolutely. >> You would expect that RHEL's underneath there, but OpenShift is the lever for digital transformation. And that was something that I really enjoyed talking about with DBS Bank from Singapore, and Delta, and UPS. We talked about their actual transformation journeys, from both the technology and the organizational standpoint, and OpenShift really was the lever to give them that push. >> You know, another thing, I know you've been looking at this and watching this for many, many years. There's certainly the evolution of open source, but we talked to Chris Wright earlier, and he was talking about the pace of change and how it really is incremental. And yet, if you're on the outside looking in, and you think, gosh, technology is just changing so fast, it's so crazy, it's so disruptive, but to hear it from Chris, not so. You don't go A to Z, you go A to B to C to D to D point one. (laughing) It takes time. And there's a patience almost, and a cadence, that has this slow revolution that I'm a little surprised at. I sensed, or got a sense of, a much more rapid pace of change, and that's not how the people on the inside see it. >> Yeah. Couple of comments back on that. Number one is we know how much rapid change there is going on, because if you looked at the Linux kernel or what's happening with Kubernetes and the open source, there's so much change going on there. There's the data point thrown out there that, you know, I forget, that 75% or 95% of all the data in the world was created in the last two years. Yet, only 2% of that is really usable and searchable and things like that. That's a lot of change. And the code base of Linux in the last two years, a third of the code is completely overhauled. This is technology that has been around for decades. But if you look at it, if you think about a company, one of the challenges that we had is, if they're making those incremental changes, and slowly looking at them, a lot of people from the outside would be like, oh, Red Hat, yeah that's that little Linux company, you know, that I'm familiar with, and it runs in lots of places there. When we came in six years ago, there was a big push by Red Hat to say, "We're much more than Linux."
They have their three pillars that we spent a lot of time going through, from the infrastructure layer to cloud native to automation and management. Lots of shows I go to, Ansible is all over the place. We talked about OpenShift 4, which is something that seems to be resonating. Red Hat takes a leadership position, not just in the communities and the foundations, but working with their customers to be a more trusted and deeper partner in what they're doing with digital transformation. There might have been little changes, but, you know, this is not the Red Hat that people would think of two years or five years ago, because a large percentage of Red Hat has changed. One last nugget from Chris Wright there: he spent a lot of time talking about AI. And some of these can become buzzwords in these environments, but he hit a nice, cogent message, with the punchline that machines enhance human intelligence, because these are really complex systems, distributed architectures, and we know that the people just can't keep up with all of the change, and the scope, and the scale that they need to handle. So software should be able to be helping me get my arms around it, as well as where it can automate and even take actions, as long as we're careful about how we do it. >> John: Sure. There's another point, at least, I want to pick your brain about, and it's really the power of presence. The fact that we have the Microsoft CEO on the stage. Everybody thought, well (mumbles) But we heard it from guest after guest after guest this week, saying how cool was that? How impressive was that? How monumental was that? And, you know, it's great to have that kind of opportunity, but the power of Nadella's presence here, it's unmistakable in the message that it has sent to this community. >> Yeah, you know, John, you could probably do a case study talking about culture and the power of culture, because, I talked about Red Hat's not the Red Hat that you know. Well, the Satya Nadella-led Microsoft is a very different Microsoft than before he was on board. Not only are they making great strides in, you know, we talk about SaaS and public cloud and the like, but from a partnership standpoint, in the Microsoft of old, you know, Linux and Red Hat were the enemy, Windows was the solution, and they were gonna bake everything into it. Well, Microsoft partnered with many more companies. Partnerships and ecosystem, a key message this week. We talked about Microsoft with Red Hat, but, you know, the announcement today surprised me a little bit, but when we think about it, not too much. OpenShift supported on VMware environments; VMware has, in that family of Dell, competitive solutions against OpenShift, and in virtualization, you know, Red Hat has RHV, the Red Hat Virtualization. >> John: Right, right, right. >> The old days of the lines in the swim lanes, as one of our guests talked about, really aren't there anymore. Customers are living in a heterogeneous, multi-cloud world, and the customers are gonna go and say, "You need to work together, or you're not gonna be there." >> Right, also we have Azure compatibility going on here. >> Stu: Yeah, deep, not just some tested, but deep integration. I can go to Azure and buy OpenShift. I mean, to say it's not just in the marketplace, but a deep integration. And yeah, there was a little poke, if our audience caught it, from Paul Cormier. He said, you know, Microsoft really understands enterprise.
That's why they're working tightly with us. Uh, there's a certain other large cloud provider that created Kubernetes, that has their own solution, that maybe doesn't understand enterprise as much and aren't working as closely with Red Hat as they might. So we'll see what response there is from them out there. Always, you know, we always love on theCUBE to, you know, the horse is on the track and where they're racing, but, you know, more and more all of our worlds are cross-pollinating. You know, the AI and AI Ops stuff. The software ecosystems because software does have this unifying factor that the API economy, and having all these things work together, more and more. If you don't, customers will go look for solutions that do provide the full end to end solution stuff they're looking for. >> All right, so we're, I've got a couple in mind as far as guests we've had on the show. And we saw them in action on the keynotes stage too. Anybody that jumps out at you, just like, wow, that was cool, that was, not that we, we love all of our children, right? (laughing) But every once in awhile, there's a story or two that does stand out. >> Yeah, so, it is so tough, you know. I loved, you know, the stories. John, I'm sure I'm going to ask you, you know, Mr. B and what he's doing with the children. >> John: Right, Franklin Middle School. >> And the hospitals with Dr. Ellen and the end of the brains. You know, those tech for good are phenomenal. For me, you know, the CIOs that we had on our first day of program. Delta was great and going through transformation, but, you know, our first guest that we had on, was DBS Bank in Singapore and- >> John: David Gledhill. >> He was so articulate and has such a good story about, I took outsourced environments. I didn't just bring it into my environment, say okay, IT can do it a little bit better, and I'll respond to business. No, no, we're going to total restructure the company. Not we're a software company. We're a technology company, and we're gonna learn from the Googles of the world and the like. And he said, We want to be considered there, you know, what was his term there? It was like, you know, bank less, uh, live more and bank less. I mean, what- >> Joyful banking, that was another of his. >> Joyful banking. You don't think of a financial institution as, you know, we want you to think less of the bank. You know, that's just a powerful statement. Total reorganization and, as we mentioned, of course, OpenShift, one of those levers underneath helping them to do that. >> Yeah, you mentioned Dr. Ellen Grant, Boston Children's Hospital, I think about that. She's in fetal neuroimaging and a Professor of Radiology at Harvard Medical School. The work they're doing in terms of diagnostics through imaging is spectacular. I thought about Robin Goldstone at the Livermore Laboratory, about our nuclear weapon monitoring and efficacy of our monitoring. >> Lawrence Livermore. So good. And John, talk about the diversity of our guests. We had expats from four different countries, phenomenal accents. A wonderful slate of brilliant women on the program. From the customer side, some of the award winners that you interviewed. The executives on the program. You know, Stefanie Chiras, always great, and Denise who were up on the keynotes stage. Denise with her 3D printed, new Red Hat logo earrings. Yeah, it was an, um- >> And a couple of old Yanks (laughing). Well, I enjoyed it, Stu. As always, great working with you, and we thank you for being with us as well. 
For now, we're gonna say so long. We're gonna see you at the next Red Hat Summit, I'm sure, 2020 in San Francisco. Might be a, I guess a slightly different company, but it might be the same old Red Hat too, but they're going to have 34 billion dollars behind them at that point and probably riding pretty high. That will do it for our CUBE coverage here from Boston. Thanks for much for joining us. For Stu Miniman, and our entire crew, have a good day. (funky music)

Published Date : May 9 2019

Paul Cormier, Red Hat | Red Hat Summit 2019


 

>> Announcer: Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2019. Brought to you by Red Hat. >> Well, good morning, welcome back to our live coverage here in Boston at the BCEC. We're at Red Hat Summit 2019, you're watching exclusive coverage here on theCUBE. This is day three of three great days here at the summit, Stu Miniman, John Walls, and we're joined now by Paul Cormier, who's the president of products and technologies at Red Hat. Good morning, Paul. >> Morning. How are you doing? >> I'm doing great. >> Great, so are we. >> A wonderful job on the keynote stage yesterday, and we're going to jump into that a little bit, but I wanted to run something by you here. A great man once said, every great achievement begins with a bold goal. >> I heard that. >> I'm looking at that man. So, one of the many statements that I thought really jumped out yesterday. Let's talk about that in terms of the Red Hat philosophy, what's happened with RHEL 8, where you've gone with OpenShift 4, and just how that is embedded in your mind, and how Red Hat goes about its business. >> Well, you know, we've been in the enterprise space for 17-plus years, and prior to that, Red Hat was basically through the retail channel. But first and foremost, Red Hat started as an open source company, that's where they started, not as an enterprise company. Once we decided, with the bold goal, that we're going to get this into the enterprise, that's when we really transformed into what you've maybe heard before out of my mouth: we're not an open source company, although everything we do is open source, we're an enterprise software company with an open source development model. That was kind of the beginning of the first bold goal: let's get Linux to the enterprise. And that's how we've thought about it from day one, let's take it one step at a time. As I said, get Linux in the enterprise, make RHEL the operating system in the enterprise. Now let's take on virtualization, first Xen, then KVM. And then, as that all happened, so much innovation happened around Linux that all these other pieces came: Hadoop, Kubernetes, all the other pieces. So we just kept growing with that, because it's all intertwined with Linux. One step at a time. >> So Paul, before we get off this, I want you to put a fine point on it for our audience, because you look out there: open source is not a community, it's lots of communities, and it's not one thing, it's many things out there. And today, people will look at certain companies: how do I create IP and monetize what we're doing? The project and the company are sometimes intertwined, and licensing models are changing. Red Hat has a very simple philosophy on it, and it's not something that's necessarily easily replicable. >> Yeah, the simple philosophy is, it's upstream first, that's our philosophy. Yes, we are a business, and certainly making our products successful is important, that's goal number one. But goal number zero, before that, is make the project successful. Our products can't be successful unless they're built on a successful project. And it's not something that we even think about, because it's just ingrained, it's in our DNA. I'll give you an example: even Kubernetes, we didn't start the project, Google started the project, but we knew, if we were going to incorporate that in a big way into our products, that we had to be
prominent in the community. So that's what we did first, and then it rolled out into the products. It's just ingrained, it's in the DNA. >> Yeah, so let's talk a little bit about Kubernetes and OpenShift. You've now got over a thousand customers, congratulations on that, and on OpenShift 4 we spent a bunch of time talking with the team. But let's start a little bit higher level, because there's dozens of Kubernetes options out there. People look at, is there interoperability between them? In the early days, customers would just spin their own pieces, and today every cloud provider has at least one option, if not multiple options, and there's all the independents. How does this play out? Where are we along the maturity, and how do all these pieces fit together, or do they? >> I mean, if you look at Kubernetes, here's the good news. The good news is open source has become so prominent everywhere that now, ourselves included, we make this mistake ourselves: we've confused projects with products. Kubernetes is a project, it's a development project, and we all talk about it like it's a product. It's the same thing with Linux. I'll give you an example with the Linux kernel, where all the commercial vendors and everyone else are in that same upstream development tree with the Linux kernel. But when the commercial guys like ourselves go to build a product, we make choices of which file systems we're going to support, which installers we're going to support, what we're going to do for management, what we're going to support for storage, and for many reasons we all make different decisions. That's why, at the end of the day, when we come down to our products, even though they're all completely open, RHEL is different from SUSE, which is different from Ubuntu, which is different from all the others. It's the same exact thing with Kubernetes. We all develop here, but now we bring that down into a platform like OpenShift. Kubernetes touches user-space APIs, it touches kernel APIs, and so unless you integrate those, and they all move forward in the lifecycle of that platform at the same time, we get out of sync with each other. That's one of the reasons why it's a product, and they don't necessarily work across each other with all the other products. It's the same exact principle that made RHEL, and the same exact principle of how Linux works. >> Right, so what advice do you give to customers as they look at this? Because they're like, oh wait, there's now Azure and OpenShift, this jointly offered solution, but do I use that, or do I use the native AKS solution out there? You've got a partnership with AWS. Where does OpenShift versus Anthos on Google fit? It definitely is a little bit fragmented. >> Well, the other thing that's happened around the cloud, one of the things that happened early in the cloud, a lot of the cloud providers said every application is going to the cloud tomorrow. I think that was ten years ago, and the last number I saw, we're about 20 percent there. And that's great, we think that's great, but customers still have on-premise applications, and they have them running on-premise, either bare metal or virtual machine, they have their own private clouds in many cases. And now they want to go across clouds. Every customer I talk to, and it's not just for lock-in, that's definitely an issue, they want to go across clouds because this cloud provider might have a better service
here than that cloud provider, and vice versa. So what customers want is one common operating environment, both for the application developers and for the operators. They can't afford to have five different silos, because just like the example I used with Linux distributions being different, every one of these Kubernetes distributions is different. And so Anthos, for example: if you're going to have all your applications, including bare metal applications, on Google's Linux, then that's good, because your operators have one operating environment and your developers have one development environment. But that's impractical, and that's why that's not going to work. I mean, the reason why I think Microsoft is one of our best partners here is they understand this, which is why they've embraced OpenShift so deeply, even though they have AKS in their stable. And the reason why I think they understand this is because they, like us, have been in the enterprise space for a long time. This is how enterprise computing works, and I think that's the model our customers have no choice but to deploy; they just can't afford to have five different operating environments. It's like the UNIX days all over again, when you had one vertical stack, and customers started to roll out a common platform. That's why RHEL succeeded, because we gave them that commonality, and they couldn't afford five different silos to try to manage and develop their applications on. >> Is there a different rhythm, or a unique rhythm, to the open source community, in terms of development, in terms of new products, that might be a little different than older models? Because if there's an interest that focuses maybe in one area, and the interest, or the momentum, shifts over to a different direction, maybe this standard or this old way kind of loses a little bit of its impetus, or its force. That creates decision challenges on the customer side. >> Absolutely, and that's why, as I said, even with Kubernetes we didn't jump in full force right away. We worked with many container orchestration technologies out there, most of which, besides Kubernetes, have gone by the wayside a bit now. We sort of look at that and see where this plays out; we get involved, but we also try to make the best technical decision as well. Kubernetes now has so much momentum within open source, and because it's got so much momentum, that's where the innovation is happening. And at the end of the day, customers, even though they have confused many projects with products, still want the right technology to solve their business problems. So Kubernetes has so much momentum around it, that's where the innovation is happening, so that's the big part of the platform right now. And I think the other thing that a lot of people who try to jump into this space miss is, if you're going to base your enterprise product on an upstream project, you'd better have good influence in that upstream project, because when your customers ask you to address an issue, or take it in a direction, or help take it in a direction, if you don't have that influence, you can't satisfy your customers. So we learned very early on that upstream is not a bolt-on for us, it's an integral part that starts even before the product
starts. >> So Paul, I've heard many people often call Red Hat the Switzerland of IT, given where you sit in the community, and for years at this show we've interviewed all of the hardware players and everything like that... >> Sorry, sorry, I'm taking important calls. >> No worries, the live audience can wait. We'll show you the clip of John Cleese when we got interrupted on a program once. >> I think it was my admin telling me I needed to come here. >> You're good. But so, with Red Hat starting as that Switzerland, when I look at the multi-cloud world, you've got an interesting combination. Satya Nadella up on stage is not something that we would have thought of five years ago. VMware supporting OpenShift, announced today, is not something that many people would look at without thinking, oh geez, that seems surprising to me, because we have fights over virtualization or various pieces of the stack. What do you see in kind of the software and multi-cloud world today that's maybe a little different than it was five or ten years ago? >> I think, to VMware's credit, they're trying to satisfy their customers, and their customers are saying, I want OpenShift, and so we work with them, trying to satisfy our customers. As to the Microsoft arrangement, as you guys probably well know, we weren't the best of friends five, six, seven, eight years ago, and I think Satya said it on stage: our customers got us together. Literally, we had a set of big customers that almost took us into a room and said, you guys need to talk. And frankly, I think they're one of our best partners right now. I'm not sure it could have happened without Satya, but they're one of our best partners because we're both interested in satisfying our customers. And as I said, I think Microsoft really understands the enterprise world, and that's why we're going in a common direction. When we get in the room with their engineers, we almost complete each other's sentences when we start talking about what we need to do. >> There was an announcement earlier in the week of a global economic study done by IDC, and it came up with this huge number, right, a 10 trillion dollar impact that Linux is having globally. Just curious about your perspective on that, what kind of a statement that is, and the dollar values that are achieved, or the incremental values that are achieved, in terms of applying these technologies. >> I think it's a couple of things. I think it's a statement that open source is the innovation model going forward, period, end of story, full stop. And I think, as I said in my keynote yesterday, leading up to the biggest acquisition ever for a software company, not an open source software company, a software company that happened to be an open source software company, I don't think there's any doubt that open source has won here today, and it's because of the pace of innovation. I mean, we've been at RHEL for 17-plus years, and we probably spent the first third or so of those 17-plus years trying to convince the world that Linux was secure, and it was stable, and it was ready for the enterprise. Once we got through that hurdle, it was just off to the races from there. And Kubernetes, as I said yesterday, containers came on the scene, although they've been here technically for a long time, they came on the scene in '14, Kubernetes in '15. It's only 2019, it's
really not that far downstream, where, as you said, we've got a thousand commercial customers, and the keynote this morning was talking about some of the use cases that we're solving with OpenShift. I mean, Boston Children's Hospital, it's just unbelievable what they can do in a matter of a week that used to take them a matter of a month to do. That's because of the innovation model. >> We had Dr. Ellen Grant on yesterday, by the way, so if you haven't watched that yet, go back to theCUBE.net and check that interview out. Fascinating kind of customer conversation we've had about transformation, but I want to get your take on the only constant in our industry, which is change. I wrote, right after the announcement of the acquisition and meeting with you, that the one thing Red Hat has actually built itself for is to deal with massive amounts of change. You could tell better than most how fast the Linux kernel is changing, a third of the code has changed in the last two years, and Kubernetes is actually not as many lines of code as Linux, but it's massive amounts of change. I heard with RHEL 8 it was about five years of development, and I heard the pace going forward will only get faster: every three years you're going to have a major release, every six months a minor release. So how do you get the team and the community and all these things ever keeping up, and even turning it up to 11? >> That's probably one of the biggest parts of our job. Our customers can't deal with that change. Frankly, I think in the beginning of OpenStack, one of the mistakes that we as a community made for our customers was there were some vendors out there trying to tell customers, you need to stay close to the head, to the upstream head, you need to stay close to the head, and we really all tried to get things out in six months. That's great to start to evaluate innovation and what you can do with that; it's not great for necessarily running a stable business on. And that's what I think our job is: to help our customers consume open-source-developed technologies in a way that they can continue to run their business. And that was the goal, that was the audacious goal of RHEL from the beginning. The model of RHEL is, it's not necessarily about the bits, because they're free; it's about the life cycle of that, and how we can help our customers consume that, and that's what we do, frankly, to the core. >> Well, just to follow up on that, if you ask your customer, hey, you're using Azure, what version are you using, they're like, Microsoft patches and updates that constantly, as opposed to the traditional Patch Tuesday in Windows. So we seem to be closing that gap a little, but it's challenging between the stuff I control and the stuff that I consume. >> Well, look at even OpenShift 4. I know Ashesh was on yesterday talking about that, but we used a lot of the great technology we got from CoreOS to start to bring that model, even on-premise if you so choose, with OpenShift. Because there are so many of the components that are intertwined with each other: you've got Kubernetes talking to user space, user space talking to the kernel, talking to storage, talking to networking. So now automating those updates for our customers is what they want, because that's how they consume it in the cloud.
I remember when we first started RHEL, we used to put the features on the side of the box, and the first thing was what version of the kernel it was. That quickly went away. They don't want to have to worry about that, because they don't have the expertise to do it themselves. >> Well, congratulations, Paul, great week. >> Thank you very much. >> Again, well done on the keynote stage yesterday, fascinating stuff this morning too, so well done on the program side, and we wish you luck down the road. And don't forget to check your voicemail. >> No, I will, thank you guys very much. Might be important. >> All right, always a pleasure. Back with more here from Red Hat Summit 2019. You're watching us live here on theCUBE. (upbeat music)
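One concrete way to see the automated-update model Cormier describes for OpenShift 4 is the ClusterVersion resource that the cluster's update machinery maintains. The sketch below is illustrative rather than definitive: it assumes a kubeconfig pointing at an OpenShift 4 cluster and uses the Kubernetes Python client's CustomObjectsApi to read that resource (group config.openshift.io, name "version"); the status fields printed are the commonly documented ones.

# Sketch: read the ClusterVersion object that OpenShift 4's over-the-air update
# machinery maintains. Assumes a kubeconfig that points at an OpenShift 4 cluster.
from kubernetes import client, config

def show_cluster_version() -> None:
    config.load_kube_config()  # or config.load_incluster_config() when run in a pod
    custom = client.CustomObjectsApi()
    cv = custom.get_cluster_custom_object(
        group="config.openshift.io",
        version="v1",
        plural="clusterversions",
        name="version",
    )
    status = cv.get("status", {})
    print("desired version:", status.get("desired", {}).get("version"))
    for update in status.get("availableUpdates") or []:
        print("available update:", update.get("version"))

if __name__ == "__main__":
    show_cluster_version()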

Published Date : May 9 2019
